# Stochastic Fluctuations in the Spectrophotometric Properties of Star Clusters
## 1. Introduction
Most population synthesis codes (those based on Monte Carlo simulations excepted) predict mean properties of stellar populations as a function of fundamental model parameters such as the stellar initial mass function (IMF) and the star formation history (SFH). However, for a given model the number of stars in each area of the HR diagram is a statistical variable obeying Poisson statistics. The resulting intrinsic dispersion of integrated spectrophotometric properties is observed, both as pixel-to-pixel fluctuations in otherwise uniform objects (“surface brightness fluctuations”, Tonry & Schneider 1988) and as cluster-to-cluster variations among cluster samples restricted to similar SFHs (Ferraro et al. 1995, Girardi et al. 1995). Clusters are tempting targets for tests and calibrations of population synthesis predictions because of the coeval nature of their stars. In this context, it is important to remember that the properties of individual clusters are representative of the mean properties only in the limit of large star numbers, i.e. large cluster masses (assuming a universal IMF). Discussing how large these masses need to be in practice is the purpose of this paper.
All results presented here are based on the population synthesis code Pégase (Fioc & Rocca-Volmerange 2000) and extensions thereof. They assume solar metallicity and a Salpeter IMF extending from 0.1 to 120 $M_\odot$.
## 2. Luminosity fluctuations
The variance of the luminosity $L$ of a population (or of $L_\lambda$ to allow for wavelength dependence) is proportional to $\sum_i n_i L_i^2$, where the sum extends over all luminosities $L_i$ of the HR diagram, and where $n_i$ is the corresponding expectation number of stars (in practice, the luminosities are binned and the rms luminosity of each bin must be used; the formula assumes statistical independence of the bins, which is justified when many bins of relatively large $n_i$ but low $L_i$ contribute a negligible amount to the total luminosity or variance). Intrinsically luminous stars, which already contribute significantly to the integrated luminosity despite their relatively small numbers, contribute even more exclusively to the variance (Figs. 1$a,b$). Figs. 1$c,d$ and $e$ illustrate how the strongest contributors to the flux density depend on wavelength, using a 1 Gyr old stellar population as an example. Fig. 1$f$ shows the resulting mean spectral distribution of the flux and of the relative rms flux deviations from this mean, $\sigma_L/L$, for a population of $10^6\,M_\odot$ of stars.
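As a simple numerical illustration (not from the paper; the bin values below are invented), a handful of luminous bins can dominate $\sigma_L/L$ even when faint stars dominate the star counts:

```python
import numpy as np

# Toy binned HR diagram: n[i] is the expectation number of stars in bin i,
# L[i] the (rms) luminosity per star in that bin (arbitrary units).
n = np.array([1.0e5, 1.0e4, 30.0, 3.0])   # many faint stars, few bright giants
L = np.array([0.01,  0.1,   50.0, 300.0])

mean_L = np.sum(n * L)            # expectation value of the total luminosity
var_L  = np.sum(n * L**2)         # Poisson variance: sum over n_i * L_i^2
print(f"sigma_L / L = {np.sqrt(var_L) / mean_L:.3f}")   # ~0.13 here
```

In this toy example the two brightest bins contribute about half of the light but essentially all of the variance.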
At all but the youngest ages, the most luminous of the red stars determine the bolometric as well as the near-IR luminosity fluctuations. Turn-off stars also contribute to var($L_{\mathrm{bol}}$) during the first few $10^7$ yr; at all times they are responsible for the optical fluctuations together with horizontal branch and red clump stars.
The relative fluctuations around the mean flux, $\sigma_L/L$, are large when the subpopulation that contributes most of the variance consists of stars of large intrinsic luminosity and when the total number of these stars is small (i.e. one is not in the large-$n_i$ limit considered by Tonry & Schneider 1988). Consider the simplified case of a population of $N$ stars of which a mean number $\alpha N$ belongs to the subpopulation of interest (at the wavelength of interest). Let $l_s$ be the intrinsic individual luminosity of each subpopulation star, and $l_o$ that of each other star: $L=\alpha N l_s+(1-\alpha)N l_o=Nl$. Assuming Poisson statistics for the number of stars in the subpopulation ($\alpha N$), one gets
$$\frac{\sigma_L}{L}=\frac{\sqrt{\alpha N}\,(l_s-l_o)}{\alpha N(l_s-l_o)+Nl_o}=\frac{1}{\sqrt{N}}\,\frac{\sqrt{\alpha}\,(l_s-l_o)}{l}$$
$\alpha$, $l_s$ and $l_o$ are given by population synthesis calculations. Other assumptions (e.g. fixing quantities other than the total number of stars $N$) lead to slightly different formulae with similar behaviour.
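This closed form can be checked against a direct Poisson simulation; the sketch below uses arbitrary illustrative values of $N$, $\alpha$, $l_s$ and $l_o$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values: N stars, a rare bright subpopulation of mean alpha*N.
N, alpha, l_s, l_o = 10_000, 1.0e-3, 1.0e3, 1.0
l = alpha * l_s + (1 - alpha) * l_o        # mean luminosity per star

# Closed-form prediction for sigma_L / L from the text
pred = np.sqrt(alpha) * (l_s - l_o) / (np.sqrt(N) * l)

# Monte Carlo: only the subpopulation count fluctuates (Poisson), as assumed
k = rng.poisson(alpha * N, size=200_000)
L_tot = k * l_s + (N - k) * l_o
print(f"predicted {pred:.4f}  vs  simulated {L_tot.std() / (N * l):.4f}")
```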
Table 1 lists the masses required to ensure that the relative luminosity fluctuations are below 10%, with the IMF of Fig. 1. The luminosity fluctuations directly translate into stochastic fluctuations of the mass-to-light ratio ($M/L$); note that differences in the lower IMF could add significantly to the spread in $M/L$ in cluster samples.
## 3. Fluctuations in colours and spectrophotometric indices
The statistics of flux ratios are non-trivial. The two fluxes that define colours or other spectrophotometric indices are in general neither independent nor Gaussian (the Gaussian approximation can be used in the large number limit, in which case however the fluctuations are too small to require consideration).
As a consequence of Poisson statistics, the most likely HR diagrams for a given stellar population model underpopulate areas where the expectation number of stars is smaller than or of the order of 1. 80% probability intervals, defined so that the statistical variable (e.g. the number of stars of interest) has probabilities of 10% to lie outside the interval on either side, are centered on a number smaller than the expectation value; their size is not equal to $2\times 1.28\,\sigma$, as it would be for Gaussians. In practice, the stars with high $L$ and small expectation numbers are red, and the most likely colours will thus be bluer than the mean. This explains the behaviour of the Monte Carlo simulations of Santos & Frogel (1997), and in particular their Fig. 5: in a population of $10^3$ stars, the number of post-main-sequence stars evolves with time from about 10 at 50 Myr to about 100 at 1 Gyr, and the expected number of luminous cool giants (red supergiants or AGB stars) is of the order of one, resulting in most common J-K colours significantly below the large cluster limit.
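The asymmetry of Poisson probability intervals is easy to verify numerically; in this sketch the expectation value $\lambda=2.5$ for the bright-star count is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.stats import poisson

lam = 2.5                                   # expected number of bright red stars
lo, hi = poisson.ppf(0.10, lam), poisson.ppf(0.90, lam)
half = 1.28 * np.sqrt(lam)                  # Gaussian 80% half-width, for comparison
mode = int(np.argmax(poisson.pmf(np.arange(50), lam)))

print(f"Poisson 80% interval: [{lo:.0f}, {hi:.0f}]")
print(f"Gaussian equivalent:  [{lam - half:.2f}, {lam + half:.2f}]")
print(f"most likely count: {mode} (below the mean {lam})")
```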
When the baseline between the two passbands defining a spectrophotometric index is small, the corresponding fluxes in most cases originate from the same population and can be considered 100% correlated (exceptions are indices such as the strength of optical or near-IR emission lines, for which the line flux is dominated, and the continuum contaminated, by recombination radiation due to the presence of hot stars). Fig. 2 shows the 80% probability limits obtained with this assumption for three commonly used indices (gas recombination radiation is not included). As expected, the behaviour of the probability intervals is complex. The stochastic fluctuations of the CO index for instance are largest when red supergiants exist, because the latter have particularly strong CO bands; at later stages, fluctuations in the numbers of the current most luminous red stars matter less, since red giants of various ages and luminosities have relatively similar CO bands.
The mass required for a meaningful direct comparison between predicted mean properties and those of a real stellar population depends on the scientific application. H<sub>2</sub>O bands are strongest in the stars of the upper AGB (TP-AGB), which are most important between $10^8$ and a few $10^9$ yr. These stars are still poorly understood. If the purpose of a cluster observation is to test the effective temperature scale of TP-AGB star spectra (e.g. Lançon et al. 1999), Fig. 2$d$ shows that $10^4\,M_\odot$ of stars (i.e. typical LMC cluster masses) are only marginally sufficient. More appropriate clusters should have $10^5\,M_\odot$ or more.
## 4. Conclusions
Stochastic fluctuations due to small numbers of bright stars need to be considered when stellar populations are compared to population synthesis models, be it using star counts or using integrated properties. The most probable spectrophotometric properties of small clusters usually differ from their expectation value (i.e. the properties in the large cluster limit), leading to systematic effects in the determination of age, metallicity or other fundamental parameters. The adequate definition of a massive cluster, for which these effects would be negligible, depends strongly on the spectrophotometric property studied and on the star formation history. The cluster populations formed in galaxy mergers, thoroughly discussed during this workshop and known to contain objects of more than $10^6\,M_\odot$, are becoming accessible to spectrographs on large telescopes and clearly represent important targets for population synthesis studies of the near future (e.g. Mouhcine & Lançon, this volume).
### Acknowledgments.
We thank J.L. Vergely and D. Kunth for motivating discussions on stochastic fluctuations and their consequences.
## References
Ferraro, F. R., Fusi Pecci, F., Testa, V., et al. 1995, MNRAS, 272, 391
Fioc, M., & Rocca-Volmerange, B. 2000, in preparation, http://www.iap.fr/users/fioc/PEGASE.html
Girardi, L., Chiosi, C., Bertelli, G., & Bressan, A. 1995, A&A, 298, 87
Lançon, A. 1999, in IAU Symp. 191, Asymptotic Giant Branch Stars, ed. T. Le Bertre, A. Lèbre & C. Waelkens (San Francisco: ASP), 579
Lançon, A., Mouhcine, M., Fioc, M., & Silva, D. 1999, A&A, 344, L21
Santos, J. F. C., Jr., & Frogel, J. A. 1997, ApJ, 479, 764
Tonry, J., & Schneider, D. P. 1988, AJ, 96, 807
## Discussion
J. Gallagher: WR stars being intrinsically rare objects, how can there be so many WR clusters?
A. Lançon: Actually, WR stars aren’t that rare, at least at solar metallicity. According to Schaerer & Conti (1998, ApJ, 497, 618), their numbers are of the same order as those of O stars over significant starburst age ranges and, with a solar neighbourhood initial mass function, one finds about one O star for every $10^3\,M_\odot$ of newly formed stars (e.g. Leitherer & Heckman 1995, ApJS, 96, 9). Stochastic fluctuations must be considered in individual young clusters of less than $10^5\,M_\odot$, in particular when measuring ratios of different types of WR stars. They will average out over the many clusters of a WR galaxy.
S. Portegies Zwart: Current dynamical simulations of clusters usually don’t exceed 10<sup>4</sup> stars. Are their predictions useless?
A. Lançon: The good thing about numerical simulations is that they keep you aware of the number of stars you are dealing with, a point that one easily forgets when looking at the integrated photometric properties of a population from a distance! Maybe one should focus on the properties that don’t depend so much on small numbers of bright stars first; by the time we will have tested these predictions thoroughly, it is likely that computers will have improved enough to increase the sizes of simulations…
# Minimal surfaces and Reggeization in the AdS/CFT correspondence
## 1 Introduction
The theoretical calculation from “first principles” of high energy scattering amplitudes in the so-called “soft” regime of QCD is among the oldest and yet unsolved problems of strong interaction physics. The main reason is that it requires a good understanding of 4-dimensional gauge field theories at strong coupling, which we still lack. In view of the recent developments of the AdS/CFT correspondence it is thus natural to address this problem in the new setting proposed in this way. An exact correspondence for QCD is not yet known; however, useful information can be obtained from known realizations for confining theories.
We would like to discuss relevant physical properties of scattering amplitudes at high energy expected from the S-Matrix theory of strong interactions. In particular, Reggeization of scattering amplitudes is expected to occur, i.e. high-energy two-body amplitudes behaving as $A(s,t)=s^{\alpha(t)}\times(\mathrm{prefactors})$, where $s,t$ are the well-known Mandelstam variables. $\alpha(t)$ is the Regge trajectory corresponding to singularities of partial waves at $j=\alpha(t)$ in the $t$-channel. Unitarity, analyticity and crossing relations implied by the S-Matrix theory impose constraints on $\alpha(t)$. In particular the Froissart bound implies that $\alpha(t{=}0)\le 1$ and the prefactors of the amplitude are at most like $\log^2 s$. Note that the Froissart bound assumes an underlying confining field theory, or at least a mass gap, since the scale of the bound is fixed by the particle of smallest mass (e.g. the pion).
In , we considered large impact parameter and high energy scattering of colourless states for $`SU(N)`$ supersymmetric gauge theories in the strong coupling, large $`N`$ limit using the AdS/CFT correspondence. The gauge theory scattering amplitude is linked with a correlation function of tilted Wilson loops elongated along the light-cone directions . In the AdS/CFT correspondence, these correlation functions are related to minimal surfaces in the $`AdS_5`$ geometry which have the Wilson loops as boundaries. The case considered in our previous paper Ref. involved disjoint minimal surfaces and thus the necessity of including supergravity field exchanges between the two corresponding string worldsheets. The dominant contributions were identified and all correspond to real phase shifts, i.e. purely elastic scattering. In particular, the contribution of the bulk graviton gives an unexpected “gravity-like” $`s^1`$ behaviour of the gauge theory phase shift in a specific range of energies and (very) large impact parameters.
The main but stringent difficulty which limited the scope of Ref.  was that the weak field approximation in supergravity was shown to break down unless the impact parameter $L$ was sufficiently large, namely $\frac{L}{a}\gtrsim s^{2/7}$, where $a$ is the transverse extension of the Wilson loop. If this condition is not met, the produced gravitational field in the dual AdS theory becomes strong, preventing perturbative calculations in this background.
We will concentrate on a situation where the difficulty with supergravity field exchanges does not arise, since there exists a single connected minimal surface which gives the dominant contribution to the scattering amplitude in the strong coupling regime, i.e. when $\alpha'\to 0$. This will allow us to extend our study to small impact parameters, where inelastic channels are expected to play an important rôle.
In this approach we will start by considering the correlation function of two Wilson lines elongated along the two light-cone directions, a configuration which can be used for the description of high-energy quark-quark or quark-antiquark amplitudes in gauge theories . The rôle of the quarks in the AdS/CFT correspondence will be played, as in , by the massive $W$ bosons arising from the breaking $U(N+1)\to U(N)\times U(1)$. The case of IR finite correlators of Wilson loops will be dealt with in a second stage.
The plan of our paper is as follows: in section 2, we will analyze the correlation function of Wilson lines leading to an evaluation of $q\overline{q}$ and $qq$ scattering amplitudes at high energy. This will be done in the context of the black hole geometry in AdS space (static Wilson loops were first studied in this background in ), where one can use a flat metric as a good approximation scheme near the horizon. We analyze the factorizable structure of the IR divergences and isolate a cut-off independent inelastic amplitude leading to reggeization. In section 3, we consider the so-called “conformal” case of the AdS/CFT correspondence for $\mathcal{N}=4$ supersymmetric $SU(N)$ gauge theory, where the $AdS_5$ metric gives rise to a different minimal surface solution. The problem of the cancellation of the infra-red divergences is analyzed by considering Wilson loop correlators in section 4, leading to the (approximate) derivation of scattering amplitudes between colourless states, while the conclusions and open problems are pointed out in the final section.
## 2 Wilson lines and minimal surfaces in “quasi-flat” geometry
Let us start by defining an appropriate gauge theory observable for $q\overline{q}$ scattering amplitudes $A(s,t)$. It is convenient to pass from transverse momentum $t=-q^2$ to impact parameter space
$$\frac{1}{s}A(s,t)=\frac{i}{2\pi}\int d^2l\; e^{iq\cdot l}\,\tilde{A}(s,l)$$
(1)
where $l$ is the 2-dimensional impact parameter (in the following we will denote its modulus by $L$), and $\tilde{A}$ is the amplitude in impact parameter space.
In the eikonal approximation the impact parameter space amplitude for $q\overline{q}$ scattering is given by a correlation function of two Wilson lines which follow the classical straight-line quark trajectories $W_1:\ x_1^\mu=p_1^\mu\tau$ and $W_2:\ x_2^\mu=x_\perp^\mu+p_2^\mu\tau$, with $|x_\perp|=L$, see Fig. 1. The IR cut-off will correspond to a fixed temporal extent of the lines, $-T<\tau<+T$.
The AdS/CFT correspondence gives a recipe for calculating this correlation function through
$$\langle W_1 W_2\rangle = \tilde{A}(s,l)=e^{-\frac{1}{2\pi\alpha'}A_{minimal}}$$
(2)
where $\langle W_1W_2\rangle$ is the Wilson line correlator (the free propagation of the $q$ and $\overline{q}$ states is not included in the correlator, which is thus implicitly normalized by $1/\langle W_1\rangle\langle W_2\rangle$), $\alpha'=1/\sqrt{2g_{YM}^2N}$ in units of the AdS radius, and $A_{minimal}$ is the area of the minimal surface in the appropriate background geometry (e.g. $AdS_5\times S^5$ for the conformal $\mathcal{N}=4$ SYM, an $AdS$ black hole among other geometries for confining theories) bounded by the Wilson line segments limited by the cut-off $T$. A different approach to the minimal surface problem in the conformal $AdS_5$ was considered in , which concentrated on the elastic part of the amplitude.
Since the disjoint contour formed by the two Wilson line segments is not closed, the procedure for finding a minimal surface is ambiguous. We will adopt a prescription for finding the minimal surface for infinitely long lines and then truncating it to a finite temporal extent parameterized by the IR cutoff $`T`$. This implicitly consists of forming a “big” Wilson loop closed at large temporal distance by curves drawn on the infinite minimal surface.
In turn, this procedure defines the appropriate colour decomposition of the associated amplitude. Using the well known colour decomposition $t_{ij}^a t_{kl}^a=-\frac{1}{2N}\delta_{ij}\delta_{kl}+\frac{1}{2}\delta_{il}\delta_{jk}$, we have
$$\tilde{A}(s,l)\sim N\left\{\tilde{A}_0(s,l)+\frac{1}{2}\tilde{A}_{N^2-1}(s,l)\right\}$$
(3)
where $\tilde{A}_0$ (resp. $\tilde{A}_{N^2-1}$) are the amplitudes in the singlet (resp. adjoint) representations.
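The colour decomposition above can be checked numerically; the sketch below verifies the identity for $SU(2)$, where the generators are simply $t^a=\sigma^a/2$ (for general $N$ one would use the $N^2-1$ generators of $SU(N)$):

```python
import numpy as np

# Pauli matrices; t^a = sigma^a / 2 are the SU(2) generators (N = 2).
sigma = [np.array(m, dtype=complex) for m in
         ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
t = [s / 2 for s in sigma]
N = 2

lhs = sum(np.einsum('ij,kl->ijkl', a, a) for a in t)   # t^a_ij t^a_kl, summed over a
d = np.eye(N)
rhs = (-1 / (2 * N)) * np.einsum('ij,kl->ijkl', d, d) \
      + 0.5 * np.einsum('il,jk->ijkl', d, d)
print(np.allclose(lhs, rhs))   # True: the Fierz identity holds
```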
Using the same strategy as in our first paper , we will perform the calculation with euclidean signature for Wilson lines in the boundary $\mathbb{R}^4$ forming a relative angle $\theta$ in the longitudinal plane, and then we will make an analytical continuation to Minkowski space by rotating the euclidean time coordinate clockwise and the angle anticlockwise (see  in this context):
$$\theta \to -i\chi \simeq -i\log\frac{s}{m^2}\,,\qquad T \to iT.$$
(4)
Note that a priori there is an ambiguity in the analytical continuation, depending on the precise choice of the path. This phenomenon did not appear in the context of large impact parameter near-forward scattering discussed in , since there the $WW$ correlation function had only simple poles in the complex $\theta$ plane. In the case considered in this paper the analyticity structure contains branch cuts in the complex plane which have to be taken into account.
### $`AdS`$ black hole solution and its flat space approximation
In , a proposal was made that a confining gauge theory is dual to string theory in an $AdS$ black hole (BH) background, the relevant part of which can be written as
$$ds_{BH}^2=\frac{16}{9}\,\frac{1}{f(z)}\,\frac{dz^2}{z^2}+\frac{\eta_{\mu\nu}dx^\mu dx^\nu}{z^2}+\ldots$$
(5)
where $f(z)=z^{2/3}(1-(z/R_0)^4)$ and $R_0$ is the position of the horizon (compared to standard coordinates we used $U=z^{-4/3}$ and $U_T=R_0^{-4/3}$). Although it was later found that the $S^1$ KK states do not strictly decouple in the interesting limits , we will use this background to study the interplay between the confining nature of the gauge theory and its reggeization properties. The qualitative arguments and approximations should in fact be generic for most confining backgrounds, as already discussed in Ref. . (Two other geometries for (supersymmetric) confining theories have been discussed recently . They have the property that for small $z$, i.e. close to the boundary, the geometry looks like $AdS_5\times S^5$ (in , up to logarithmic corrections related to asymptotic freedom), giving a coulombic $q\overline{q}$ potential. For large $z$ the geometry is effectively flat. In all cases there is a scale, similar to $R_0$ above, which marks the transition between the small $z$ and large $z$ regimes.)
In order to calculate the scattering amplitude, we have to evaluate the correlation function (2). Therefore we put the two tilted lines depicted in Fig. 1 on the boundary at $`z=0`$. Next we have to find the minimal surface in the appropriate geometry which has the two lines as its boundaries. The relative angle (tilt) in the $`t`$-$`y`$ plane and the separation in the transverse direction $`x`$ (impact parameter) therefore define the boundary conditions for the geodesic equations for the string.
As is well known for the Plateau problem of minimal surfaces, the boundary conditions determine the solutions. Although an exact solution for the minimal surface spanned by the tilted Wilson lines is unknown for the metric (5), the properties of the black hole (BH) geometry allow for quite a good approximation scheme.
Two salient features of the metric (5) are (i) the standard AdS prefactor $`1/z^2`$ close to the boundary ($`z=0`$), (ii) the existence of a horizon which limits from above the values of $`z`$. A consequence of (i) is that it is most efficient for a minimal surface to perform the “twisting” between the two Wilson lines as far away from the boundary as possible. Property (ii) effectively induces this twisting to occur near the horizon as we shall show below.
The appropriate minimal surface in the BH geometry will look as follows. Due to property (i), the minimal surface between well separated lines rises “vertically” in the $z$ direction up to the horizon, without sizable motion in the other $\mathbb{R}^4$ coordinates (see a schematic representation in Fig. 2). The metric at the horizon is effectively flat
$$ds_{horizon}^2\simeq\frac{1}{R_0^2}\,\eta_{\mu\nu}dx^\mu dx^\nu,$$
(6)
and the motion in the $`z`$ direction is “frozen out”. Now near the horizon, following property (ii), the minimal surface performs the “twisting” (not displayed in Fig. 2) corresponding to the tilt angle $`\theta `$ between the initial Wilson lines. At this stage we thus have to find a minimal surface between the lines at an angle $`\theta `$ in the flat space metric (6). Finally the surface falls off again vertically towards the boundary. The area of the “vertical” pieces is removed by the standard subtractions , so the resulting area which enters the formula for the amplitude (2) may be approximated by the area of the “flat space” piece.
We will now substantiate this intuitive picture with a more quantitative study of the geodesic equation for the string. Let us determine under what conditions the minimal surface spanned by the two tilted Wilson lines is indeed predominantly flat and concentrated near the horizon following the general line of discussion of .
The minimal surface equations follow from the Nambu-Goto action:
$$S=\frac{1}{2\pi\alpha'}\int_{-T}^{T}d\tau\int_{-l(\tau)/2}^{l(\tau)/2}d\sigma\,\sqrt{\det h_{ab}},$$
(7)
where the induced metric on the worldsheet is
$$h_{ab}\equiv G_{ij}\,\frac{\partial X^i(\sigma,\tau)}{\partial v^a}\,\frac{\partial X^j(\sigma,\tau)}{\partial v^b}.$$
(8)
$X^i$ stands for the general coordinates $(z,x^\mu)$ in (5), $G_{ij}$ is the background metric, $v^0\equiv\sigma$, $v^1\equiv\tau$, and $l(\tau)=\sqrt{L^2+\theta^2\tau^2}$ is the euclidean distance between points on the two Wilson lines with the same value of the time coordinate $\tau$.
As a first remark we note that using the background metric (5) the terms in the induced metric $`h_{\sigma \sigma }`$ corresponding to the twisting are of the form
$$\frac{1}{z^2}\left[\left(\frac{\partial y}{\partial\sigma}\right)^2+\left(\frac{\partial t}{\partial\sigma}\right)^2\right].$$
(9)
Hence, near the boundary ($z\to 0$), the minimization will not change noticeably the twist angle. Thus the boundary conditions are “frozen” and transported to the vicinity of the horizon.
For further discussion we shall make an approximation (similar to ) of neglecting explicit $`\tau `$ dependence in the Euler-Lagrange equations following from (7) and leaving it only in the implicit dependence on the boundary conditions through $`l(\tau )`$. Within this approximation the estimate of , made for the case of the static $`q\overline{q}`$ potential may be directly applied to our problem.
In reference , a distance $d$ is defined, for all metrics giving confinement, which measures the transverse distance (on the boundary) over which the string worldsheet significantly deviates from being flat. In all cases the ratio $d/l(\tau)\to 0$ when $l(\tau)\to\infty$. Depending on the confining metric considered, $d$ behaves as a logarithm or as a power of $l(\tau)$ smaller than one. It is interesting to note that the condition $d/l(\tau)\ll 1$ leads to a lower bound on the impact parameter $L$, since the condition is most restrictive for the smallest value of $l(\tau)$, which is equal to $L$.
The precise dependence of $d(l(\tau))$ on the horizon scale $R_0$ depends on the metric considered. For instance for the metric (5), rescaling arguments lead to a dependence $d(l(\tau))\sim R_0^{2/3}\log l(\tau)$. Therefore as long as the impact parameter is large with respect to $R_0$ the approximations considered in this section should be valid.
However, it may of course happen that the impact parameter distance between the two Wilson lines becomes much smaller than $`R_0.`$ In this case (see Fig. 2) the minimal surface problem becomes less affected by the black hole geometry (or the large $`z`$ behaviour of the different metrics ) and will just probe the small $`z`$ region of the geometry.
The precise behaviour at these shorter distances will depend on the type of gauge theory and, in particular, on the small $`z`$ limit of the appropriate metric. In this paper we will consider the generic case (from the 4D (S)YM point of view) when this limit resembles the original $`AdS_5\times S^5`$ geometry . We will consider this conformal (non confining) regime in detail in a further section. We note that the same behaviour can be equivalently obtained through rescaling, by keeping the impact parameter fixed and putting the scale $`R_0\mathrm{}`$.
Let us concentrate in the following on the case when the impact parameter is larger than the scale $R_0$. To summarize the discussion, the string is then to a large degree concentrated in the region near the horizon (6), with the boundary conditions essentially transported from $z=0$. We are thus led first to calculate the area of the minimal surface bounded by the tilted lines in the flat geometry (6) at the horizon. We will first perform the calculation in euclidean signature and then perform the analytical continuation (4).
### Helicoid geometry
The basic building block of our construction is a minimal surface spanned by two straight line segments of length $`2T,`$ corresponding to the two Wilson lines separated by a distance $`L`$ in the “transverse” direction $`x`$ and with a relative angle $`\theta `$ in the “longitudinal” plane:
$$L_1:\ \tau\mapsto(\tau,0,0,0)\qquad\qquad L_2:\ \tau\mapsto(\tau\cos\theta,\,\tau\sin\theta,\,0,\,L).$$
(10)
It is well known that in the flat $\mathbb{R}^4$ geometry the minimal surface with infinite boundaries $\tau=-\infty\ldots+\infty$ is a helicoid. We will also be interested in the “truncated” helicoid where $\tau=-T\ldots T$ (for finite cut-off $T$ in flat space, the truncated helicoid obviously remains a solution if one adds the boundary helices at $\tau=-T,T$ as new boundaries; note, however, that with these boundaries the helicoid may be an unstable minimum for a too large value of the cut-off, a problem we will not consider in the present paper).
Let us recall the minimal surface solution in flat space. The helicoid is the only ruled (i.e. spanned by straight lines) minimal surface. The truncated helicoid solution may be parametrized by
$$t=\tau\cos\frac{\theta\sigma}{L}\,,\qquad y=\tau\sin\frac{\theta\sigma}{L}\,,\qquad x=\sigma$$
(11)
where $\tau=-T\ldots T$ and $\sigma=0\ldots L$, and $\theta$ is the total twisting angle.
Its area is given by the formula
$$\mathrm{Area}\equiv S(T)=\int_0^L d\sigma\int_{-T}^{T}d\tau\,\sqrt{1+\frac{\tau^2\theta^2}{L^2}}=LT\sqrt{1+\frac{T^2\theta^2}{L^2}}+\frac{L^2}{2\theta}\,\log\frac{\sqrt{1+\frac{T^2\theta^2}{L^2}}+\theta\frac{T}{L}}{\sqrt{1+\frac{T^2\theta^2}{L^2}}-\theta\frac{T}{L}}\,.$$
(12)
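As a consistency check (not part of the original derivation), the closed form (12) agrees with direct numerical quadrature; $L$, $T$ and $\theta$ below are arbitrary test values:

```python
import numpy as np
from scipy.integrate import dblquad

L, T, th = 1.0, 2.0, 0.7   # arbitrary test values

# Numerical area: integrate over sigma in [0, L] and tau in [-T, T]
num, _ = dblquad(lambda tau, sig: np.sqrt(1 + tau**2 * th**2 / L**2),
                 0, L, -T, T)

u = np.sqrt(1 + T**2 * th**2 / L**2)
closed = L * T * u + L**2 / (2 * th) * np.log((u + th * T / L) / (u - th * T / L))
print(num, closed)   # agree to quadrature accuracy
```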
Let us now perform the analytical continuation (4), which links euclidean correlation functions in gauge theories with minkowskian ones directly related to scattering amplitudes. A naive continuation of the area formula (12) leads to a pure phase factor in (2):
$$\exp\left\{\frac{\sqrt{2g_{YM}^2N}}{2\pi R_0^2}\,i\left[LT\sqrt{1+\frac{T^2\chi^2}{L^2}}+\frac{L^2}{2\chi}\log\frac{\sqrt{1+\frac{T^2\chi^2}{L^2}}+\chi\frac{T}{L}}{\sqrt{1+\frac{T^2\chi^2}{L^2}}-\chi\frac{T}{L}}\right]\right\},$$
(13)
where $`1/2\pi \alpha ^{}`$ in (2) has been replaced by the factor $`\sqrt{2g_{YM}^2N}/(2\pi R_0^2)`$ coming from the flat metric (6).
However the analytic structure of the euclidean area (12) involves cuts in the complex $T$, $\theta$ planes and thus leads to an ambiguity coming from the branch cut of the logarithm. In fact when performing the analytical continuation we have to specify the Riemann sheet of the logarithm (i.e. $\log\to\log+2\pi i n$). This leads to an additional real multiplicative factor in (2):
$$\exp\left\{-n\,\frac{\sqrt{2g_{YM}^2N}}{\chi}\,\frac{L^2}{2R_0^2}\right\},$$
(14)
the form of which is uniquely fixed by the euclidean expression (12) up to a choice of the integer $n$. Within the classical approximation which we have been using it is not possible to determine the value of $n$. On more physical grounds, in section 4, we will relate the analogue of the label $n$ appearing in the calculation of Wilson loop correlators to multivalued saddle-point minima of a minimization equation, and thus to different classical solutions. The determination of the relative weights of the various contributions goes beyond the classical approximation used throughout this paper. (We also note the close similarity of the $n$, $L$ and $\chi$ dependence in (14) with an analogous factor $\exp(-nL^2/\pi\chi)$ in the imaginary part of the D-brane scattering amplitude , where $n$ labels the poles of the appropriate string partition function between the branes.)
As can be seen the contribution (14) is cut-off independent.
Another useful way of deriving the factor (14) starts directly from the integral leading to (12). This method can be generalized to more complicated background geometries, for instance to the conformal case which we will consider later, for which we lack an exact expression of the form (12).
Let us perform only the first part of the analytical continuation (4), $\theta\to-i\chi$, but otherwise remain with the time variable $T$ in euclidean space. This procedure yields the expression:
$$\int_{-\frac{L}{\chi}}^{\frac{L}{\chi}}d\tau\int_0^L d\sigma\,\sqrt{1-\frac{\tau^2\chi^2}{L^2}}=\frac{\pi L^2}{2\chi}.$$
(15)
We see that the imaginary part may be obtained by integrating ($n$ times) around the branch cut of the square root. A convenient reinterpretation of the above formula follows from performing the change of variables $\sigma\to\sigma'=\sigma\sqrt{1-\frac{\tau^2\chi^2}{L^2}}$. Then we get
$$2in\int_{-\frac{L}{\chi}}^{+\frac{L}{\chi}}d\tau\int_0^{L\sqrt{1-\frac{\tau^2\chi^2}{L^2}}}d\sigma'=in\,\pi\,\frac{L^2}{\chi}$$
(16)
which is effectively twice ($\times\,in$) the area of a minimal surface bounded by a ‘half-ellipse’ of radii $L$ and $L/\chi$. This $T$-independent imaginary part is unaffected by the second part of the analytical continuation (4), and leads directly to the factor (14).
### Reggeization in quark-(anti)quark scattering
Our result for the Wilson line correlation function for the $`AdS`$ BH geometry gives rise to the following contributions
$$\tilde{A}_n=\exp\left\{\frac{\sqrt{2g_{YM}^2N}}{2\pi R_0^2}\,i\left[LT\sqrt{1+\frac{T^2\chi^2}{L^2}}+\frac{L^2}{2\chi}\log\frac{\sqrt{1+\frac{T^2\chi^2}{L^2}}+\chi\frac{T}{L}}{\sqrt{1+\frac{T^2\chi^2}{L^2}}-\chi\frac{T}{L}}\right]\right\}\times\exp\left\{-n\,\frac{\sqrt{2g_{YM}^2N}}{\chi}\,\frac{L^2}{2R_0^2}\right\}.$$
(17)
There is a divergent phase in the above amplitude when the temporal length of the lines $`T`$ goes to infinity. We interpret this divergence as reflecting the expected IR divergence of the $`q\overline{q}`$ scattering amplitude . A consistent way to eliminate this cut-off dependence is to consider an IR finite physical quantity like scattering of two $`q\overline{q}`$ pairs (see section 4). In the present case of Wilson lines, the specific factorized form of (17) allows for a determination of an IR finite contribution, which can be interpreted as an effect of inelastic channels on the Wilson line correlator.
It has long been known that the superposition of long range and short range potentials in the Schrödinger equation leads to a factorization formula for the relevant S matrix elements for each partial wave . For instance in nuclear physics, the superposition of long range coulombic and short range interactions leads to a factorization into the elastic coulombic S matrix element and a short range amplitude modified by the long-range background. The elastic S matrix may be treated as a redefinition of the asymptotic initial and final states. The amplitude reads
$$A(l,s)=e^{2i\delta (l,s)}T(l,s)$$
(18)
where $`\delta `$ is the real phase shift due to the elastic long range interactions and $`T`$ is the short range part of the amplitude. For instance in the QED result for electron scattering , the real phase shift exhibits a divergence which can be written as
$$e^{2i\delta(l,s)}\sim\exp\left\{i\,\frac{e^2}{4\pi}\,\coth\chi\,\log\left(\frac{L^2}{4T^2}\right)\right\},$$
(19)
where $`1/T`$ has been substituted for a fictitious photon mass (IR regulator).
In hadronic interaction physics , a similar factorization appears for the S matrix elements for 2-body channels in terms of an elastic contribution and an amplitude $`T(l,s)`$ which, by unitarity of the S matrix, arises from the contribution of many inelastic channels to the 2-body S matrix . In this context the amplitude (18) can be related to the inelasticity (overlap matrix) in the scattering namely
$$T(l,s)=\frac{1-\sqrt{1-2f(l,s)}}{2}$$
(20)
where the overlap matrix elements $f(l,s)$ are defined from the 2-body S matrix contribution to unitarity, $|S(l,s)|^2\equiv 1-2f(l,s)$.
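In code, the map from the overlap matrix element to the short-range amplitude (20), together with the unitarity constraint, reads as follows (a trivial sketch):

```python
import numpy as np

def inelastic_amplitude(f):
    """Short-range amplitude T(l,s) from the overlap matrix element f, eq. (20)."""
    assert 0.0 <= f <= 0.5, "unitarity requires 0 <= f <= 1/2"
    return (1.0 - np.sqrt(1.0 - 2.0 * f)) / 2.0

for f in (0.0, 0.1, 0.3, 0.5):
    T = inelastic_amplitude(f)
    print(f"f = {f:.1f}:  T = {T:.3f},  |S|^2 = {1 - 2 * f:.2f}")
```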
We are led to interpret our resulting amplitude (17) in the same way. The factor (13) can be treated as redefining the initial and final $q\overline{q}$ states due to long range interactions (it is clear that there remains a freedom in attributing a finite real phase shift either to the redefinition of the states or to the interaction; here we adopt the convention that $T(l,s)$ is purely real and thus contains information only on the inelasticity). Naturally this phase is IR divergent. The analogous inelastic contribution $T_n(l,s)$ is identified with the cut-off independent factor (14). Note that this physical interpretation requires the integer $n$ to be positive. We will return to the discussion of the $n$ dependence in a further section.
Let us discuss both factors of the amplitude (17). The contribution of the real phase shifts behaves in the large $`T`$ limit like
$$\exp\left\{\frac{\sqrt{2g_{YM}^2N}}{2\pi R_0^2}\,i\left(T^2\chi+\frac{L^2}{\chi}\log\left(\frac{2\sqrt{e}\,\chi T}{L}\right)\right)+O(1/T^2)\right\}$$
(21)
The appearance of the IR divergent $`T^2`$ and $`L^2\mathrm{log}T`$ terms in the phase shift can be linked with the linear confining potential of the theory.
The effect of the confining potential is expected to generate inelastic channels through the phenomenon of string breaking and/or closed string emission. Within the above framework, where we select initial and final $q\overline{q}$ states, this contribution is expected to appear as an inelastic real factor in the amplitude, while the phase factor diverges as $T\to\infty$.
The inelastic $`q\overline{q}`$ interaction amplitude at level $`n`$ is
$$T_n(l,s)=\exp\left\{-n\,\frac{\sqrt{2g_{YM}^2N}}{\chi}\,\frac{L^2}{2R_0^2}\right\}$$
(22)
where the initial and final states are both $q\overline{q}$. It can easily be Fourier transformed into transverse momentum space, giving
$$T_n(s,t)=\frac{iR_0^2\mathrm{ln}s}{n\sqrt{2g_{YM}^2N}}s^{1+\frac{R_0^2}{2n\sqrt{2g_{YM}^2N}}t}.$$
(23)
This contribution is thus reggeized with a linear Regge trajectory with unit intercept and the slope given by the string tension related to the horizon distance $`R_0^2`$.
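The step from (22) to (23) is the two-dimensional Fourier transform of a Gaussian in $L$: writing $\kappa=n\sqrt{2g_{YM}^2N}/(2R_0^2\chi)$, the transform $\int d^2l\,e^{iq\cdot l}\,e^{-\kappa L^2}=(\pi/\kappa)\,e^{-q^2/4\kappa}$ reproduces, with $\chi=\log s$ and $t=-q^2$, both the prefactor and the linear trajectory of (23). The sketch below checks this transform numerically through the equivalent radial Bessel ($J_0$) integral, with arbitrary test values:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

kappa, q = 0.8, 1.3   # arbitrary test values

# 2D Fourier transform of exp(-kappa * L^2), reduced to a radial J0 integral
num, _ = quad(lambda L: 2 * np.pi * L * j0(q * L) * np.exp(-kappa * L**2),
              0, np.inf)
closed = (np.pi / kappa) * np.exp(-q**2 / (4 * kappa))
print(num, closed)   # agree to quadrature accuracy
```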
It is worthwhile to consider what changes in the preceding discussion when we go from $q$-$\overline{q}$ scattering to $q$-$q$ scattering. In geometric terms, this corresponds to changing the orientation of one of the lines, and since the string worldsheet spanned on the Wilson lines is oriented, the twisting angle of the helicoid changes as
$$\theta\to\theta-\pi$$
(24)
Upon analytical continuation this means that $\chi\simeq\log s$ changes to $\chi-i\pi\simeq\log\left(se^{-i\pi}\right)$, as required by crossing properties, which are seen to have here a very simple geometric interpretation. We note that in the asymptotically high energy limit $\log s\gg 1$, one obtains the same factors (17) for both $qq$ and $q\overline{q}$ channels. Keeping the next-to-leading correction corresponding to $\log s\to\log\left(se^{-i\pi}\right)$ preserves the crossing relations between those channels.
Finally let us compare our result with the general structure of Wilson line correlators at weak gauge coupling. Indeed, the large $`T`$ dependence of the $`q\overline{q}`$ amplitudes we discuss reflects IR divergences which appear already in perturbative (weak coupling) calculations of the same quantities.
For instance in the case of QED the whole dynamics is contained in the infinite phase factor (19) and the divergence is logarithmic.
The (renormalon improved) 1-loop QCD result for $`q\overline{q}`$ scattering is
$$\exp\left\{\frac{1}{\chi}\,\frac{\alpha_s}{\pi}\,\log\left(\frac{T}{L}\right)-\frac{\rho}{\pi}\,\Lambda^2\,\frac{L^2}{\chi}\right\}$$
(25)
where $`\rho `$ is an undetermined nonperturbative parameter. We note the compatibility between the nonperturbative cut-off independent piece in (25) and an analogous term in our result (22). Our nonperturbative result gives a hint on the scale and coupling dependence.
If the unit intercept common to all contributions $T_n(s,t)$ (see formula (23)) is not spoiled by the different weights corresponding to fluctuations of the worldsheet around the classical solutions, it would be a candidate for the intercept-one trajectories (the so-called pomeron and odderon) which are expected to emerge from a confining strongly interacting gauge theory.
## 3 Conformal case
The flat metric approximation which we have used to derive the resulting area (12) assumed that the impact parameter $L$ is sufficiently large with respect to the scale set by the horizon radius $R_0$ (or a similar scale in the backgrounds interpolating between a confining geometry at large $z$ and approximately $AdS_5\times S^5$ near the boundary $z=0$). In this regime the dominant contribution to the amplitudes came from the part of the string worldsheet stretched near the horizon.
If we go to smaller impact parameters $`L<R_0`$ (and also for $`T<R_0`$), the minimal surface would only penetrate into a limited region near the boundary $`z=0`$, see Figure 2. In the scenarios which behave better at short distances than the original BH proposal, the metric becomes closer and closer to the conformal $`AdS_5`$ case. We note that the $`AdS_5\times S^5`$ setting is directly related to scattering in the $`𝒩=4`$ SYM. This different geometry leads to a qualitatively new behaviour which we now analyze.
### The conformal $`AdS_5`$ case
In the case of $\mathcal{N}=4$ SYM corresponding to the $AdS_5\times S^5$ background we do not yet know the exact generalization of the helicoid, and some approximation scheme is needed. As in the previous case, we will concentrate on extracting the inelastic contribution, which here too turns out to be independent of the IR temporal cut-off $T$. We use the method outlined in section 2 leading to formulae (15)-(16).
Within a variational approach, we will look for a minimal solution in a restricted set of surfaces (“generalized helicoids”) parameterized by
$$t=\tau\cos\frac{\theta\sigma}{L}\,,\qquad y=\tau\sin\frac{\theta\sigma}{L}\,,\qquad x=\sigma\,,\qquad z=z(\sigma,\tau).$$
(26)–(29)
Evaluation of the induced metric gives rise to the following area functional:
$$\int_{-T}^{T}d\tau\int_0^L d\sigma\,\frac{1}{z^2}\sqrt{\left(1+\frac{\tau^2\theta^2}{L^2}\right)(1+z_\tau^2)+z_\sigma^2}.$$
(30)
We perform the change of variables $\sigma\to\sigma'=\sigma\sqrt{1+\frac{\tau^2\theta^2}{L^2}}$, which yields
$$\frac{1}{2\pi\alpha'}\int_{-T}^{T}d\tau\int_0^{L\sqrt{1+\frac{\tau^2\theta^2}{L^2}}}d\sigma'\,\frac{1}{z^2}\sqrt{1+z_\tau^2+z_{\sigma'}^2}.$$
(31)
As in the previous section the cut-off independent part is obtained from the branch cut structure of the area functional (31). The analytic continuation $\theta\to-i\chi$ changes the boundary conditions: the minimal surface is now bounded by a half-ellipse of width $L/\chi$ and height $L$ (the upper integration limit in (31) then becomes $L\sqrt{1-\tau^2\chi^2/L^2}$). Due to conformal invariance we know that the minimal area has the following form:
$$A_{minimal}=f(L/\epsilon,\chi)+g(\chi)$$
(32)
where $\epsilon$ is the “$5^{th}$” AdS coordinate at which we put the D3-brane probe. $\epsilon$ translates directly into the mass of the $W$ bosons which play here the rôle of quarks. We do not expect poles in $\epsilon$ of order higher than the first, which are subtracted out in the standard way , so we have at most a logarithmic behaviour in $L/\epsilon$.
It is possible to obtain an approximate result in the high energy limit $\chi\to\infty$ from known properties of Wilson loop expectation values . The half-ellipse has two cusps, each with an angle $\pi/2$, whose contribution to $A_{minimal}$ can be obtained from the results of . This leads to the following logarithmic terms:
$$2\cdot\frac{1}{2\pi}\,F(\pi/2)\,\log\frac{L}{\epsilon\chi}$$
(33)
where $F(\Omega)$ is a complicated function calculated in  ($F(\pi/2)\simeq 0.3\pi$). The $\epsilon$-independent term $g(\chi)$ in (32) can be approximated by noting that at high energies the half-ellipse is very much elongated and looks like two parallel lines of length $L$, roughly $2L/\chi$ apart. An approximate evaluation is then given by integrating the coulombic potential :
$$-c\int_0^L\frac{d\sigma'}{\frac{2}{\chi}\sqrt{L^2-\sigma'^2}}=-c\,\frac{\pi}{4}\,\chi$$
(34)
where $`c=8\pi ^3/\mathrm{\Gamma }^4(1/4)`$ is the coefficient in front of the (screened) coulombic potential. So we get
$$T_n(l,s)\sim\left(\frac{L}{\epsilon\log s}\right)^{-n\frac{F(\pi/2)}{\pi}\frac{\sqrt{2g_{YM}^2N}}{2\pi}}\,s^{\,n\frac{2\pi^4}{\Gamma(1/4)^4}\frac{\sqrt{2g_{YM}^2N}}{2\pi}}.$$
(35)
Here, as in the case of the confining theory, the values of $`n`$ and the weights of the different components $`T_n(l,s)`$ are not specified.
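The elementary integral (34) behind the elongated half-ellipse estimate can be checked numerically ($L$ and $\chi$ below are arbitrary test values):

```python
import numpy as np
from scipy.integrate import quad

L, chi = 1.0, 5.0   # arbitrary test values

# (2/chi) * sqrt(L^2 - sigma'^2) is the separation between the two long
# sides of the elongated half-ellipse at position sigma'
num, _ = quad(lambda s: 1.0 / ((2.0 / chi) * np.sqrt(L**2 - s**2)), 0, L)
print(num, np.pi * chi / 4)   # both give pi * chi / 4
```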
Let us comment on the behaviour of the various components. In all cases we obtain a factorized energy behaviour with no moving Regge trajectories. We note that for $n$ positive a similar energy dependence (i.e. with intercept greater than 1 and a (nearly) flat Regge trajectory) is obtained by resumming the leading $\log s$ terms in the perturbative expansion at weak coupling for the singlet exchange amplitude . In the conformal case, there remains a non-perturbative screening effect (already present for the static $q\overline{q}$ potential ) which appears as the change $g_{YM}^2N\to\sqrt{g_{YM}^2N}$ in the exponent of $s$.
Considering the impact parameter dependence and its Fourier transform to momentum space, in the window of convergence (the exponent of $L$ in (35) between $-2$ and $-3/2$), we get
$$T_n(s,t)\sim i\,s^{\,1+n\frac{2\pi^4}{\Gamma(1/4)^4}\frac{\sqrt{2g_{YM}^2N}}{2\pi}}\left(\frac{1}{-t}\right)^{1-n\frac{F(\pi/2)}{2\pi}\frac{\sqrt{2g_{YM}^2N}}{2\pi}}$$
(36)
Otherwise one observes either a UV divergence (for exponent values less than $-2$) or an IR one (for values larger than $-3/2$). For positive values of $n$ the IR divergence requires a careful treatment which is beyond the scope of this paper. Note that, in the case of $\mathcal{N}=4$ SYM, it can lead to infra-red divergent pieces also in the inelastic amplitude, as is the case already in the perturbative limit .
### Conformal/non-conformal transition
As already mentioned, the result (36) obtained for the pure $AdS_5\times S^5$ case should give the dominant behaviour also for the confining theory for impact parameters small with respect to the horizon scale $R_0$ (we also need a sufficiently small $T$ parameter), or more generally with respect to an analogous transition scale in the geometries . Indeed this $R_0$ provides a natural value of the impact parameter cut-off $L_0$. Thus even in the confining theory, when the impact parameter is decreased and gets smaller than $R_0$, we expect a transition from the set of components (22) to the results (35), as long as the relevant geometry for small $z$ is similar to $AdS_5\times S^5$.
This process can be observed by noting that both (22) and (35) were derived from a minimal surface spanned on a half-ellipse. The result for impact parameters $L\gtrsim O(R_0)$ was obtained by using the area law for Wilson loops, while the conformal case (corresponding here to $L\ll R_0$) used an approximation based on the coulombic potential. The solution of the appropriate minimal surface problem in the full geometry would lead to an interpolation between the two extreme cases.
## 4 Wilson loop correlators and scattering amplitudes
We saw that an inherent feature of the $`q`$-$`\overline{q}`$ scattering amplitude is its IR divergence. In order to remedy this, and also to show a context where the finite behaviour of the inelastic amplitudes calculated in the previous section appears directly without the infinite phases, we are led to consider the scattering of two $`q\overline{q}`$ pairs of transverse size $`a`$, and impact parameter distance $`L`$. This process is interesting to study in itself, since it gives some information on the scattering amplitudes between colourless states in gauge theories at strong coupling.
For this setup we have to calculate the correlation function of two Wilson loops , where the loops are chosen to be elongated along the “time” direction and have a large but arbitrary temporal length $T$ (the exact analogue for Wilson loops of the $T$ considered in the previous section). However, the cut-off dependence on $T$ is expected to be removed together with the related IR divergence which was present for the case of Wilson lines.
For large positive and negative times the minimal surface will be well approximated by two separate copies of the standard minimal surfaces for each loop separately. When we come to the interaction region, and for $L$ sufficiently small, one can lower the area by forming a “tube” joining the two worldsheets. Since we want to calculate the normalized correlator $\langle W_1W_2\rangle/\langle W_1\rangle\langle W_2\rangle$, the contributions of the regions outside the tube will cancel out (in a first approximation, neglecting deformations near the tube). Therefore we have just to find the area of the tube, and subtract from it the area of the two independent worldsheets. It is at this stage that we see that the result does not depend on the maximal length of the Wilson loops $T$, and hence is IR finite. The whole contribution to the amplitude will just come from the area of the tube.
Since we cannot obtain an exact minimal surface for these boundary conditions, let us perform a variational approximation. Namely we will consider a family of surfaces forming the tube, parameterized by $`T_{tube}`$, which has the interpretation of an “effective” time of interaction. Then we will make a saddle point minimization of the area as a function of this parameter.
Suppose that the tube linking the two Wilson loops is formed in the region of the time parameter $t\in(-T_{tube},T_{tube})$. In our approximation its two “sides” are formed by sheets of the helicoid solution (of area $S(T_{tube})$, see (12) for the euclidean case and (13) for the minkowskian one). The front and back will each be approximated by strips of area $aL\sqrt{1+\frac{T_{tube}^2\theta^2}{L^2}}$ (we assume $a,L\gg R_0$).
The total area corresponding to the two Wilson loops is then given by
$$\mathrm{Area}(T_{tube})=2L\int_{-T_{tube}}^{T_{tube}}d\tau\,\sqrt{1+\frac{\tau^2\theta^2}{L^2}}+2aL\sqrt{1+\frac{T_{tube}^2\theta^2}{L^2}}-4aT_{tube},$$
(37)
where $2aT_{tube}$ is the contribution of each individual Wilson loop to the normalization $1/\langle W_1\rangle\langle W_2\rangle$ of the Wilson loop correlation function.
Analytically continuing the area formula (37) to the Minkowskian case and using a convenient change of variables, the Minkowskian area can be put in the following simple form
$$\mathrm{Area}(T_{tube})=\frac{2L^2}{\chi}\left\{\varphi+\frac{\sin 2\varphi}{2}+\rho\chi\cos\varphi-2\rho\sin\varphi\right\},$$
(38)
where $\rho\equiv a/L$ and $\sin\varphi=i\chi T_{tube}/L$ is the new variational parameter.
In the strong coupling limit ($\alpha'=1/\sqrt{2g_{YM}^2N}\to 0$) the parameter $\varphi$ is dynamically determined from the saddle point equation:
$$0=\frac{\partial\,\mathrm{Area}(\varphi)}{\partial\varphi}\;\propto\;\cos\varphi\,(\cos\varphi-\rho)-\frac{\rho\chi}{2}\sin\varphi$$
(39)
It is easy to realize that for large enough energy there exists a solution with $\varphi\simeq\pm n\pi$. Inserting this solution into the area (38) we find
$$\mathrm{Area}(\varphi)=\frac{2L^2}{\chi}\,n\pi+2aL\,(-1)^n$$
(40)
where we retain the physical solutions with $n$ a positive integer. We thus find a set of solutions very similar to the inelastic factor obtained in section 2. The modification due to the front-back contribution $2aL$ is negligible in the Fourier transformed amplitude for momentum transfer $\sqrt{t}\ll a/R_0^2$. This term is also probably more dependent on the treatment of the front-back parts of the tube in our approximation.
It is interesting to note that the minimization (39) gives rise in a natural way to a set of solutions parameterized by integers, similar to the one found from the branch cut arguments in section 2. Each value of $n$ corresponds to a saddle point, i.e. a classical solution. The determination of the weight of each component in the total scattering amplitude is beyond the reach of the classical approximation.
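The approach of the saddle points to $\varphi\simeq n\pi$ at high energy can be illustrated by solving (39) numerically; the value of $\rho$ below is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.optimize import brentq

rho = 0.1   # rho = a/L, arbitrary illustrative value

def saddle(phi, chi):
    # Right-hand side of the saddle-point equation (39)
    return np.cos(phi) * (np.cos(phi) - rho) - 0.5 * rho * chi * np.sin(phi)

# The n = 1 root moves toward phi = pi as chi = log s grows
for chi in (5.0, 20.0, 100.0):
    phi = brentq(saddle, 0.6 * np.pi, np.pi, args=(chi,))
    print(f"chi = {chi:6.1f}:  phi / pi = {phi / np.pi:.4f}")
```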
For completeness, let us briefly discuss the general saddle point solution. For lower energies, there are families of solutions also leading to reggeized behaviour but with distorted trajectories. For small $\chi$ there exist solutions with $\varphi$ imaginary, thus leading to elastic parts of the amplitude. The study of these solutions is beyond the scope of the present paper. For too large impact parameters we may enter the purely elastic regime found in , which does not correspond to connected minimal surfaces (the Gross-Ooguri transition ).
As a word of caution (and incentive for further study) we note that the saddle point in terms of $`T_{tube}`$ is mainly driven to complex values. This indicates that a complete treatment and an investigation of the Gross-Ooguri transition requires a more refined study of the tube minimal surface.
Let us analyze the properties of the resulting amplitude. Recalling that charge conjugation acting on one of the $q\overline{q}$ pairs is equivalent to the transformation $\chi\to\chi-i\pi$, it is convenient to analyze the components of definite signature, with the even and odd contributions given by
$$\tilde{T}_n^\pm(l,s)=e^{-n\frac{\sqrt{2g_{YM}^2N}}{\chi}\frac{L^2}{R_0^2}}\pm e^{-n\frac{\sqrt{2g_{YM}^2N}}{\chi-i\pi}\frac{L^2}{R_0^2}}.$$
(41)
Note the relative factor of 2 in the exponent in comparison with (22), due to the two-sheet structure of the minimal surface.
Using the Fourier transform (1) we finally get
$$T_n^\pm(s,t)=\frac{iR_0^2\ln s}{2n\sqrt{2g_{YM}^2N}}\,s^{\alpha_n(t)}\pm\frac{iR_0^2\ln(-s)}{2n\sqrt{2g_{YM}^2N}}\,(-s)^{\alpha_n(t)},$$
(42)
where
$$\alpha _n(t)=1+\frac{R_0^2}{4n\sqrt{2g_{YM}^2N}}t.$$
Let us consider the contribution with $n=1$, which is dominant at large $L$. It is easy to realize that the amplitude (42) corresponds to specific Regge singularities in the S-matrix framework, namely double Regge poles whose trajectory is given by $\alpha_1(t)$. Indeed, using the usual Mellin transform $s^\alpha\equiv\oint\frac{s^j\,dj}{2i\pi\,(j-\alpha)}$, it can be written in the following equivalent forms:
$$T_1^\pm(s,t)=\frac{iR_0^2}{2\sqrt{2g_{YM}^2N}}\,\frac{\partial}{\partial\alpha}\left\{s^{\alpha_1(t)}\pm(-s)^{\alpha_1(t)}\right\}=\frac{R_0^2}{2\sqrt{2g_{YM}^2N}}\int_{\mathcal{C}}\frac{dj}{\pi}\,\frac{e^{-i\pi j/2}\,s^j}{(j-\alpha_1(t))^2}\left\{\begin{array}{c}i\sin\left(\frac{\pi j}{2}\right)\\ \cos\left(\frac{\pi j}{2}\right)\end{array}\right\},$$
(43)
where the complex contour $\mathcal{C}$ can be taken around the Regge (di)pole trajectory $\alpha_1(t)$, and the signature factors are either $\sin(\pi j/2)$ or $i\cos(\pi j/2)$, depending on the positive or negative signature.
Let us discuss the contributions $T_n$ to the amplitude with $n>1$. In the absence of a direct determination of their relative weights, it is interesting to note that unitarization of Regge amplitudes in the S matrix framework leads to a similar decomposition, where the $T_n$ correspond to Regge pole/cut singularities. In particular, the overlap matrix formalism , see (20), leads to a specific model for the relative weights of the $T_n$’s, if we assume a gaussian distribution $f(l,s)\equiv f_0\exp\left(-\frac{\sqrt{2g_{YM}^2N}}{\chi}\frac{L^2}{2R_0^2}\right)$ for the inelasticity. In this framework unitarity is fulfilled whenever $0<f_0<1/2$. However the derivation of the Wilson line/loop correlation function does not allow us to give model-independent predictions for these weights in the total amplitude.
Finally let us comment on the relation of our results on the trajectory $\alpha_n(t)$ with the glueball spectrum calculations . An extrapolation of the trajectory to positive $t$ leads to masses of the form
$$M^2=4n\,(J-1)\,\frac{\sqrt{2g_{YM}^2N}}{R_0^2}$$
(47)
where $J$ is the spin and $n$ labels the different trajectories. Because of the coupling constant dependence it is easy to see that these states correspond to massive string states and not to the supergravity fields associated with the glueballs found in . Indeed the latter states have masses proportional just to $1/R_0^2$ and spin limited by $J\le 2$. The appearance of massive string states is not surprising in our case, as we consider an extended string worldsheet between the two Wilson loops instead of a supergravity field exchange. The transition between both situations, and thus the relation between both sets of states, remains an open problem.
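For orientation only (given the caveats below), the spectrum implied by (47) is straightforward to tabulate in units where $\sqrt{2g_{YM}^2N}/R_0^2=1$:

```python
# M^2 = 4 n (J - 1) in units of sqrt(2 g_YM^2 N) / R0^2, from eq. (47)
for n in (1, 2):
    for J in (2, 3, 4):
        print(f"n = {n}, J = {J}:  M^2 = {4 * n * (J - 1)}")
```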
We should note that our approximations for calculating the Wilson loop correlator (which is the channel relevant for glueballs) are rather crude and become problematic at small $`t`$ (consider the discussion after (40)). Therefore the extrapolation of the linear trajectory into the glueball regime can easily break down. Unfortunately the complexity of the minimal surface problem with the Wilson loop boundary conditions does not allow us to make more quantitative estimates.
## 5 Conclusions and outlook
Let us give our main conclusions. By computing Wilson line and Wilson loop correlation functions in the framework of the AdS/CFT correspondence we show a relation between minimal surface problems in $`AdS_5`$ metrics and reggeization in gauge field theory at strong coupling.
For Wilson line correlators, we isolate in certain cases IR finite inelastic amplitudes coming from the branch cut structure of the analytical continuation of helicoid-like surfaces i.e. minimal surfaces with straight line boundary conditions corresponding to classical trajectories in Minkowski space.
We considered three cases: (i) flat metric approximation of an $`AdS`$ black hole metric giving rise to Regge amplitudes with linear trajectories, (ii) an approximate evaluation for the conformal $`AdS_5\times S^5`$ geometry leading to flat Regge trajectories<sup>8</sup><sup>8</sup>8A remaining IR divergence in the inelastic amplitude is still present in the absence of confinement. and (iii) evidence for a transition, in a confining theory, from behaviour of type (i) to (ii) when the impact parameter decreases below the interpolation scale set by the horizon radius. In this case, confinement provides a natural IR cut-off scale.
In a second stage we considered the correlation function of two Wilson loops elongated along the light cone directions for the confining geometry. This configuration corresponds to a high energy scattering amplitude between colourless $`q\overline{q}`$ states. We use a variational approximation where the minimal surface is constructed from two helicoidal sheets. As expected, the obtained amplitude is free from IR divergences and gives rise to reggeization with a linear trajectory with unit intercept. For high energies the amplitude is imaginary and thus mainly reflects the inelasticity of the process.
These results call for some comments.
We note that the structure of our resulting amplitudes for the confining case (in particular the $`n`$, $`\chi `$ and $`L`$ dependence) matches the calculations of the imaginary part of flat space D-brane scattering amplitudes and some specific Wilson loop correlators<sup>9</sup><sup>9</sup>9We have extracted the imaginary part from the formulae in Ref. , along the lines of . , when the “effective” string length $`\sqrt{\alpha ^{}}`$ is taken to be set by the horizon radius in our case. It is interesting to note that the imaginary part in those calculations is generated from the singularities of the string amplitudes which are an infinite set of poles. The slopes of the trajectories are the same as in our case (4), while the intercepts are different. However the geometrical configurations in is quite different from the one we considered in section 4. Even in the flat space approximation it would be useful to have a direct string calculation of the tube configuration.
Beyond the flat space approximation, we want to emphasize the interest of solving exactly the well defined mathematical problem of finding the generalization of the helicoid for various AdS metrics, i.e. the minimal surface spanned between infinite lines forming an angle $`\theta `$ at the boundary. Another goal is to go beyond the classical approximation in order to derive the $`n`$-dependent weights to the scattering amplitudes.
Indeed the generalization of the helicoidal geometry in AdS space seems to be a building block for high energy scattering amplitudes in gauge theories at strong coupling.
### Acknowledgements
RJ was partially supported by KBN grants 2P03B00814, 2P03B08614. We thank T. Garel and B. Giraud for useful remarks.
|
no-problem/0003/hep-ph0003055.html
|
ar5iv
|
text
|
# CP Violations in Lepton Number Violation Processes and Neutrino Oscillations
## I Introduction
From the recent neutrino oscillation experiments it becomes affirmative that neutrinos have masses. The present and near future experiments enter into the stage of precision tests for masses and lepton mixing angles. In this situation the investigation of the $`CP`$ violation effects in the lepton sector has become more and more important. By taking account of the possible leptonic $`CP`$ violating phases for Majorana neutrinos, we have obtained the constraints on the lepton mixing angles from the neutrinoless double beta decay ($`(\beta \beta )_{0\nu }`$), the $`\mu ^{}`$-$`e^+`$ conversion and the K decay, $`K^{}\pi ^+\mu ^{}\mu ^{}`$. In this paper, we propose graphical representations of the $`CP`$ violating phases which appear in those lepton number violating processes. By using those representations, we derive the allowed regions on the leptonic $`CP`$ violating phases from $`(\beta \beta )_{0\nu }`$ without using any constraints on the mixing angles. We also try to determine the magnitude of the $`CP`$ violating phases by combining the constraints on the neutrino masses and mixing matrix elements from the recent Super Kamiokande atmospheric neutrino experiment , the solar neutrino experiment , the recent CHOOZ reactor experiment , and the future KamLAND reactor experiment with those from the lepton number violating processes such as $`(\beta \beta )_{0\nu }`$.
The amplitudes of those three lepton number violating processes are, in the absence of right-handed weak couplings, proportional to the ”averaged” masses $`m_\nu _{ee}`$, $`m_\nu _{\mu e}`$ and $`m_\nu _{\mu \mu }`$ . The ”averaged” mass $`m_\nu _{ee}`$ defined from $`(\beta \beta )_{0\nu }`$ is given by
$$m_\nu _{ee}=|\underset{j=1}{\overset{3}{}}U_{ej}^2m_j|.$$
(1)
Similarly, the ”averaged” masses $`m_\nu _{\mu e}`$ defined from $`\mu ^{}`$-$`e^+`$ conversion and $`m_\nu _{\mu \mu }`$ defined from the lepton number violating K decay, $`K^{}\pi ^+\mu ^{}\mu ^{}`$ are given by
$`m_\nu _{\mu e}`$ $`=`$ $`|{\displaystyle \underset{j=1}{\overset{3}{}}}U_{\mu j}U_{ej}m_j|,`$ (2)
$`m_\nu _{\mu \mu }`$ $`=`$ $`|{\displaystyle \underset{j=1}{\overset{3}{}}}U_{\mu j}^2m_j|,`$ (3)
respectively. The $`CP`$ violating effects are included in the ”averaged” masses $`m_\nu _{ee}`$, $`m_\nu _{\mu e}`$ and $`m_\nu _{\mu \mu }`$ defined in Eqs.(1) $``$ (3). Here $`U_{aj}`$ is the Maki-Nakagawa-Sakata (MNS) left-handed lepton mixing matrix which combines the weak eigenstate neutrino ($`a=e,\mu `$ and $`\tau `$) to the mass eigenstate neutrino with mass $`m_j`$ ($`j`$=1,2 and 3). The $`U`$ takes the following form in the standard representation :
$$U=\left(\begin{array}{ccc}c_1c_3& s_1c_3e^{i\beta }& s_3e^{i(\rho \varphi )}\\ (s_1c_2c_1s_2s_3e^{i\varphi })e^{i\beta }& c_1c_2s_1s_2s_3e^{i\varphi }& s_2c_3e^{i(\rho \beta )}\\ (s_1s_2c_1c_2s_3e^{i\varphi })e^{i\rho }& (c_1s_2s_1c_2s_3e^{i\varphi })e^{i(\rho \beta )}& c_2c_3\end{array}\right).$$
(4)
Here $`c_j=\mathrm{cos}\theta _j`$, $`s_j=\mathrm{sin}\theta _j`$ ($`\theta _1=\theta _{12},\theta _2=\theta _{23},\theta _3=\theta _{31}`$). Three $`CP`$ violating phases, $`\beta `$ , $`\rho `$ and $`\varphi `$ appear in $`U`$ for Majorana neutrinos . In this paper we introduce the graphical representations of the complex masses $`_{j=1}^3U_{ej}^2m_j,_{j=1}^3U_{\mu j}U_{ej}m_j,`$ and $`_{j=1}^3U_{\mu j}^2m_j`$. Then, using these representations, we derive the constraints on the $`CP`$ violating phases which appear in the lepton number violating processes.
This article is organized as follows. In section 2 we introduce the graphical representations of the complex masses and the $`CP`$ violating phases. In section 3 we present constraints on the $`CP`$ violating phases from $`(\beta \beta )_{0\nu }`$. Constraints from $`(\beta \beta )_{0\nu }`$ and the neutrino oscillation experiments are discussed in section 4. Section 5 is devoted to summary.
## II graphical representations of the complex masses and $`CP`$ violating phases
We now rewrite the complex mass $`_{j=1}^3U_{ej}^2m_j`$ by using the phase convention in Eq.(4) as
$`{\displaystyle \underset{j=1}{\overset{3}{}}}U_{ej}^2m_j`$ $`=`$ $`c_1^2c_3^2m_1+s_1^2c_3^2e^{2i\beta }m_2+s_3^2e^{2i(\rho \varphi )}m_3`$ (5)
$`=`$ $`|U_{e1}|^2m_1+|U_{e2}|^2e^{2i\beta }m_2+|U_{e3}|^2e^{2i(\rho \varphi )}m_3`$ (6)
$``$ $`|U_{e1}|^2\stackrel{~}{m_1}+|U_{e2}|^2\stackrel{~}{m_2}+|U_{e3}|^2\stackrel{~}{m_3}`$ (7)
Here we have defined the complex masses $`\stackrel{~}{m_i}(i=1,2,3)`$ by
$`\stackrel{~}{m_1}`$ $``$ $`m_1`$ (9)
$`\stackrel{~}{m_2}`$ $``$ $`e^{2i\beta }m_2`$ (10)
$`\stackrel{~}{m_3}`$ $``$ $`e^{2i\rho ^{}}m_3,\rho ^{}\rho \varphi .`$ (11)
We also rewrite the complex mass $`_{j=1}^3U_{\mu j}U_{ej}m_j`$ by using the above $`\stackrel{~}{m_i}(i=1,2,3)`$ as follows:
$`{\displaystyle \underset{j=1}{\overset{3}{}}}U_{\mu j}U_{ej}m_j`$ $`=`$ $`U_{e1}U_{\mu 1}m_1+U_{e2}U_{\mu 2}m_2+U_{e3}U_{\mu 3}m_3`$ (12)
$`=`$ $`U_{e1}^{}U_{\mu 1}\stackrel{~}{m}_1+U_{e2}^{}U_{\mu 2}\stackrel{~}{m}_2+U_{e3}^{}U_{\mu 3}\stackrel{~}{m}_3`$ (13)
$`=`$ $`U_{e2}^{}U_{\mu 2}(\stackrel{~}{m}_2\stackrel{~}{m}_1)+U_{e3}^{}U_{\mu 3}(\stackrel{~}{m}_3\stackrel{~}{m}_1).`$ (14)
Here we have used the unitarity constraint that $`_{j=1}^3U_{ej}^{}U_{\mu j}=0`$. Furthermore, using $`U_{\mu 1}|U_{\mu 1}|e^{i(\phi _{21}\beta )}`$, $`U_{\mu 2}|U_{\mu 2}|e^{i\phi _{22}}`$, $`U_{\mu 3}=|U_{\mu 3}|e^{i(\rho \beta )}`$, $`U_{e2}=|U_{e2}|e^{i\beta }`$ and $`U_{e3}=|U_{e3}|e^{i(\rho \beta )}`$ with $`\phi _{21}\text{arg}(s_1c_2c_1s_2s_3e^{i\varphi })`$ and $`\phi _{22}\text{arg}(c_1c_2s_1s_2s_3e^{i\varphi })`$ , we obtain
$`{\displaystyle \underset{j=1}{\overset{3}{}}}U_{\mu j}U_{ej}m_j`$ $`=`$ $`e^{i(\varphi \beta )}\left(|U_{e2}^{}U_{\mu 2}|e^{i(\phi _{22}\varphi )}(\stackrel{~}{m}_2\stackrel{~}{m}_1)+|U_{e3}^{}U_{\mu 3}|(\stackrel{~}{m}_3\stackrel{~}{m}_1)\right),`$ (16)
$`{\displaystyle \underset{j=1}{\overset{3}{}}}U_{\mu j}^2m_j`$ $`=`$ $`|U_{\mu 1}|^2e^{2i(\phi _{21}\beta )}m_1+|U_{\mu 2}|^2e^{2i\phi _{22}}m_2+|U_{\mu 3}|^2e^{2i(\rho \beta )}m_3`$ (17)
$`=`$ $`e^{2i(\phi _{21}\beta )}\left(|U_{\mu 1}|^2\stackrel{~}{m}_1+|U_{\mu 2}|^2e^{2i(\phi _{22}\phi _{21})}\stackrel{~}{m}_2+|U_{\mu 3}|^2e^{2i(\varphi \phi _{21})}\stackrel{~}{m}_3\right).`$ (18)
Therefore, the $`m_\nu _{ee}`$, $`m_\nu _{\mu e}`$, and $`m_\nu _{\mu \mu }`$ defined in Eqs.(1) $``$ (3) are reexpressed by the absolute values of averaged complex masses as
$`m_\nu _{ee}`$ $`=`$ $`|M_{ee}|,`$ (20)
$`m_\nu _{\mu e}`$ $`=`$ $`|M_{\mu e}|,`$ (21)
$`m_\nu _{\mu \mu }`$ $`=`$ $`|M_{\mu \mu }|.`$ (22)
Here the averaged complex masses $`M_{ee}`$, $`M_{\mu e}`$ ,and $`M_{\mu \mu }`$ are defined by
$`M_{ee}`$ $``$ $`|U_{e1}|^2\stackrel{~}{m_1}+|U_{e2}|^2\stackrel{~}{m_2}+|U_{e3}|^2\stackrel{~}{m_3},`$ (23)
$`M_{\mu e}`$ $``$ $`|U_{e2}^{}U_{\mu 2}|e^{i(\phi _{22}\varphi )}(\stackrel{~}{m}_2\stackrel{~}{m}_1)+|U_{e3}^{}U_{\mu 3}|(\stackrel{~}{m}_3\stackrel{~}{m}_1),`$ (24)
$`M_{\mu \mu }`$ $``$ $`|U_{\mu 1}|^2\stackrel{~}{m}_1+|U_{\mu 2}|^2e^{2i(\phi _{22}\phi _{21})}\stackrel{~}{m}_2+|U_{\mu 3}|^2e^{2i(\varphi \phi _{21})}\stackrel{~}{m}_3.`$ (25)
Now let us introduce graphical representations of the complex value of the $`M_{ee}`$, $`M_{\mu e}`$ and $`M_{\mu \mu }`$ in a complex mass plane in order to investigate the magnitude of the $`CP`$ violating phases in them. The $`M_{ee}`$ is the ”averaged” complex mass of the masses $`\stackrel{~}{m_i}(i=1,2,3)`$ weighted by three mixing elements $`|U_{ej}|^2(j=1,2,3)`$ with the unitarity constraint $`_{j=1}^3|U_{ej}|^2=1`$. Therefore, the position of $`M_{ee}`$ in a complex mass plane is within the triangle formed by the three mass points $`\stackrel{~}{m_i}(i=1,2,3)`$ if the magnitudes of $`|U_{ej}|^2(j=1,2,3)`$ are unknown, which is shown in Fig. 1(a-i). Hereafter we refer this triangle as the complex-mass triangle. This triangle is different from that defined by Fogli et al. in the sense that ours incorporates the $`CP`$ violating phases and masses.
The three mixing elements $`|U_{ej}|^2(j=1,2,3)`$ indicate the division ratios for the three portions of each side of the triangle which are divided by the parallel lines to the side lines of the triangle passing through the $`M_{ee}.`$ (Fig. 1(a-ii)). The $`CP`$ violating phases $`2\beta `$ and $`2\rho ^{}`$ represent the rotation angles of $`\stackrel{~}{m_2}`$ and $`\stackrel{~}{m_3}`$ around the origin, respectively.
Likewise, the constraints on the positions of $`M_{\mu e}`$ and $`M_{\mu \mu }`$ are depicted in Figs.1(b) and 1(c). The position of $`M_{\mu e}`$ is given as Fig. 1(b). The position of $`M_{\mu \mu }`$ in a complex mass plane is within the triangle formed by the three mass points $`\stackrel{~}{m_1}`$, $`e^{2i(\phi _{22}\phi _{21})}\stackrel{~}{m_2}`$, and $`e^{2i(\varphi \phi _{21})}\stackrel{~}{m}_3`$ which is shown in Fig. 1(c).
## III constraints on the $`CP`$ violating phases from $`(\beta \beta )_{0\nu }`$
Among the lepton number violation processes such as $`(\beta \beta )_{0\nu }`$, the $`\mu ^{}e^+`$ conversion and the K decay, $`K^{}\pi ^+\mu ^{}\mu ^{}`$ , the $`(\beta \beta )_{0\nu }`$ gives us most restrictive constraints on the $`CP`$ violating phases. Therefore, hereafter we concentrate on the $`(\beta \beta )_{0\nu }`$ and derive constraints on the $`CP`$ violating phases from the experimental upper bound on $`m_\nu _{ee}`$ (we denote it $`m_\nu _{\text{max}}`$, i.e., $`m_\nu _{ee}<m_\nu _{\text{max}}`$). Since $`m_\nu _{ee}=|M_{ee}|`$, the present experimental upper bound on $`m_\nu _{ee}`$ obtained from the $`(\beta \beta )_{0\nu }`$ forms the circle in the complex plane and this circle must include the point $`M_{ee}`$ inside of it. Namely, the allowed region for $`M_{ee}`$ is the intersection of the inside of the circle of radius $`m_\nu _{\text{max}}`$ around the origin and the inside of the complex-mass triangle which was discussed in section 2.
In the case of $`m_1>m_\nu _{\text{max}}`$, we can obtain the constraints on the $`CP`$ violating phases from the allowed region for $`M_{ee}`$ without using any constraints on the mixing elements $`|U_{ej}|^2(j=1,2,3)`$ as follows. In order to obtain the conditions for the allowed $`M_{ee}`$, it is more convenient to survey the forbidden regions for $`M_{ee}`$. It is easily understood from Fig 2(a) that the complex-mass triangle does not overlap with the circle $`m_\nu _{\text{max}}`$ only if the following conditions are satisfied for all $`i`$ and $`j`$.
$$|\text{arg}(\stackrel{~}{m_j}/\stackrel{~}{m_i})|<\alpha _{ij}.$$
(26)
Here $`\alpha _{ij}`$ is defined by $`\alpha _{ij}\mathrm{cos}^1(m_\nu _{\text{max}}/m_i)+\mathrm{cos}^1(m_\nu _{\text{max}}/m_j)`$. Therefore, the allowed region for $`M_{ee}`$ is the area where, at least, one of the inequalities of Eq. (26) is violated. Since we have $`|\text{arg}(\stackrel{~}{m_2}/\stackrel{~}{m_1})|=|2\beta |`$ , $`|\text{arg}(\stackrel{~}{m_3}/\stackrel{~}{m_1})|=|2\rho ^{}|`$ , and $`|\text{arg}(\stackrel{~}{m_2}/\stackrel{~}{m_3})|=|2\beta 2\rho ^{}|`$, with $`2\beta `$ and $`2\rho ^{}`$ in the interval of $`(\pi ,\pi )`$, we find that Majorana $`CP`$ violating phases $`\beta `$ and $`\rho ^{}`$, must satisfy the following conditions:
$`\alpha _{12}<|2\beta |\text{or}\alpha _{13}<|2\rho ^{}|\text{or}\alpha _{23}<|2\rho ^{}2\beta |.`$ (27)
The allowed region in the $`2\beta `$ vs $`2\rho ^{}`$ plane obtained from Eq.(27) is depicted in Fig. 2(b).
Eq.(27) is also useful in the case where the three neutrino masses are almost degenerate and $`m_\nu _{\text{max}}<m_1m_2m_3m`$. In this case, Eq.(27) reduces to
$`\alpha <|2\beta |\text{or}\alpha <|2\rho ^{}|\text{or}\alpha <|2\rho ^{}2\beta |`$ (28)
with $`\alpha 2\mathrm{cos}^1(m_\nu _{\text{max}}/m)`$ and the allowed region Fig. 2(b) to Fig. 2(c).
## IV constraints on the $`CP`$ violating phases from $`(\beta \beta )_{0\nu }`$ and the neutrino oscillation experiments
Now, we consider the constraints on $`|U_{ej}|^2`$ from the CHOOZ reactor experiment, the recent Super-Kamiokande atmospheric neutrino experiment, solar neutrino experiments and the future KamLAND reactor experiment. Then, by combining these constraints on $`|U_{ej}|^2`$ with one from $`(\beta \beta )_{0\nu },`$ we derive the possible constraints on the $`CP`$ violating phases by using our graphical representation. In the following discussions we consider three cases for the neutrino mass hierarchy, i.e., case(A): two quasi-degenerate neutrino with $`m_1m_2m_3`$, case(B): two quasi-degenerate neutrino with $`m_1m_2m_3`$ and case(C): three quasi-degenerate neutrino with $`m_1m_2m_3=m`$ .
### A two quasi-degenerate neutrino with $`m_1m_2m_3`$
In the case, the oscillation probability for reactor neutrinos in the three-generation model, $`P(\overline{\nu }_e\overline{\nu }_e)`$ is given by
$`P(\overline{\nu }_e\overline{\nu }_e)`$ $`=`$ $`14|U_{e3}|^2(1|U_{e3}|^2)\mathrm{sin}^2\left({\displaystyle \frac{\mathrm{\Delta }m_{13}^2L}{4E}}\right)`$ (29)
$`=`$ $`14s_3^2c_3^2\mathrm{sin}^2\left({\displaystyle \frac{\mathrm{\Delta }m_{13}^2L}{4E}}\right)`$ (30)
if $`\mathrm{\Delta }m_{13}^2L/(4E)1.`$ The present CHOOZ experiment gives a severe restriction on the mixing angle:
$$\mathrm{sin}^22\theta 0.1.$$
(31)
In this case, since $`\theta =\theta _3,`$ we obtain
$$0s_3^20.026\text{ or }0.97s_3^21.$$
(32)
On the other hand, the oscillation probability for the atmospheric neutrinos, $`P(\nu _\mu \nu _\mu )`$ is
$`P(\nu _\mu \nu _\mu )`$ $`=`$ $`14|U_{\mu 3}|^2(1|U_{\mu 3}|^2)\mathrm{sin}^2\left({\displaystyle \frac{\mathrm{\Delta }m_{13}^2L}{4E}}\right)`$ (33)
$`=`$ $`14c_3^2s_2^2(1c_3^2s_2^2)\mathrm{sin}^2\left({\displaystyle \frac{\mathrm{\Delta }m_{13}^2L}{4E}}\right).`$ (34)
The atmospheric $`\nu _\mu `$ deficit in the Super Kamiokande experiment indicates that $`0.84c_3^2s_2^2(1c_3^2s_2^2)1,`$ namely we obtain
$$\frac{0.28}{1s_3^2}s_2^2\frac{0.72}{1s_3^2}.$$
(35)
From these two constraints, Eqs.(32) and (35), we obtain
$$0.28s_2^20.74,s_3^20.026.$$
(36)
Eq.(36) imposes the restriction on $`|U_{e3}|^2`$. Therefore, when combined with the allowed region in the complex mass plane discussed in the section 2, the position of $`M_{ee}`$ in our graphical representation is restricted by the CHOOZ and Super Kamiokande experiments as shown in Fig. 3. We also have the constraints on the mixing angle from the solar neutrino experiments. They give several separate allowed regions for the position of $`M_{ee}`$ in our graphical representation as shown in Fig. 3. Whether the mixing angle for solar neutrinos is large or small can be determined by the future KamLAND reactor experiment . The future KamLAND experiment will also lead to the constraint on the $`|U_{e1}|^2`$ and $`|U_{e2}|^2`$. Since the KamLAND experiment has the chance to observe a lower order mass difference, $`\mathrm{\Delta }m^210^5\text{eV}^2,`$ we can’t neglect the term depend on $`\mathrm{\Delta }m_{12}^2\text{ in }P(\overline{\nu }_e\overline{\nu }_e).`$ So we rewrite Eq.(30) as follows:
$`P(\overline{\nu }_e\overline{\nu }_e)`$ $`=`$ $`14\left[{\displaystyle \frac{|U_{e3}|^2(1|U_{e3}|^2)}{2}}+|U_{e1}|^2|U_{e2}|^2\mathrm{sin}^2\left({\displaystyle \frac{\mathrm{\Delta }m_{12}^2L}{4E}}\right)\right]`$ (37)
$`=`$ $`1\left[{\displaystyle \frac{2s_3^2c_3^2}{\mathrm{sin}^2\left(\frac{\mathrm{\Delta }m_{12}^2L}{4E}\right)}}+4s_1^2c_1^2c_3^4\right]\mathrm{sin}^2\left({\displaystyle \frac{\mathrm{\Delta }m_{12}^2L}{4E}}\right).`$ (38)
$``$ $`1\mathrm{\Xi }^2\mathrm{sin}^2\left({\displaystyle \frac{\mathrm{\Delta }m_{12}^2L}{4E}}\right).`$ (39)
Here we have used the following conditions,
$$\mathrm{sin}^2\left(\frac{\mathrm{\Delta }m_{13}^2L}{4E}\right)\frac{1}{2},\mathrm{sin}^2\left(\frac{\mathrm{\Delta }m_{23}^2L}{4E}\right)\frac{1}{2},$$
(40)
because of their frequent oscillations. Let us combine Eq.(39) with the constraint given in Eq.(36) which is obtained from the CHOOZ and Super Kamiokande experiments. Then the KamLAND experiment will give the constraints on $`|U_{e1}|^2`$ and $`|U_{e2}|^2`$, which will restrict the allowed region for the position of $`M_{ee}`$ as shown in Fig. 4.
Now, with use of our graphical representation, we proceed to discuss the main subject in this paper: If we have non zero value of $`m_\nu _{ee}`$, how can we determine the magnitude of Majorana $`CP`$ phases $`\beta \text{ or }\rho ^{}`$ ?
First we discuss the simple case in which $`|U_{e3}|^2`$ is approximately zero and the large mixing angle solution(LMA), $`0.2|U_{e1}|^20.8`$, is adopted for the solar neutrino problem. In this case, we have
$`m_\nu _{ee}`$ $``$ $`|U_{e1}|^2\stackrel{~}{m}_1+|U_{e2}|^2\stackrel{~}{m}_2`$ (41)
$``$ $`|U_{e1}|^2\stackrel{~}{m}_1+(1|U_{e1}|^2)\stackrel{~}{m}_2.`$ (42)
Given the values of $`m_\nu _{ee},m_1,m_2,|U_{e1}|^2\text{ and }|U_{e2}|^2`$, the $`CP`$ violating phase $`\beta `$ is easily obtained from the graphical representation of Eq.(42). It goes from Fig.1(a-ii) that the complex-mass triangle gets degenerate to a straight line $`\stackrel{~}{m_1}\stackrel{~}{m_2}`$ for $`|U_{e3}|^2=0`$ case and that the position of $`M_{ee}=|U_{e1}|^2\stackrel{~}{m_1}+|U_{e2}|^2\stackrel{~}{m_2}`$ moves along the circle with a radius of $`|U_{e2}|^2m_2(=(1|U_{e1}|^2)m_2)`$ from the point $`|U_{e1}|^2m_1`$ for changing $`\beta `$. On the other hand, the measurement of the $`m_\nu _{ee}`$ restricts $`M_{ee}`$ on the circle with a radius of $`m_\nu _{ee}`$ from the origin. Therefore, the $`\beta `$ is determined by the intersection of the above two circles as shown in Fig.5. Applying the cosine formula to $`\mathrm{}OAB`$ in Fig.5, we find
$$m_\nu ^2=|U_{e1}|^4m_1^2+|U_{e2}|^4m_2^2+2|U_{e1}|^2|U_{e2}|^2m_1m_2\mathrm{cos}2\beta .$$
(43)
Therefore, we obtain
$$\mathrm{cos}2\beta =\frac{m_\nu ^2|U_{e1}|^4m_1^2|U_{e2}|^4m_2^2}{2|U_{e1}|^2|U_{e2}|^2m_1m_2}.$$
(44)
It goes from Eq.(44) with the use of $`1\mathrm{cos}2\beta 1`$ that $`m_\nu _{ee}`$ has the lower and upper limits as $`m_\nu _{\text{lower}}m_\nu _{ee}m_\nu _{\text{upper}}`$, which is shown in Fig.6 with the definitions of
$`m_\nu _{\text{lower}}=|m_1|U_{e2}|^2(m_1+m_2)|`$ (45)
$`m_\nu _{\text{upper}}=m_1+|U_{e2}|^2(m_2m_1).`$ (46)
On the other hand, for the case where $`|U_{e3}|^20`$ and the small mixing angle solution(SMA) is adopted for the solar neutrino problem, i.e., $`\theta _1=0\text{ or }\pi /2,`$ we can not obtain any information about $`\beta `$, since we have $`m_\nu _{ee}=|m_1|=m_1`$ for $`\theta _1=0`$ or $`m_\nu _{ee}=e^{2i\beta }m_2=m_2`$ for $`\theta _1=\pi /2`$.
Second we consider the case where $`|U_{e3}|^20`$ and the LMA solution, $`0.2|U_{e1}|^20.8`$, is adopted for the solar neutrino problem. In this case, we have
$$M_{ee}|U_{e3}|\stackrel{~}{m_3}=|U_{e1}|^2m_1+|U_{e2}|^2\stackrel{~}{m_2}.$$
(47)
The graphical representation of Eq.(47) is shown in Fig.7. In Fig.7(a) we consider the case in which the circle of radius $`|U_{e2}|^2m_2`$ around the point $`(|U_{e1}|^2m_1,0)`$ (which we refer as $`A`$ or $`\stackrel{}{OA}`$) intersects with the circles of radius $`m_\nu _{ee}\pm |U_{e3}|^2m_3`$ around the origin at the points $`B_1`$ and $`B_2`$. We find that $`2\beta `$ is ranging from the argument of $`\stackrel{}{AB_1}`$ to that of $`\stackrel{}{AB_2}`$ as seen in Fig.7(a). The relation between $`\beta `$ and $`\rho ^{}`$ is also derived from Eq.(47): For fixed $`2\beta `$, the $`\rho ^{}`$ has two solution $`\rho _1^{}`$ and $`\rho _2^{}`$ which are determined by the points $`C_1`$ and $`C_2`$ as shown in Fig.7(b). Here the $`C_1`$ and $`C_2`$ are the intersections of the circle of radius $`|U_{e3}|^2m_3`$ around the point $`|U_{e1}|^2\stackrel{~}{m_1}+|U_{e2}|^2\stackrel{~}{m_2}`$ (which we refer as $`B`$) with the circle of radius $`m_\nu _{ee}`$ around the origin since $`\stackrel{}{OA}+\stackrel{}{AB}+\stackrel{}{BC}=M_{ee}`$ from Eq.(47). Thus we obtain the relation between $`\beta `$ and $`\rho ^{}`$. We depict this relation in Fig. 8. The other cases may occur but they can be treated analogously.
### B two quasi-degenerate neutrino with $`m_1m_2m_3`$
In this case, the CHOOZ experiment and the atmospheric neutrino deficit experiment indicates $`|U_{e1}|^20`$ as is seen in Fig. 9. Therefore, we can discuss this case with the same way as the case(A) only by replacing $`m_1`$, $`|U_{e1}|^2`$, and $`\beta `$ with $`m_3`$, $`|U_{e3}|^2`$, and $`\beta \rho ^{}`$, respectively.
### C three quasi-degenerate neutrino with $`m_1m_2m_3=m`$
We assume that all three neutrino masses are almost degenerate, then we have
$$M_{ee}=m\left(|U_{e1}|^2+|U_{e2}|^2e^{2i\beta }+|U_{e3}|^2e^{2i\rho ^{}}\right).$$
(48)
The constraints on the $`CP`$ violating phases from this $`M_{ee}`$ is obtained from the similar discussions as in Fig. 7 with only taking $`m_1=m_2=m_3=m`$ in it. It should be noted that for the case where $`\mathrm{\Delta }m_{12}^2\mathrm{\Delta }m_{13}^2`$ and $`|U_{e3}|^2`$ is approximately zero, we find
$`\mathrm{sin}^2\beta `$ $`=`$ $`{\displaystyle \frac{m^2m_\nu _{ee}^2}{4|U_{e1}|^2(1|U_{e1}|^2)m^2}}`$ (49)
$`=`$ $`{\displaystyle \frac{m^2m_\nu _{ee}^2}{4s_1^2(1s_1^2)m^2}}.`$ (50)
which is the same result as one obtained from Eq.(44) with replacing $`m_i(i=1,2,3)`$ with $`m`$. We find from Eq.(50) that the following lower limit of $`\mathrm{sin}^2\beta `$ is obtained for the large mixing angle solution(LMA), $`0.2|U_{e1}|^2=s_1^20.8`$, of the solar neutrino problem,
$$\mathrm{sin}^2\beta 1\left(\frac{m_\nu _{ee}}{m}\right)^2,$$
(51)
where the lower limit is realized at $`s_1^2=0.5`$.
## V summary
We have introduced graphical representations of the complex masses, $`M_{ee}`$, $`M_{\mu e}`$ and $`M_{\mu \mu }`$ whose absolute magnitudes are experimentally observable ”averaged” masses, $`m_\nu _{ee}`$, $`m_\nu _{\mu e}`$ and $`m_\nu _{\mu \mu }`$ of the lepton number violation processes such as neutrinoless double beta decay, the $`\mu ^{}e^+`$ conversion and the K decay, $`K^{}\pi ^+\mu ^{}\mu ^{}`$ , respectively. By using those graphical representations, we have investigated how to determine the magnitude of the $`CP`$ violating phases from the analysis of the neutrinoless double beta decay. First we have discussed without using any constraint on the mixing elements $`|U_{ej}|^2(j=1,2,3)`$ and obtained the constraints on the Majorana $`CP`$ violating phases, Eqs.(27) and (28) if $`m_\nu _{\text{max}}<m_1`$, from which the allowed region in the $`2\beta \text{ vs }2\rho ^{}`$ plane have been derived and shown in Fig. 2. Of course, we have no constraint on the Majorana $`CP`$ phases if $`m_\nu _{\text{max}}>m_1`$. Still, Eq.(28) is useful if the three neutrino masses are almost degenerate and $`m_\nu _{\text{max}}<m_1m_2m_3=m`$. Next by using the constraints on the mixing elements $`|U_{ej}|^2(j=1,2,3)`$ obtained from the recent Super Kamiokande atmospheric neutrino experiment, the solar neutrino experiment, the recent CHOOZ reactor experiment, and the future KamLAND reactor experiment, we have further discussed the possible constraints on the Majorana $`CP`$ violating phases for three cases for the neutrino mass hierarchy, i.e., case(A): two quasi-degenerate neutrino with $`m_1m_2m_3`$, case(B): two quasi-degenerate neutrino with $`m_1m_2m_3`$ and case(C): three quasi-degenerate neutrino with $`m_1m_2m_3=m`$ . In the case(A), we have obtained the expression of $`\mathrm{cos}2\beta `$, Eq.(44), in terms of $`m_1,m_2,m_\nu _{ee}`$, $`|U_{e1}|^2`$, and $`|U_{e2}|^2`$ for the simple case where $`|U_{e3}|^20`$ with the use of the large mixing angle solution(LMA) for the solar neutrino problem. The $`m_\nu _{ee}\text{ vs }\mathrm{cos}2\beta `$ relation is shown in Fig. 5. Using $`1\mathrm{cos}2\beta 1`$, we have found that the $`m_\nu _{ee}`$ has the lower and upper limits as given in Eq.(46). We have also obtained the relation between $`2\beta \text{ and }2\rho ^{}`$ for $`|U_{e3}|^20`$ case which is shown in Figs. 7 and 8. We can discuss the case(B) using the same way as the case(A) by replacing $`m_1`$, $`|U_{e1}|^2`$, and $`\beta `$ with $`m_3`$, $`|U_{e3}|^2`$, and $`\beta \rho ^{}`$, respectively. In the case(C), we have obtained the expression of $`\mathrm{sin}^2\beta `$ given in Eq.(50) for the simple case where $`\mathrm{\Delta }m_{12}^2\mathrm{\Delta }m_{13}^2`$ and $`|U_{e3}|^20`$ with the use of the LMA solution for the solar neutrino problem. From this relation we have found that the lower limit of $`\mathrm{sin}^2\beta `$ is given by $`\mathrm{sin}^2\beta 1(m_\nu _{ee}/m)^2`$ for the LMA solution.
Acknowledgement
We are greatly indebted to O.Yasuda for useful discussions.
FIG.1 Graphical representations of the $`CP`$ violating phases and the complex masses $`M_{ee}`$, $`M_{\mu e}`$ and $`M_{\mu \mu }`$ defined in Eqs.(23)-(25). (a-i) The complex-mass triangle for $`M_{ee}`$ is formed by the three points $`\stackrel{~}{m_i}(i=1,2,3)`$ defined in Eqs.(9)-(11). The allowed position of $`M_{ee}`$ is in the intersection (shaded area) of the inside of this triangle and the inside of the circle of radius $`m_\nu _{\text{max}}`$ around the origin. (a-ii) The relations between the position of $`M_{ee}`$ and $`U_{ei}(i=1,2,3)`$ components of MNS mixing matrix. (b) The position of $`M_{\mu e}`$. The position of $`M_{\mu e}`$ is at the vertex of the parallelogram of which the other vertexes are at $`\stackrel{~}{m_1}`$, $`|U_{e2}^{}U_{\mu 2}|e^{2i(\phi _{22}\phi _{21})}(\stackrel{~}{m_2}\stackrel{~}{m_1})`$, and $`|U_{e3}^{}U_{\mu 3}|e^{2i(\varphi \phi _{21})}(\stackrel{~}{m}_3\stackrel{~}{m_1})`$. Namely, rotate $`\stackrel{~}{m_2}`$ clockwise by $`\varphi \phi _{22}`$ around $`\stackrel{~}{m_1}`$ and scale down by $`|U_{e2}^{}U_{\mu 2}|`$. From this point extend the line parallel to the side of $`\stackrel{~}{m_1}\stackrel{~}{m_3}`$ by $`|U_{e3}^{}U_{\mu 3}||\stackrel{~}{m_3}\stackrel{~}{m_1}|`$, then we obtain the position of $`M_{\mu e}`$. (c) The complex-mass triangle for $`M_{\mu \mu }`$ (thick lines). The allowed position of $`M_{\mu \mu }`$ is within the triangle formed by the three points $`\stackrel{~}{m_1}`$, $`e^{2i(\phi _{22}\phi _{21})}\stackrel{~}{m_2}`$, and $`e^{2i(\varphi \phi _{21})}\stackrel{~}{m}_3`$.
FIG.2 The restrictions of CP violating phases $`2\beta `$ and $`2\rho ^{}`$ from $`(\beta \beta )_{0\nu }`$ with arguments independent of $`|U_{ej}|^2`$. (a) The allowed region of $`M_{ee}`$ is inside of the complex-mass triangle overlapped with the inside of the circle of radius $`m_\nu _{\text{max}}`$. The case where the conditions $`|\alpha _{ij}|>|\text{arg}(\stackrel{~}{m_j}/\stackrel{~}{m_i})|`$ are satisfied for all $`i`$ and $`j`$ is excluded since the triangle and the circle can not overlap each other. Here we define $`\alpha _{ij}\mathrm{cos}^1(m_\nu _{\text{max}}/m_i)+\mathrm{cos}^1(m_\nu _{\text{max}}/m_j)`$, $`|\text{arg}(\stackrel{~}{m_2}/\stackrel{~}{m_1})||2\beta |<\pi `$ , $`|\text{arg}(\stackrel{~}{m_3}/\stackrel{~}{m_1})||2\rho ^{}|<\pi `$ , and, therefore, $`|\text{arg}(\stackrel{~}{m_2}/\stackrel{~}{m_3})||2\beta 2\rho ^{}|<2\pi `$. (b) The allowed region (shaded area) in the $`2\beta `$ vs $`2\rho ^{}`$ plane for $`m_\nu _{\text{max}}<m_1<m_2<m_3`$ case. (c) The special case of Fig.2 (b) for the case in which three neutrinos have almost degenerate masses with $`m_\nu _{\text{max}}<m_1m_2m_3`$. Here $`\alpha 2\mathrm{cos}^1(m_\nu _{\text{max}}/m)`$.
FIG.3 The allowed region (shaded area) of $`M_{ee}`$ for the case (A) from the CHOOZ, the atmospheric $`\nu _\mu `$ deficit and the solar neutrino experiments. The CHOOZ and the atmospheric $`\nu _\mu `$ deficit experiments restrict the position of $`M_{ee}`$ in the neighborhood of the side $`\stackrel{~}{m_1}\stackrel{~}{m_2}`$ . The large mixing angle (LMA) and the small mixing angle (SMA) solutions for the solar neutrino problem give separate allowed regions.
FIG.4 Constraints from KamLAND experiments. The contours of P$`(\overline{\nu _e}\overline{\nu _e})`$ in our complex-mass triangle. ($`\mathrm{\Xi }`$ is defined in Eq.(4.7).) They are plotted at the interval of $`0.2`$ of $`\mathrm{\Xi }^2`$ for the typical values of $`\mathrm{sin}^2\frac{\mathrm{\Delta }m_{12}^2}{4E}L`$. (a); $`\mathrm{sin}^2\frac{\mathrm{\Delta }m_{12}^2}{4E}L=0.1`$. (b); $`\mathrm{sin}^2\frac{\mathrm{\Delta }m_{12}^2}{4E}L=0.5`$. (c); $`\mathrm{sin}^2\frac{\mathrm{\Delta }m_{12}^2}{4E}L=1`$.
FIG.5 The determination of $`\beta `$ for $`|U_{e3}|^2=0`$ in the case(A). The $`\beta `$ is determined from the point $`B`$ which is the intersection of two circles; the circle of radius $`m_\nu _{ee}`$ around the origin and that of radius of $`|U_{e2}|^2m_2`$ around $`(|U_{e1}|^2m_1,0)`$ (which we refer as $`A`$). The line $`AB`$ is parallel to the line $`O\stackrel{~}{m_2}`$. Here $`|OA|=|U_{e1}|^2m_1`$, $`|OB|=m_\nu _{ee}`$, $`|AB|=|U_{e2}|^2m_2`$.
FIG.6 The relation between $`\mathrm{cos}2\beta `$ and $`m_\nu _{ee}`$ which is obtained from Eq.(4.11). The solid line is for the case, $`U_{e3}=0`$. For $`U_{e3}0`$ case, the relation has a band structure (shaded region). Here we define $`m_\nu _{\text{lower}}|m_1|U_{e2}|^2(m_1+m_2)|`$ and $`m_\nu _{\text{upper}}m_1+|U_{e2}|^2(m_2m_1)`$.
FIG.7 The constraint on $`\beta `$ and $`\rho ^{}`$ for $`|U_{e3}|^20`$ in the case(A). (a) The $`2\beta `$ is ranging from the argument of $`\stackrel{}{AB_1}`$ (angle between $`\stackrel{}{AB_1}`$ and the horizontal axis) to that of $`\stackrel{}{AB_2}`$. Here $`OA=|U_{e1}|^2m_1`$ and $`|AB|=|U_{e2}|^2m_2`$. In this diagram we consider the case where the circle of radius $`|U_{e2}|^2m_2`$ around $`A`$ intersects with the circles of radius $`m_\nu _{ee}\pm |U_{e3}|^2m_3`$ at $`B_1`$ and $`B_2`$. (b) The $`\rho ^{}`$ has two solution $`\rho _1^{}`$ and $`\rho _2^{}`$ for fixed $`2\beta `$ since $`\stackrel{}{OA}+\stackrel{}{AB}+\stackrel{}{BC}=M_{ee}`$. The dotted line is the circle of radius $`|U_{e3}|^2m_3`$ around $`B`$ which intersects the circle of radius $`m_\nu _{ee}`$ around the origin at $`C_1`$ and $`C_2`$. Here we refer the point $`|U_{e1}|^2\stackrel{~}{m_1}+|U_{e2}|^2\stackrel{~}{m_2}`$ as $`B`$.
FIG.8 The relation between $`\mathrm{cos}2\beta `$ and $`\mathrm{cos}2\rho ^{}`$. The $`2\rho ^{}`$ has always two solutions for fixed $`\beta `$ as shown in Fig. 7(b). The $`a_\pm `$ and $`b_\pm `$ are given by $`a_\pm \frac{m_\nu _{ee}^2(|U_{e1}|^2m_1\pm |U_{e3}|^2m_3)^2|U_{e2}|^4m_2^2}{2|U_{e2}|^2m_2(|U_{e1}|^2m_1\pm |U_{e3}|^2m_3)}`$ and $`b_\pm \frac{(m_\nu _{ee}\pm |U_{e3}|^2m_3)^2|U_{e1}|^4m_1^2|U_{e2}|^4m_2^2}{2|U_{e1}|^2|U_{e2}|^2m_1m_2}`$.
FIG.9 The allowed region of $`M_{ee}`$ for the case (B) from the CHOOZ and the Super Kamiokande experiments. The position of $`M_{ee}`$ is restricted in the shaded area which is in the neighborhood of the edge $`\stackrel{~}{m_2}\stackrel{~}{m_3}`$.
|
no-problem/0003/hep-th0003163.html
|
ar5iv
|
text
|
# 1 Introduction
## 1 Introduction
Indy: I’m going after that \[$`AdS`$\]. Sallah: How? Indy: I don’t know. I’m making this up as I go. “Raiders of the Lost Ark”
Recently, there has been great interest in the $`AdS/CFT`$ correspondence (-, and many others). Although there has been a great deal of work related to the study of $`AdS_3`$, $`AdS_4`$, $`AdS_5`$ and $`AdS_7`$, there has been relatively little study of the $`AdS_2`$ case. Type II supergravity exhibits solutions of the form $`AdS_2\times S^2\times T^6`$, $`AdS_2\times S^3\times T^5`$ and other solutions related to Calabi-Yau compactification.
On one hand, the $`AdS_2/CFT_1`$ correspondence has the potential to be extremely fascinating. A weakly curved $`AdS_2\times S^2`$ space will be an approximately flat 4 dimensional space. As a result, the $`AdS_2/CFT_1`$ correspondence naively may be able to make non-trivial statements about 4 dimensional quantum gravity. On the other hand, there are potential obstacles to a true understanding or complete formulation of the $`AdS_2/CFT_1`$ correspondence, such as the fragmentation of $`AdS_2`$, etc.
Nevertheless, a clearer picture of the relationship between $`AdS_2`$ and $`CFT_1`$ has recently emerged -. In particular, it was noted in that any theory of quantum gravity in the background of $`AdS_2`$ contains not only an $`SL(2,R)`$ symmetry, but also the symmetries of a Virasoro algebra. This may imply that any boundary theory holographically dual to $`AdS_2`$ should also contain the symmetries of a Virasoro algebra. In , it was shown that any classical scale-invariant mechanics of one variable exhibited not only conformal invariance, but also the symmetries of a Virasoro algebra. In a few instances, this result was extended to a quantum mechanical statement of the commutation relations between generators (as opposed to a statment regarding Poisson bracket relations between generator functions). However, a general statement regarding multi-particle systems and operator algebras was still missing.
In this paper we demonstrate that, under certain conditions, a theory of conformal quantum mechanics (of an arbitrary number of variables) exhibits the symmetries of two half- Virasoro algebras. If a further condition holds, then these two algebras can be combined to form a single Virasoro algebra.
### 1.1 Classical Generators
The conformal group in $`0+1`$ dimensions is $`SL(2,R)`$. The algebra can be written as
$$[D,H]=ı\mathrm{}H[D,K]=ı\mathrm{}K[H,K]=2ı\mathrm{}D$$
(1)
If one makes the identification
$$L_1=\frac{ı}{\mathrm{}}HL_0=\frac{ı}{\mathrm{}}DL_1=\frac{ı}{\mathrm{}}K,$$
(2)
then the conformal algebra is identical to the $`SL(2,R)`$ algebra formed by the global subalgebra of the Virasoro algebra (note that there are other ways of embedding the conformal algebra in the Virasoro algebra).
In it was shown that if the conformal algebra of a theory of classical conformal mechanics is embedded in the global subalgebra of the Virasoro algebra as shown above, then generators of a full Virasoro algebra can be found.
In the examples discussed in , one can see that the Virasoro generators can be written in the form <sup>2</sup><sup>2</sup>2We thank Sangmin Lee for making this point.
$$L_m=L_0^{1+m}L_1^m.$$
(3)
One can easily show at the level of Poisson brackets that if $`\{L_0,L_1\}_{PB}=L_1`$, then the generators defined above satisfy the algebra
$`\{L_m,L_n\}_{PB}`$ $`=`$ $`(1+m)(n)L_0^{1+m+n}L_1^{mn1}\{L_0,L_1\}_{PB}`$ (4)
$`+(1+n)(m)L_0^{1+m+n}L_1^{mn1}\{L_1,L_0\}_{PB}`$
$`=`$ $`[(1+m)(n)(1+n)(m)]L_{m+n}`$
$`=`$ $`(mn)L_{m+n}.`$
Thus, any system of classical mechanics with scale-invariance also exhibits the symmetries of a full Virasoro algebra (at the level of Poisson brackets). This is a generalization of the work done in , as it applies to the case of an arbitrary number of variables. However, the analysis is still only classical in nature. In order to make a statment about conformal quantum mechanics, one must also understand the issues related to the normal ordering of operators.
### 1.2 Quantum Generators
Suppose that the generators $`L_0`$, $`L_1`$ and $`L_1`$ satisfy the $`SL(2,R)`$ algebra given by
$$[L_0,L_1]=L_1[L_0,L_1]=L_1[L_1,L_1]=2L_0.$$
(5)
If the operator $`L_1`$ is invertible, then we may consistently define the operator $`L_1^1`$. This operator has the commutation relation $`[L_0,L_1^1]=L_1^1`$. We then make the ansatz
$$L_m=L_0(L_1^1L_0)^m=(L_0L_1^1)^mL_0m0.$$
(6)
One can easily see that
$`[L_m,L_n]`$ $`=`$ $`L_0(L_1^1L_0)^mL_0(L_1^1L_0)^nL_0(L_1^1L_0)^nL_0(L_1^1L_0)^m`$ (7)
$`=`$ $`(mL_{m+n}+L_0L_{m+n})(nL_{m+n}+L_0L_{m+n})`$
$`=`$ $`(mn)L_{m+n},`$
where $`m`$,$`n0`$. Similarly, one finds that
$`[L_m,L_1]`$ $`=`$ $`L_0(L_1^1L_0)^mL_1L_1L_0(L_1^1L_0)^m`$ (8)
$`=`$ $`L_0(L_1^1L_0)^{m1}(1+L_0)L_0^2(L_1^1L_0)^{m1}+L_0(L_1^1L_0)^{m1}`$
$`=`$ $`2L_{m1}+(m1)L_{m1}`$
$`=`$ $`(m+1)L_{m1}.`$
It is thus clear that, given a system of conformal mechanics with $`SL(2,R)`$ symmetry, the invertibility of $`L_1`$ implies the existence of a half-Virasoro algebra consisting of all Virasoro modes $`L_m`$ with $`m1`$. In particular, a scale-invariant theory will also exhibit conformal invariance (with $`K=DH^1D`$), provided that there are no zero-energy states. This can be seen by defining $`L_1=\frac{ı}{\mathrm{}}H`$ and $`L_0=\frac{ı}{\mathrm{}}D`$ and applying the above results. Since $`H`$ is hermitian, $`L_1`$ will be invertible if $`H`$ has no eigenvalues which are zero. From the above construction one will find a half-Virasoro algebra with $`L_1=\frac{ı}{\mathrm{}}K`$.
It is not entirely clear how this statement is reconciled with , where it was shown that a scale-invariant Hamiltonian (with only quadratic momentum dependence) exhibits conformal invariance only if it admits a closed homothety. However, studied the case where the special conformal symmetry generator $`K`$ depended only on the position operators $`X`$, and not on their momentum conjugates. In the construction given above, however, it is clear that $`K`$ will generically depend on momentum.
In an exactly analogous manner, one can consider the situation where $`L_1`$ is invertible (such that one can consistently define $`L_1^1`$). In this case, we make the ansatz
$$L_m=L_0(L_1^1L_0)^m=(L_0L_1^1)^mL_0m0.$$
(9)
One finds that these operators yield a half- Virasoro algebra
$$[L_m,L_n]=(mn)L_{m+n}$$
(10)
which closes for $`m1`$.
We might then ask whether it is possible for these two half-Virasoro algebras to be united into a single full Virasoro algebra. The key to this question is the overlap of these algebras, namely the generators $`L_1`$, $`L_0`$ and $`L_1`$. We will demand that these generators are the same in both half-Virasoro algebras. This implies the conditions
$$L_0L_1^1L_0=L_1L_0L_1^1L_0=L_1.$$
(11)
If $`L_0`$ is also invertible, then these two conditions are actually identical.
When (11) is satisfied, we can actually find a full Virasoro algebra whose quantum generators are given by
$`L_m`$ $`=`$ $`L_0(L_1^1L_0)^m=(L_0L_1^1)^mL_0m0`$
$`L_m`$ $`=`$ $`L_0(L_1^1L_0)^m=(L_0L_1^1)^mL_0m0`$ (12)
One would like to show that the generators defined in this way satisfy the full quantum Virasoro algebra $`[L_m,L_n]=(mn)L_{m+n}+f(m)\delta _{mn}`$. For the case where either $`m`$,$`n0`$ or $`m`$,$`n0`$, the algebra is obviously satisfied. Consider the case $`m>0`$, $`n<0`$, $`m+n>0`$.
$`[L_n,L_m]`$ $`=`$ $`[(L_0L_1^1)^nL_0,L_0(L_1^1L_0)^m]`$ (13)
$`=`$ $`(L_0L_1^1)^{n1}L_1L_0(L_1^1L_0)^m(L_0L_1^1)^mL_0L_1(L_1^1L_0)^{n1}`$
$`=`$ $`L_{n+1}L_{m1}L_{m1}L_{n+1}(L_0L_1^1)^{n1}L_{m1}`$
$`L_{m1}(L_1^1L_0)^{n1}`$
$`=`$ $`[L_{n+1},L_{m1}]2L_{m+n}.`$
After anchoring the recursion relation with $`[L_1,L_m]=(m+1)L_{m1}`$, one finds that the Virasoro algebra is satisfied for all $`m`$ and $`n`$ in the range of interest. A similar argument shows that this is also true for $`m+n<0`$. Thus, one sees that any theory of conformal quantum mechanics which satisfies the above conditions also has the symmetries of a full Virasoro algebra. In addition, it is clear from the above calculation that the algebra has no central charge.
Given a theory with operators $`L_0`$ and $`L_1`$ which satisfy the appropriate commutation relations, where $`L_1`$ is invertible, one can simply define $`L_1=L_0L_1^1L_0`$. The invertiblity of $`L_1`$ implies the existence of two half-Virasoro algebras, and the invertibilty of $`L_0`$ would further imply that these two algebras combine to form a single Virasoro algebra.
Note that, under the (2), $`L_{1,0,1}`$ are anti-hermitian. It is clear that if this is the case, then the $`L_m`$’s defined by (1.2) are all anti-hermitian. However, in the context of $`1+1`$ conformal field theory one usually defines $`L_m`$’s which satisfy the hermiticity property $`L_n^{}=L_n`$. If this property is satisfied by $`L_{1,0,1}`$, then the $`L_m`$’s defined by (1.2) satisfy it as well.
#### 1.2.1 A simple example
Consider the Hamiltonian of a non- relativistic free particle in one dimension, $`H=\frac{1}{2}p^2`$ (where the coordinate has been rescaled in order to absorb the mass into the conjugate momentum). If one writes the standard dilatation operator $`D=\frac{1}{4}(rp+pr)`$, one finds that $`H`$ and $`D`$ satisfy the standard commutation relation
$$[D,H]=ı\mathrm{}H.$$
(14)
We may project out of the Hilbert space all states whose wavefunctions are even under $`rr`$ (this is, in fact, exactly what we would do if we treated this as the Hamiltonian for the radial wavefunction of a free particle in three dimensions with no angular momentum). The remaining energy eigenstates have wavefunctions of the form $`\psi (r)=A\mathrm{sin}(kr)`$ with energies $`E=\frac{\mathrm{}^2k^2}{2m}`$. One finds that there is a continuum of eigenstates with arbitrary positive energy (as scale-invariance demands), but there is no normalizable zero-energy state (as the wavefunction for such a state would vanish everywhere). Therefore, one may invert the Hamiltonian and define the operator
$$K=DH^1D=\frac{1}{2}r^2\frac{3}{8}\mathrm{}^2\frac{1}{p^2}.$$
(15)
This operator differs from the usual special conformal symmetry generator ($`K=\frac{1}{2}r^2`$), but nevertheless is well-defined and allows the conformal algebra to close. By writing $`K`$ as a momentum-space operator (substituting $`r=ı\mathrm{}\frac{}{p}`$), one can easily show that $`K`$ also has no normalizable eigenstates with zero eigenvalue. Using the embedding defined in (2), one can verify straightforwardly that the conditions (11) are satisfied. This means that one can write the quantum generators of a full Virasoro algebra (1.2).
In many constructions of conformal quantum mechanics it is common for the operator $`H^{}=\frac{1}{2}(HK)`$ (in our conventions) to be used as the Hamiltonian, due to the fact that it has a discrete spectrum. In the example given above, one finds
$$H^{}=\frac{1}{2}(HK)=\frac{1}{4}p^2+\frac{1}{4}r^2+\frac{3}{16}\frac{\mathrm{}^2}{p^2}.$$
(16)
Under the phase space rotation $`\stackrel{~}{r}=p`$, $`\stackrel{~}{p}=r`$, we see that this is the Hamiltonian for a bound particle (indeed, it is a Hamiltonian of the Calogero-Moser form), and thus has a discrete spectrum which is bounded from below.
## 2 Relation to $`AdS_2`$
Perhaps there is some vital bit of evidence which eludes us. Belloq, “Raiders of the Lost Ark”
#### 2.0.1 Calogero models
The naive $`AdS/CFT`$ correspondence suggests that there is a boundary conformal quantum mechanics which is dual to quantum gravity on the space $`AdS_2\times S^2\times T^6`$. In , it was suggested that this bosonic part of the boundary conformal quantum mechanics is actually given by the $`N`$ particle Calogero model with Hamiltonian
$$H=\frac{1}{2}\underset{i=0}{\overset{N}{}}p_i^2+\underset{i<j}{}\frac{\lambda }{|r_jr_i|^2}.$$
(17)
It is well-known that this system has conformal symmetry. It is also well-known that the Hamiltonian for this system has no ground state. The results above thus indicate that the $`N`$ particle Calogero Model respects the symmetries of a half-Virasoro algebra. To determine if the other half of the Virasoro algebra is also present, one would have to calculate $`L_1=L_0L_1^1L_0`$ and determine its spectrum, a more complicated task which will not be attempted in this work.
The Calogero model itself is a limit of the more general Calogero-Moser model, whose Hamiltonian is given by
$$H=\frac{1}{2}\underset{i=0}{\overset{N}{}}p_i^2+\underset{i<j}{}\frac{\lambda }{|r_jr_i|^2}+\frac{k}{2}\underset{i=0}{\overset{N}{}}r_i^2.$$
(18)
It is already known that the Calogero-Moser model exhibits the symmetries of a Virasoro algebra, with the embedding $`L_0=H`$. Note that this is not the embedding which we discussed earlier. In fact, the Calogero-Moser model is not conformally invariant, as its Hamiltonian does not have a continuous spectrum. But in the limit $`k0`$, the Calogero-Moser model reduces to the Calogero model in question. However, in that limit generators of the Virasoro algebra used in become infinite (as they contain negative powers of the constant $`k`$). This is not unexpected (when $`H`$ is related to $`L_0`$) because the Calogero-Moser model has a discrete spectrum, whereas the Calogero model has a continuous spectrum.
However, in the construction given by (2) and (1.2), the generators of the half-Virasoro algebra are well-defined. If the Calogero model is indeed related to the boundary conformal theory which is dual to quantum gravity on $`AdS_2\times S^2\times T^6`$, then it is interesting to note that it contains at least the symmetries of a half-Virasoro algebra, whereas it is known that quantum gravity on $`AdS_2`$ contains the symmetries of a Virasoro algebra.
#### 2.0.2 Probing with a test particle
One may also consider the quantum mechanics describing a test-particle in the background of $`adS_2`$. The Hamiltonian for such a particle was discussed in , where it was found that in the non- relativisitic near-horizon limit the bosonic part of the Hamiltonian reduced to that of DFF conformal quantum mechanics ,
$$H=\frac{p^2}{2}+\frac{2J^2}{r^2}.$$
(19)
It is clear that this Hamiltonian has no normalizable zero-energy eigenstates, thus indicating that it respects the symmetries of a half-Virasoro algebra. Determining the invertibility of $`L_1`$ (or equivalently, $`K`$) is more complicated, and will not be attempted here. However, in the case where $`J=0`$, the system simply reduces to the free particle example discussed earlier, where it is clear that $`K`$ is invertible, and thus that the other half of the Virasoro algebra can also be constructed. One should note that the form of the quantum generators found here is identical to the form of the classical generators of the Virasoro algebra found in for the same system (up to the terms related to normal ordering). We see now that these generators not only generate a classical symmetry under Poisson brackets, but also generate a full quantum symmetry under commutators.
## 3 Discussion and Further Research
Eaton: We have top men working on it right now. Indy: Who? Eaton: … Top men. “Raiders of the Lost Ark”
We have shown that a scale-invariant quantum mechanical system with no zero-energy states exhibits not only $`0+1`$ conformal invariance, but also the symmetries of a half-Virasoro algebra (defined as the algebra $`L_m`$ for $`m1`$). If the operator $`L_1`$ defined by this half-Virasoro algebra is also invertible, then the system also exhibits the symmetries of another half-Virasoro algebra (given by the generators $`L_m`$ with $`m1`$). If these two half-Virasoro algebras have identical generators $`L_0`$, $`L_1`$ and $`L_1`$, then the two half-Virasoro algebras in fact form a single Virasoro algebra with no central charge.
It is noteworthy that these constructions of the Virasoro algebra exhibit zero central charge. From the $`AdS_2/CFT_1`$ correspondence, one would expect superconformal quantum mechanics to be the boundary theory dual to string theory on $`AdS_2\times S^2\times T^6`$. The gravity theory on $`AdS_2`$ has the symmetries of a Virasoro algebra, as it is a 2D theory of gravity on a strip. This Virasoro algebra should not have a central charge when the effects of ghosts are also included. It had been speculated that this might imply that the dual superconformal quantum mechanics has the symmetries of a Virasoro algebra with no central charge. But in , it was shown that the construction of the Virasoro algebra given in was actually contained in a larger $`w_{\mathrm{}}`$ algebra. Calogero models have also been shown to exhibit the symmetries of a $`w_{\mathrm{}}`$ algebra; the properties of these systems were studied extensively in . For the case of a particle in the background of $`AdS_2`$, it was shown in that the central charge associated with the Virasoro algebra of coordinate diffeomorphisms of $`AdS_2`$ is replicated by the unique central extension of the $`w_{\mathrm{}}`$ algebra. However, our result seems to be more general because it holds for conformal theories which are not necessarily connected to $`AdS_2`$. But it seems not unreasonable to expect that a fuller examination of theories of conformal quantum mechanics of an arbitrary number of variables will also show that the Virasoro algebras found here can be extended to $`w_{\mathrm{}}`$ algebras. If so, perhaps a study of the central extensions of this $`w_{\mathrm{}}`$ algebra (and a comparison of this with the central charge of the diffeomorphism algebra of $`AdS_2`$) will shed more light on the connection between $`AdS_2`$ and conformal quantum mechanics.
There are several directions in which further work can proceed. It would be very interesting to understand the circumstances under which (11) held more systematically. So far, work has focused only on the bosonic theory; further investigation of supersymmetric generalizations of these constructions is required. It is also important to better understand the physical significance of the half-Virasoro algebras which have been found here. Finally, the connections found here between conformal quantum mechanics and the Virasoro algebra further strengthen the notion that there is a deep connection between conformal quantum mechanics and $`1+1`$ conformal field theory . The relationship between this connection and the $`AdS/CFT`$ correspondence should be the subject of future work.
Acknowledgements
We gratefully acknowledge S. Cullen, S. Kachru, S. Lee, M. Schulz, S. Shenker, A. Strominger and N. Konstantine Toumbas for useful discussions. We would also like to thank Caltech (where this work was first conceived) for its hospitality. This work has been supported by NSF grant PHY-9870115.
|
no-problem/0003/hep-ph0003089.html
|
ar5iv
|
text
|
# Inflation in Models with Large Extra Dimension Driven by a Bulk Scalar Field
## I Introduction
Theories with extra dimensions where our four dimensional world is a hypersurface (3-brane) embedded in a higher dimensional space (the bulk) have been the focus of intense scrutiny during the last two years. It is generally assumed that in this picture the standard model particles are in the brane whereas gravity and perhaps other standard model singlets propagate in the bulk. The main motivation for these models comes from string theories where the Horava-Witten solution of the non perturbative regime of the $`E_8\times E_8`$ string theory provided one of the first models of this kind (although from a phenomenological point of view the idea was discussed early on by several authors ). Additional interest arose from the observation that the bulk size could be as large as a millimeter leading to new observable deviations from Newton’s inverse square law at the millimeter scale, where curiously enough Newton’s law remains largely untested.
A key formula that relates the string scale to the radius of the large extra dimension in these models is:
$$M^{2+n}R^n=M_P\mathrm{}^2,$$
(1)
where $`R`$ is the common radius of the $`n`$ extra dimensions, $`M`$ is the string scale and $`M_P\mathrm{}`$ is the Planck scale. For $`R`$ millimeter, $`M`$ can be as low as few TeV thereby providing another resolution of the long standing hierarchy problem. This has been another motivation for these theories.
While this picture leads to many interesting consequences for collider and other phenomenology, it seems to require drastic rethinking of the prevailing view of cosmology . In particular, one runs into a great deal of difficulty in implementing the standard pictures of inflation. For instance, if the inflaton is required to be a brane field, its mass becomes highly suppressed, making it difficult to understand the reheating process. Also, a wall inflaton makes it hard to understand the density perturbations observed by COBE .
As a way to solve these problems, Arkani-Hamed et al proposed a scenario where it was assumed that inflation occurs before the stabilization of the internal dimensions. With the dilaton field playing the role of the inflaton field, they argued that early inflation, when the internal dimensions are small, can successfully overcome the above complications. Another possible way out was proposed in Ref. , where it was suggested that the brane could be out of its stable point at early times, and inflation is induced on the brane by its movement through the extra space. Still other ideas are found in .
The common point of the first two scenarios is that they share the same basic assumption of an unstable extra dimension. However, it is still possible that, due to some dynamical mechanism, the extra dimension gets stabilized long before the Universe exited from inflation, as in some scenarios in Kaluza-Klein (KK) theories, where the stabilization potential is generated by the Casimir force . Other possible sources for this stabilizing potential could be present in brane-bulk theories; for instance, the formation of the brane at very early times may give rise to vacuum energy that plays a role in eventually stabilizing the extra dimension. It therefore appears to us that it would be of interest to seek inflationary scenarios where stabilization occurs before inflation ends. Clearly in this case, one cannot expect the dilaton field to play the role of the inflaton field and we need to find a new way to generate inflation, that can solve the problems faced by the brane-inflaton.
With this background, we work in the framework of the Arkani-Hamed-Dimopoulos-Dvali (ADD) scenario where stabilization of the internal dimensions occurred long before the end of inflation. The main new ingredient of our work is that the inflaton is a bulk field rather than a field in the brane. We give qualitative arguments to show that this provides a different way out of the problems introduced by a brane inflaton. We should stress that while inflation proceeds, in principle, as in the former KK theories, the postinflationary era has a different behaviour, mainly due to the fact that reheating must take place on the boundaries where all matter resides. This raises interesting questions regarding the reheating process since, naively, one might expect a bulk inflaton to reheat the bulk instead of the brane by releasing all its energy into the internal space in the form of gravitons. However, as we will discuss, bulk heating is much less efficient than brane heating, thereby circumventing this possibility.
The paper is organized as follows: in section II, we review the problems with brane inflation and comment on the possible solutions, leaving the analysis of the present proposal (a bulk inflaton with a stable bulk) for section III. In section IV, we discuss density perturbations. In section V, we address some details of the reheating era to explore the puzzles introduced by the possible production of gravitons. We close our discussion with some remarks.
## II Brane inflation
To see how letting the inflaton arise from the brane fields leads to problems , let us consider a typical chaotic inflation scenario . If the highest scale in the theory is $`M`$, the inflaton potential during inflation cannot be larger than $`M^4`$, regardless of the number of extra dimensions. Since successful inflation (the slow roll condition) requires that the inflaton mass be less than the Hubble parameter, which is given as
$$H\simeq \sqrt{\frac{V(\varphi )}{3M_{Pl}^2}},$$
(2)
we have the inequality
$$m\lesssim H\sim M^2/M_{Pl}.$$
(3)
For $`M\sim 1`$ TeV, one then gets the bound $`m\lesssim 10^{-3}`$ eV, which is a severe fine tuning constraint on the parameters of the theory. It further implies that inflation occurs on a time scale $`H^{-1}`$ much greater than $`M^{-1}`$. As emphasized by Kaloper and Linde, this is conceptually very problematic since it requires that the Universe should be large and homogeneous enough from the very beginning so as to survive the large period of time from $`t=M^{-1}`$ to $`t=H^{-1}`$.
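To put in numbers (taking $`M_{Pl}\sim 10^{19}`$ GeV for the estimate), the bound reads
$$m\lesssim H\sim \frac{M^2}{M_{Pl}}\sim \frac{(10^3\;\mathrm{GeV})^2}{10^{19}\;\mathrm{GeV}}\sim 10^{-13}\;\mathrm{GeV}\sim 10^{-4}\;\mathrm{eV},$$
of the same order as the $`10^{-3}`$ eV figure quoted above, i.e. an inflaton some fifteen orders of magnitude lighter than the fundamental scale.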
Moreover, for chaotic inflation with $`V(\varphi )=\frac{1}{2}m^2\varphi ^2`$, we get for the density perturbations
$$\frac{\delta \rho }{\rho }\simeq 50\,\frac{m}{M_{Pl}}\sim 10^{-31}.$$
(4)
For the case where the $`\lambda \varphi ^4`$ term dominates the density one gets the same old fine tuning condition $`\frac{\delta \rho }{\rho }\sim \lambda ^{1/2}`$. Assuming hybrid inflation, with the potential $`V(\varphi ,\sigma )=\frac{1}{4\lambda }\left(M^2-\lambda \sigma ^2\right)^2+\frac{1}{2}m^2\varphi ^2+g^2\varphi ^2\sigma ^2`$, does not improve those results, since it needs either a value of $`m`$ six orders of magnitude smaller or a strong fine tuning of the parameters to match the COBE result $`\frac{\delta \rho }{\rho }\sim 10^{-5}`$.
There are two possible ways to overcome this problem. First, as emphasized in Ref. , we can imagine that during the inflationary era the extra dimensions were as small as $`M^{-1}`$. Thus, instead of Eq. (2) we will get
$$H^2=\frac{V(\varphi )}{3M^2};$$
(5)
which naturally removes the suppression (3). However, one cannot allow the extra dimension to grow considerably during inflation since large changes in the internal size would significantly affect the scale invariance of the density perturbations. Therefore the radius of the extra dimension must remain essentially static while the Universe expands. After inflation ends, the extra dimension should grow to its final size, which may, however, produce a contraction period on the brane . In this scenario, the radion should slow roll during inflation and could be identified as the inflaton. Nevertheless, it also poses some complications for the understanding of reheating since the radion is long lived, and its mass could be very small (its mass is bounded from below by $`10^{-3}`$ eV).
Here, we consider the second possibility, where the dynamics of the radion stabilizes the radius of the extra dimension before inflation ends ($`\tau _{stab}\ll \tau _{inf}`$). There are examples of some KK inspired theories where this happens. In such a theory, a different way around the above problems is needed. Notice that, in this case, the scale invariance of density perturbations requires that stabilization occurs long before the last 80 e-foldings or so. Clearly, in this case the radion cannot play the role of the inflaton since it will not slow roll. As we will discuss in the next section, if a bulk scalar field plays the role of the inflaton, a simple solution to the above problems may be given. One must however investigate the question of reheating carefully in this model. Another important question in this model is the origin of stabilization of the extra dimension. We do not address this difficult question here but simply assume the condition that $`\tau _{stab}\ll \tau _{inf}`$.
## III Inflation with a bulk scalar field
Let us now discuss the picture of inflation, when the inflaton is a higher dimensional scalar field. To keep things simple, we will assume only a single extra dimension. However, we stress that our results hold for any number of extra dimensions as long as the inflaton propagates in all of them.
Let us start by assuming that the extra dimensions are already stabilized by some (yet unknown) dynamical mechanism . As the inflaton is now a bulk field, we will further assume that it is homogeneous along the extra dimensions, just as in the former KK theories. This is another way to state the perfect fluid assumption for the $`\mathrm{\Phi }`$ field in five dimensions (i.e. $`T_{05}=0`$, where $`T_{05}`$ is one of the components of the energy momentum tensor). Obviously this will make our theory of inflation similar to those models. However, what will make our theory different from the usual KK theories is the fact that matter is attached to the branes, and that will affect the inflaton decay in an essential manner. Notice that the condition $`T_{05}=0`$ makes the inflaton a zero mode, which is also a necessary condition if we want to reproduce the ADD scenario at late times (a flat and factorizable geometry). Therefore, in the effective four dimensional theory, the inflaton field $`\tilde{\varphi}`$ and the bulk field $`\mathrm{\Phi }`$ are related by $`\tilde{\varphi}=\sqrt{R}\mathrm{\Phi }_0`$. Notice that these assumptions are consistent as long as the brane densities remain smaller than $`M^4`$. If brane densities were large, one would have to consider the branes as sources for the metric in the Einstein equations and one would depart from the ADD picture towards a Randall-Sundrum type nonfactorizable geometry. We will not consider such scenarios here. Some ideas in this regard can be found in Ref. .
Once the extra dimension is stable, one gets, in the effective four dimensional theory, the usual form of the Hubble parameter as
$$H^2=\frac{V_{eff}}{3M_{Pl}^2};$$
(6)
where the effective four dimensional potential is defined by
$$V_{eff}=RV_{5D}=\left(\frac{M_{Pl}^2}{M^3}\right)V_{5D}.$$
(7)
As anticipated, this relation is what enforces the rescaling of the inflaton mentioned above. Now we focus on the implications of this formula.
First, since the inflaton is now a bulk field, the upper bound on the five dimensional potential is $`M^5`$ (instead of $`M^4`$ for the case of the brane inflaton) and the effective potential has the upper bound $`RM^5=M^2M_{Pl}^2`$. Therefore, one gets $`m\lesssim H\lesssim M`$, which does not require a superlight inflaton. This also keeps the explanation of the flatness and horizon problems as usual, since now the time for inflation could be as short as in the standard theory. Thus, our bulk field naturally overcomes the problems noted by Lyth and by Kaloper and Linde.
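Explicitly, inserting the bound $`V_{eff}\lesssim M^2M_{Pl}^2`$ into Eq. (6) gives
$$H^2=\frac{V_{eff}}{3M_{Pl}^2}\lesssim \frac{M^2}{3},$$
so the slow roll condition now only demands $`m\lesssim H\lesssim M/\sqrt{3}`$, an inflaton mass at the fundamental scale rather than in the sub-eV range.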
To proceed further, let us assume the following five dimensional potential for the bulk field:
$$V_{5D}(\mathrm{\Phi })=\frac{1}{2}m^2\mathrm{\Phi }^2+\frac{\lambda }{4M}\mathrm{\Phi }^4.$$
(8)
The effective potential that drives inflation on the brane can be derived from the above equation to be
$$V_{eff}(\tilde{\varphi})=\frac{1}{2}m^2\tilde{\varphi}^2+\frac{\tilde{\lambda}}{4}\tilde{\varphi}^4,$$
(9)
where $`\tilde{\lambda}:=\frac{M^2}{M_{Pl}^2}\lambda `$ is a naturally suppressed coupling. Now, as in the old fashioned chaotic scenario, inflation will start in those small patches of size $`H^{-1}`$ where the effective inflaton reaches a homogeneous value $`\tilde{\varphi}_c\sim M_{Pl}`$. However, because of the scaling, this requires for the bulk field only $`\mathrm{\Phi }_c\sim M^{3/2}`$, which is a natural value in our picture.
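The suppression of $`\tilde{\lambda}`$ can be traced directly through the rescaling $`\tilde{\varphi}=\sqrt{R}\mathrm{\Phi }`$ together with $`R=M_{Pl}^2/M^3`$ from Eq. (1):
$$R\,\frac{\lambda }{4M}\mathrm{\Phi }^4=\frac{\lambda }{4MR}\,\tilde{\varphi}^4=\frac{\lambda }{4}\frac{M^2}{M_{Pl}^2}\,\tilde{\varphi}^4,$$
so the quartic self-coupling of the effective inflaton is automatically reduced by the large bulk volume, with no tuning involved.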
We wish to note parenthetically that if the $`\mathrm{\Phi }^4`$ term dominates the energy density (i.e. $`m\ll M`$), then $`\tilde{\varphi}`$ could develop a vacuum expectation value, in which case an interesting connection between the scales $`M,m`$ and $`v_{ew}`$ can emerge as follows. We can have the inflaton couple to matter fields in the brane (which is needed to reheat the brane Universe), via the following term:
$$hM^{\frac{1}{2}}\mathrm{\Phi }\chi ^2\delta (y),$$
(10)
where $`h`$ is a dimensionless coupling constant and $`\chi `$ is the brane Higgs field. When $`\mathrm{\Phi }`$ develops a vacuum expectation value, the last term will contribute to the mass term, $`\mu _0^2`$, in the Higgs potential, which should be of the order of the weak scale. From the potential given in (8), we then get the constraint
$$\mu _0^2=\frac{h}{\lambda ^{1/2}}mM\sim v_{ew}^2.$$
(11)
Now assuming that $`h,\lambda \sim 1`$ and $`m\sim 10`$ GeV $`\ll M`$, we get $`M\sim 10^2`$ TeV, which is consistent with the strongest experimental limit .
## IV Density perturbation
The calculation of the density perturbation proceeds as in the usual four dimensional theories. Let us write it down in terms of the five dimensional potential:
$$\frac{\delta \rho }{\rho }\simeq \frac{\left(V_{eff}(\tilde{\varphi}_c)\right)^{3/2}}{M_{Pl}^3\left(\partial V_{eff}/\partial \tilde{\varphi}_c\right)}=\left(\frac{1}{M_{Pl}M^3}\right)\frac{\left(V_{5D}(\mathrm{\Phi }_c)\right)^{3/2}}{\left(\partial V_{5D}/\partial \mathrm{\Phi }_c\right)}.$$
(12)
Because the quartic term is suppressed for small values of $`M`$, we first assume that the mass term drives the inflation. Nevertheless, as expected, we get $`\frac{\delta \rho }{\rho }\sim m/M_{Pl}`$, which is again very small for $`m\lesssim M`$. This result is similar to what one obtains in the brane inflaton models. On the other hand, if the quartic term dominates the density, then
$`{\displaystyle \frac{\delta \rho }{\rho }}\stackrel{~}{\lambda }^{1/2}=\left({\displaystyle \frac{M}{M_P\mathrm{}}}\right)\lambda ^{1/2}.`$and models with only large values of $`M`$ would be satisfactory. Since our interest here is in models with large extra dimensions, we consider $`M`$ in the multi-TeV range and therefore we must seek ways to solve this problem. In any case it is gratifying that a single bulk scalar field seems to solve two of the major problems faced by the brane inflaton models.
In order to improve the situation with respect to $`\frac{\delta \rho }{\rho }`$ in this model, we extend it to include an extra scalar field $`\sigma `$ and consider a bulk potential of the same form as is used in implementing the hybrid inflation picture:
$$V(\varphi ,\sigma )=\frac{M}{4\lambda }\left(M^2-\frac{\lambda }{M}\sigma ^2\right)^2+\frac{m_0^2}{2}\varphi ^2+\frac{g^2}{M}\varphi ^2\sigma ^2.$$
(13)
It is easy to check that inflation will require $`\varphi _c^2\gtrsim M^3/2g^2`$ (see below). Therefore, our effective inflaton should be $`\tilde{\varphi}_c\simeq M_{Pl}/\sqrt{2}g`$, just as expected. One then uses (12) to get
$$\frac{\delta \rho }{\rho }\simeq \left(\frac{g}{2\lambda ^{3/2}}\right)\frac{M^3}{m_0^2M_{Pl}}.$$
(14)
If, for instance, we set in the last equation the values $`M\sim 10^2`$ TeV and $`m_0\sim m\sim 10`$ GeV, we find
$$\frac{\delta \rho }{\rho }\simeq \left(\frac{g}{2\lambda ^{3/2}}\right)\times 10^{-5},$$
(15)
which is just the COBE result.
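The critical value used above follows from the effective $`\sigma `$ mass in Eq. (13): expanding about $`\sigma =0`$,
$$m_\sigma ^2=-M^2+\frac{2g^2}{M}\varphi ^2,$$
which turns positive, trapping $`\sigma `$ at the origin during inflation, precisely when $`\varphi ^2>M^3/2g^2`$; the rescaling $`\tilde{\varphi}=\sqrt{R}\varphi `$ with $`\sqrt{R}=M_{Pl}/M^{3/2}`$ then gives $`\tilde{\varphi}_c\simeq M_{Pl}/\sqrt{2}g`$.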
Let us now compare our model with the case where one has hybrid inflation in the wall. In order to explain the density perturbation in the wall hybrid inflation models , an unpleasant fine tuning of either the mass of the inflaton field (to the level of $`m_0=10^{-10}`$ eV) or of the coupling constant (to the level of $`\lambda =10^{-8}`$) is essential. On the other hand, as we just showed, if the inflaton is a bulk field, no such fine tuning is required. We find this to be perhaps an interesting advantage of models with large extra dimensions over the conventional four dimensional inflation models.
Let us also remark that in the case of more than one extra dimension, the only change in our above results comes from substituting the volume of the extra space $`V_n`$ for $`R`$ throughout the analysis. This does not affect our conclusions, since the effective theory is still given in terms of the same effective coupling constants, although the effective inflaton becomes $`\sqrt{V_n}\mathrm{\Phi }`$. This rescaling affects neither our main expression (12) nor (14).
## V Reheating
The epoch of the Universe soon after inflation is called reheating. During this era, the inflaton is supposed to decay into matter, populating the Universe and reheating it to a temperature $`T_R`$, called the reheating temperature. Since many important phenomena of cosmology, e.g. baryogenesis, depend on the Universe being very hot, the value of $`T_R`$ is important. One also has to watch out for any unwanted particles that may be produced during the reheating, since they may create problems for the subsequent evolution of the Universe (as for instance is familiar from the study of gravitino production in supergravity theories). Reheating is followed by thermalization of the particles produced, so that conventional Friedmann expansion can begin subsequently. Again, thermalization is also dependent on $`T_R`$. Clearly, therefore, reheating is a very important aspect of any model of inflation. In this section, we discuss how it works in our model.
In our discussion, we will use the elementary theory of reheating, which is based on perturbation theory and in which the reheating temperature $`T_R`$ can be expressed in terms of the total decay rate of the inflaton, $`\mathrm{\Gamma }`$, and the Planck mass as follows: $`T_R\simeq 0.1\sqrt{\mathrm{\Gamma }M_{Pl}}`$ . It has been pointed out that this approach faces some limitations in terms of efficiency, and possible improvements have been suggested using a first stage of heating (called preheating) based on parametric resonance. We will not be concerned here with these extra subtleties and use perturbative reheating to get a crude idea of the postinflationary phase of the Universe and discuss how the Friedmann Universe emerges following the end of the reheating period.
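This estimate follows from balancing decay against expansion: reheating completes when $`\mathrm{\Gamma }\simeq H(T_R)`$, and in the radiation era $`H(T)\sim T^2/M_{Pl}`$ (up to $`g_{*}`$ factors), so
$$\mathrm{\Gamma }\sim \frac{T_R^2}{M_{Pl}}\quad \Longrightarrow \quad T_R\sim \sqrt{\mathrm{\Gamma }M_{Pl}},$$
with the prefactor of $`0.1`$ absorbing the numerical and $`g_{*}`$-dependent factors.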
An obvious problem that could possibly arise is that the coupling of the inflaton to the large number of KK modes of the graviton could induce a faster decay channel for the inflaton than the one into matter. If this happens, the bulk would be reheated while the brane would remain empty of matter, giving rise to a non-standard, undesirable Universe. As we show below, luckily this is not the case for our model.
When hybrid inflation ends, the field $`\sigma `$ quickly goes to one of its minima $`\sigma _\pm =\pm M^{3/2}/\sqrt{\lambda }`$. As a result, the mass term of the inflaton field receives a contribution $`g^2M^2/\lambda `$ which dominates over $`m_0^2`$. This leads us to conclude that the inflaton field, which will oscillate around its minimum and generate reheating, has a mass about an order of magnitude below $`M`$ (using the set of parameters chosen to explain the density perturbations). Therefore, the decay $`\varphi \to \chi \chi `$ is allowed, where $`\chi `$ is the Higgs field, for typical Higgs masses in the 100-200 GeV range. Let us stress that this process will take place only on the boundaries of the extra dimension (i.e. in the brane), making it physically different from the former KK theories, where the production of matter through inflaton decays occurs everywhere.
Following the steps of perturbative reheating theory, we can estimate the reheating temperature by calculating the decay rate of the inflaton field into two Higgses. That is
$$\mathrm{\Gamma }_{\varphi \to \chi \chi }\simeq \frac{M^4}{32\pi M_{Pl}^2m_\varphi }.$$
(16)
With an inflaton mass $`m_\varphi `$ around $`0.1M`$ we estimate the reheating temperature to be $`T_R>100`$ MeV. As would be desirable, this temperature is above that required for successful big bang nucleosynthesis.
The next point that needs to be investigated is the generation of gravitons by the excited modes of the $`\varphi `$ and $`\sigma `$ fields. If excessive graviton production drained away the energy stored in the inflaton field, it would lead to a low matter density compared to the graviton density, and matter might not reach a state of equilibrium. In the five dimensional language, this would lead to an expansion of the bulk rather than the brane. Notice that such processes are not dangerous in KK theories, where the compactification scale is very small and the excited modes decay preferentially into matter (in the bulk). Now that the radius of the extra space is large, a large number of KK gravitons, $`h`$, could be produced by a KK inflaton mode decaying into another excited mode plus a graviton: $`\varphi _n\to \varphi _lh_{n-l}`$, where $`n`$ and $`l`$ are the KK numbers, which are conserved. To estimate the amount of gravitons produced, we have to first estimate the production rate for $`\varphi _n`$, the excited modes of the inflaton, since only $`\varphi _n`$ decay can produce gravitons via the decay process just mentioned. The $`\varphi _n`$ modes are produced via collision processes, $`\varphi _0\varphi _0\to \varphi _n\varphi _n`$, the cross section for which is given by
$$\sigma _{\varphi _0\varphi _0\to \varphi _n\varphi _n}\simeq \lambda ^2\frac{M^2}{M_{Pl}^4};$$
(17)
for $`\sqrt{s}\sim M`$. This has to be compared with $`\mathrm{\Gamma }_{\varphi \to \chi \chi }`$ above (16). Due to the very different Planck mass dependence, it is easy to see that the $`\varphi _n`$ production is highly suppressed compared to the $`\varphi _0`$ decay to Higgs bosons. Once an excited mode has been produced, it will preferentially decay into gravitons. Although the rate for this process is very small:
$$\mathrm{\Gamma }_{\varphi _n\to \varphi _lh_{n-l}}\simeq \frac{m_nm_l^2}{12\pi M_{Pl}^2};$$
(18)
where $`m_n^2=m_\varphi ^2+n^2/R^2`$ is the mass of the excited mode, the presence of a large number of accessible modes in the final state will enhance this value up to
$$\mathrm{\Gamma }_{\varphi _n,Total}=\sum _l\mathrm{\Gamma }_{\varphi _n\to \varphi _lh_{n-l}}\simeq \frac{m_n^3}{12\pi M^2};$$
(19)
making the excited mode very short lived. We should stress, however, that the final products of the shower induced by a KK mode decaying into lighter modes will always include $`\varphi _0`$’s, which, as we stated already, decay only on the brane into Higgs fields and hence into matter. As a result, the KK excitations of the $`\varphi _0`$ will not be around to overclose the Universe, and the associated graviton production is also unlikely to be significant.
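A rough counting argument makes the enhancement in Eq. (19) plausible (order-one factors dropped): there are of order $`m_nR`$ kinematically accessible final modes, each with $`m_l\lesssim m_n`$, so
$$\sum _l\mathrm{\Gamma }_{\varphi _n\to \varphi _lh_{n-l}}\sim \frac{m_n^3\,(m_nR)}{12\pi M_{Pl}^2}=\frac{m_n^4}{12\pi M^3},$$
using $`R=M_{Pl}^2/M^3`$; for $`m_n\sim M`$ this reproduces (19). The $`M_{Pl}^{-2}`$ suppression of each individual channel is thus compensated by the enormous number of open channels.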
Thus, the final scenario that emerges is as follows: after exiting inflation, the inflaton will start moving relativistically, eventually producing both Higgs fields as well as its excited modes through the possible (suppressed) four-point self interactions. As the rate estimates above show, most of the reheat energy will pass to the brane in the form of matter, and a very small part will pass to the bulk in the form of gravitons produced through the fast decaying KK inflaton modes. As the Universe is still rapidly expanding at that stage, we could imagine that the density of those gravitons will be substantially diluted. The Higgs bosons produced by the inflaton decay will quickly decay to quarks and leptons, which will attain equilibrium via their strong and weak interactions. Friedmann expansion will resume, albeit starting with a lower temperature ($`T\sim 0.1`$ GeV) compared to the conventional grand unified theories.
We also point out that this scenario, naively, will not be affected by the presence of the $`\sigma `$ field of the hybrid inflation model.
## VI Remarks and discussion
We now conclude with a brief summary of our main results. Choosing a bulk scalar field as the source of the inflaton leads to several advantages: first, the fundamental scale $`M`$ can be in the multi-TeV range, which turns out to be the natural bound for the inflaton mass $`m_\varphi `$ and the Hubble constant $`H`$, in contrast with the brane inflaton models, where $`m_\varphi `$ and $`H`$ are oversuppressed. Assuming hybrid inflation and $`M`$ just above the current experimental limits, the COBE observation of $`\frac{\delta \rho }{\rho }`$ is also successfully explained without any fine tuning. We also note that even though the bulk inflaton has KK modes, the reheating process leads to a Universe dominated not by their mass but rather by the standard model particles in equilibrium. Finally, we mention that all the results of this paper remain unchanged when more extra dimensions are involved, provided that the inflaton propagates in all of the bulk and that the Friedmann equation (Eq. (2)) holds.
Acknowledgements. The work of RNM is supported by a grant from the National Science Foundation under grant number PHY-9802551. The work of APL is supported in part by CONACyT (México). The work of CP is supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP). We would like to thank S. Nussinov for several stimulating discussions on the reheating process.
# Compositional Inversion Symmetry Breaking in Ferroelectric Perovskites
## Abstract
Ternary cubic perovskite compounds of the form (A<sub>1/3</sub>A$`{}_{1/3}{}^{}{}_{}{}^{}`$A$`{}_{1/3}{}^{}{}_{}{}^{\prime \prime }`$)BO<sub>3</sub> and A(B<sub>1/3</sub>B$`{}_{1/3}{}^{}{}_{}{}^{}`$B$`{}_{1/3}{}^{}{}_{}{}^{\prime \prime }`$)O<sub>3</sub>, in which the differentiated cations form an alternating series of monolayers, are studied using first-principles methods. Such compounds are representative of a possible new class of materials in which ferroelectricity is perturbed by compositional breaking of inversion symmetry. For isovalent substitution on either sublattice, the ferroelectric double-well potential is found to persist, but becomes sufficiently asymmetric that minority domains may no longer survive. The strength of the symmetry breaking is enormously stronger for heterovalent substitution, so that the double-well behavior is completely destroyed. Possible means of tuning between these behaviors may allow for the optimization of resulting materials properties.
In the last decade, the extensive use of first-principles theoretical methods to study ferroelectric perovskite oxides has led to a greatly expanded understanding of the ferroelectric and piezoelectric properties of this important class of materials. Theoretical investigations of electronic, dynamical, and structural properties have been shown to be in good accord with experimental observations for the simple ABO<sub>3</sub> perovskites . Moreover, these studies provide microscopic insight into the ferroelectric instabilities, their relation to the long-range Coulomb interactions, and related questions about the origins of the piezoelectric response.
However, the materials of most interest for technological applications are generally not the simple ABO<sub>3</sub> perovskites, but solid solutions with stoichiometric substitutions of A or B metal atoms. Examples include PZT (PbZr<sub>1-x</sub>Ti<sub>x</sub>O<sub>3</sub>), currently one of the most widely used ferroelectrics, and PZN (PbZn<sub>1/3</sub>Nb<sub>2/3</sub>O<sub>3</sub>) and PMN (PbMg<sub>1/3</sub>Nb<sub>2/3</sub>O<sub>3</sub>) and their solid solutions with PbTiO<sub>3</sub>, which have recently been shown to have enormous piezoelectric response in single-crystal form . In fact, this class of materials shows great promise for the development of new materials having improved dielectric and electromechanical properties. Not only is there an enormous space of chemical compositions to explore, but it may also be possible to optimize the desired material properties by tuning the degree and type of compositional order for the desired application .
In particular, Eckstein has recently suggested that the artificial atomic-layer growth of compositionally ordered structures that break inversion symmetry might be especially exciting and fruitful in this regard. The resulting asymmetry of the ferroelectric double-well potential in such a material suggests the prospect of qualitatively new behavior, e.g., “self-poling” materials with tailored piezoelectric or dielectric properties.
In this Letter, we explore this exciting prospect by carrying out ab-initio theoretical calculations for several prospective model structures of this type. Specifically, we envision the artificial growth of materials of overall composition (A<sub>1/3</sub>A$`{}_{1/3}{}^{}{}_{}{}^{}`$A$`{}_{1/3}{}^{}{}_{}{}^{\prime \prime }`$)BO<sub>3</sub> or A(B<sub>1/3</sub>B$`{}_{1/3}{}^{}{}_{}{}^{}`$B$`{}_{1/3}{}^{}{}_{}{}^{\prime \prime }`$)O<sub>3</sub>, in which the three different cations alternate layer-by-layer along the ferroelectric direction as illustrated in Figs. 1(a-b) for A-site and B-site modulation respectively. We will assume that such atomic-layer control will become possible , and investigate the energy landscapes and ferroelectric properties of the resulting materials. As will be shown below, we find that the asymmetry may easily be strong enough for self-poling to occur, even in the case of isovalent substitution. Moreover, we find a surprisingly strong variation of the strength of the symmetry breaking with the strength of the compositional perturbation, allowing for a very wide tunability of ferroelectric properties. Experimentally, such tunability might be exploited by growing alloy structures with a cyclic modulation of the concentration variable. We thus find very strong motivations for the development of such materials, and hope that our work will encourage experimental efforts directed toward their synthesis.
We adopt bulk BaTiO<sub>3</sub> as a prototypical parent compound and construct a series of model systems that allow us to test for the effects of A-site vs. B-site and isovalent vs. heterovalent substitution. Specifically, we consider (Ba<sub>1/3</sub>Sr<sub>1/3</sub>Ca<sub>1/3</sub>)TiO<sub>3</sub> as an example of an “A-iso” system (isovalent substitution on the A site), Ba(Sc<sub>1/3</sub>Ti<sub>1/3</sub>Nb<sub>1/3</sub>)O<sub>3</sub> as an example of a “B-hetero” system (heterovalent substitution on the B site), and Ba(Ti<sub>1/3</sub>Zr<sub>1/3</sub>Hf<sub>1/3</sub>)O<sub>3</sub> as an intermediate “B-iso” case. We construct 15-atom supercells by tripling the primitive unit cell along the $`z`$ direction and cyclically alternating the identity of the A or B atom, as shown in Figs. 1(a-b). We assume that any ferroelectric order develops only along the $`z`$-direction, so that the material remains tetragonal, and only $`z`$ displacements need be considered. (It may be possible to realize this situation by appropriate choice of lattice-mismatched substrate for epitaxial growth, but it is not our purpose here to investigate this possibility.) We also assume perfect control of layer-by-layer composition, resulting in the ideal stacking of Figs. 1(a-b), and carry out our theoretical studies only at zero temperature. Much future work clearly remains to be done in relaxing these assumptions, but the study of simple prototypical systems is a natural first step.
The ab-initio calculations are carried out using the Vanderbilt ultra-soft pseudopotential scheme in the local-density approximation. Details of the pseudopotentials can be found in Ref. . Good $`k`$-point convergence is obtained using a (6,6,2) Monkhorst-Pack mesh, corresponding to the bulk (6,6,6) mesh, and a 25-Ry plane-wave cutoff is used throughout. Because experimental lattice constants $`a`$ and $`c`$ are not available for the ordered compounds of interest, the calculated theoretical equilibrium values were usually used. However, in some cases the $`c/a`$ ratio was fixed and only the cell volume was optimized, as detailed below.
In the presence of the broken inversion symmetry, it is not always easy to locate both local energy minima corresponding to the two ferroelectric ground states. We have found that the following procedure is quite reliable for finding both minima, if they exist. First, we place the atoms at the ideal cubic coordinates and calculate the pattern of forces, thus defining a “cubic force direction” $`\widehat{\xi }_{\mathrm{cf}}`$ in the 15-dimensional configuration space. We then use $`\widehat{\xi }_{\mathrm{cf}}`$ as the search direction for a line minimization, and denote the minimum along this line $`𝐫=u_{\mathrm{cf}}\widehat{\xi }_{\mathrm{cf}}`$ to be $`X`$, as shown in Fig. 1(c). Next, starting from $`X`$, we carry out a second line minimization along the line $`𝐫=𝐫_X+u_0\widehat{\xi }_0`$, where $`\widehat{\xi }_0`$ corresponds to the ferroelectric mode unit vector of bulk BaTiO<sub>3</sub>. As shown in Fig. 1(d), two scenarios can be identified. (i) If two minima $`A^{\prime }`$ and $`B^{\prime }`$ are found along this line, then we carry out a steepest-descent minimization from each, obtaining two distinct local minima $`A`$ and $`B`$. Fig. 1(e) shows the perturbed double-well potential plotted along a line connecting points $`A`$ and $`B`$ (direction $`\widehat{\xi }`$) in such a case. (ii) If only one minimum is found, as for the dashed line of Fig. 1(d), then the double-well potential has been destroyed, and a subsequent steepest-descent minimization identifies the unique minimum.
We first consider the application of this procedure to the A-iso system (Ba<sub>1/3</sub>Sr<sub>1/3</sub>Ca<sub>1/3</sub>)TiO<sub>3</sub>. The relaxation from the ideal cubic structure to the point $`X`$ illustrated in Fig. 1(c) is found to be dominated by ionic size effects: one sees a simple shift of the atoms toward smaller A-site cations and away from larger ones. Figure 1(d) then illustrates the minimization along $`𝐫=𝐫_X+u_0\widehat{\xi }_0`$ for one particular fixed cell volume (1343 a.u.<sup>3</sup>), indicating a modest asymmetry between minima $`A^{\prime }`$ and $`B^{\prime }`$. However, since it is well established that the presence or absence of a ferroelectric instability depends strongly on cell volume in perovskites , we have repeated this step for a series of fixed atomic volumes in Fig. 2. (The curves in Fig. 2 could thus be roughly interpreted as corresponding to a series of related compounds having differing tendencies towards ferroelectric instability.) The results illustrate the typical evolution of the double-well potential with increased ferroelectric tendency, from a slightly asymmetric single-well minimum ($`V`$=1123 a.u.<sup>3</sup>) to a slightly asymmetric double-well potential ($`V`$=1463 a.u.<sup>3</sup>). It happens that the theoretical equilibrium volume $`V`$=1193 a.u.<sup>3</sup> is a nearly borderline case; we find only a single minimum, but there are still two inflection points. In such a case, we modify the procedure of the previous paragraph by carrying out a steepest-descent minimization from each inflection point, and by doing so we succeed in finding the two distinct minima $`𝐫_A`$ and $`𝐫_B`$ in Fig. 1(e). For this case ($`V`$=1193 a.u.<sup>3</sup>), we find a soft-mode amplitude of 0.3 a.u. and an average well depth (relative to the saddle point $`S`$) of 0.48 mHa per 5-atom cell, fairly close to the corresponding values of 0.25 a.u. and 0.43 mHa, respectively, for tetragonal bulk BaTiO<sub>3</sub>. The energy difference between the twin wells is modest, 15-20% of the well depth.
Applying the same approach to the B-iso system Ba(Ti<sub>1/3</sub>Zr<sub>1/3</sub>Hf<sub>1/3</sub>)O<sub>3</sub>, we observe very similar behavior: a strong volume-dependence of the ferroelectric tendency, and a modest inversion asymmetry for both single-well and double-well volumes. In this case, the structure does not develop a double-well potential until the volume is increased to 1468 a.u.<sup>3</sup>, but the degree of asymmetry is similar to that of the A-iso case. Thus, we conclude that the choice of A vs. B site for an isovalent chemical substitution does not strongly affect the strength of the asymmetry or the qualitative behavior of the system.
While the asymmetries may appear small in the case of isovalent substitution, they are large by one important measure. We define an “effective electric field” $`\mathcal{E}_{\mathrm{eff}}=\mathrm{\Delta }E/\mathrm{\Delta }P_s`$, where $`\mathrm{\Delta }E`$ and $`\mathrm{\Delta }P_s`$ denote the energy and polarization differences between the two local minima. If this quantity is larger than the coercive field of the material, it means that the thermodynamic preference for the energetically preferred minimum is strong enough to overcome the pinning of the domain walls and to spontaneously switch the material into a single-domain state. Using the Berry-phase approach to calculate $`\mathrm{\Delta }P_s`$, we find $`\mathcal{E}_{\mathrm{eff}}`$=90 kV/cm at the equilibrium volume in the A-iso system. Since the typical coercive field of most perovskite ferroelectrics is closer to 15 kV/cm, we thus arrive at the important conclusion that the symmetry breaking may easily be strong enough to cause the material to self-pole, even in the case of isovalent substitution.
Turning now to the case of heterovalent chemical substitution, we find very different behavior in this case. Specifically, we consider the B-hetero system Ba(Sc<sub>1/3</sub>Ti<sub>1/3</sub>Nb<sub>1/3</sub>)O<sub>3</sub> in which the valence charges are +3, +4, and +5 on Sc, Ti, and Nb, respectively. Structural optimization results in a lattice constant of $`a=7.60`$ a.u. and $`c/a=3.023`$. The pattern of relaxation $`\widehat{\xi }_{\mathrm{cf}}`$ leading to configuration $`X`$ suggests that the electrostatic interaction is a dominant effect. Specifically, we observe bucklings of the AO and BO<sub>2</sub> planes that are consistent with a picture of static electric fields arising from the different B-atom valence charges.
Most importantly, for the heterovalent case we find only a single minimum when searching from $`X`$ along $`\widehat{\xi }_0`$, as shown by the dashed curve in Fig. 1(d). Various alternate search strategies failed to identify a second minimum; all trial structures were found to relax back to a unique structural ground state. The absence of the second minimum was not found to be sensitive to the cell volume, in contrast to the isovalent cases. Thus, we find that the symmetry breaking is enormously stronger than for the isovalent case, and it is clear that the resulting behavior is of a qualitatively different type.
In order to gain a better understanding of this behavior, and in particular to track the disappearance of the secondary minimum, we have developed a model system to study the effects of “turning on” the heterovalent symmetry-breaking perturbation gradually. We introduce a continuous variable $`\delta `$, and construct artificial atoms with fractional nuclear charges that deviate by $`\pm \delta `$ from those of Ti ($`Z`$=22). Constructing a crystal out of a cyclic series of Ti$`-\delta `$, Ti, and Ti$`+\delta `$ atomic layers, we can continuously tune the system from ferroelectric BaTiO<sub>3</sub> ($`\delta `$=0) to a full-fledged heterovalent system Ba(Sc<sub>1/3</sub>Ti<sub>1/3</sub>V<sub>1/3</sub>)O<sub>3</sub> ($`\delta `$=1). The energies calculated along the $`\widehat{\xi }_0`$ direction from $`X`$ are plotted for a series of $`\delta `$ values in Fig. 3. As $`\delta `$ increases from 0 to 0.3, the asymmetry of the double well increases and the well depth decreases. At $`\delta =0.4`$, the curve exhibits only a single energy minimum, signaling the transition from double-well to single-well behavior in response to the increasingly strong symmetry-breaking perturbation. To confirm the disappearance of the secondary minimum more directly, we also tracked its evolution as $`\delta `$ was “turned on” in a sequence of small steps, using the relaxed structure at one $`\delta `$ as a starting guess for the next. At $`\delta =0.34`$, the minimum was confirmed to disappear, and subsequent relaxation led back to the principal (now global) minimum.
To gain more insight into the structural relaxations, we have found it useful to introduce a measure of the “strength of the symmetry breaking.” While it is tempting to choose a measure that is related to the energy difference between the two local minima, like the $`\mathcal{E}_{\mathrm{eff}}`$ introduced earlier, such a definition has the disadvantage of being ill-defined in the single-minimum case. Thus, we have adopted instead the following measure. For any curve such as that of Figs. 1(d-e), 2, or 3, we locate the point of minimum $`d^2E/du^2`$ (i.e., vanishing $`d^3E/du^3`$), and then define $`F_{\mathrm{sb}}`$ to be $`dE/du`$ evaluated at that point. We refer to $`F_{\mathrm{sb}}`$ as the “symmetry-breaking force” since it measures the strength of the symmetry breaking and has units of force. It turns out to have a similar behavior as the well-depth difference in the double-well case, but has the advantage of remaining well-defined in the single-well case.
Calculations of $`F_{\mathrm{sb}}`$ in the A-iso and B-iso cases indicate that $`F_{\mathrm{sb}}`$ has only a modest and smooth dependence on cell volume even when passing through the transition from single-well to double-well behavior, confirming that the “strength of the symmetry breaking” is not the variable parameter in those cases. However, returning to the virtual-atom B-hetero system, we find that $`F_{\mathrm{sb}}`$ is an extremely sensitive function of $`\delta `$, with numerical fits indicating a $`\delta ^3`$ dependence. It is not hard to see that $`F_{\mathrm{sb}}`$ must be an odd function of $`\delta `$, but its cubic behavior may seem surprising at first sight. However, the vanishing of the linear term can be deduced from simple symmetry arguments. An intuitive form of the argument is to note that the symmetry-breaking perturbation, which is of the form ($`+\delta `$, $`-\delta `$, 0) in successive layers, can be regarded as a superposition of two perturbations ($`2\delta /3`$, $`-\delta /3`$, $`-\delta /3`$) and ($`\delta /3`$, $`-2\delta /3`$, $`\delta /3`$) that do not break inversion symmetry. Thus, the principle of superposition prevents the occurrence of any symmetry-breaking response in linear order in $`\delta `$, and in particular $`F_{\mathrm{sb}}`$ must vanish. (A more systematic analysis may be made by considering the $`C_{3v}`$ symmetry group consisting of primitive translations along $`\widehat{z}`$ and mirrors $`M_z`$. The ferroelectric mode vector and the perturbation $`\delta `$ are found to belong to the $`A_2`$ and $`E`$ representations, respectively, and thus cannot couple at linear order.)
Knowing the form of this extraordinarily strong cubic dependence of $`F_{\mathrm{sb}}`$ on $`\delta `$, we can now understand the pronounced qualitative differences that were observed for the cases of isovalent and heterovalent substitution. We find that $`F_{\mathrm{sb}}`$ is about the same, $`\sim `$0.25 mHa/a.u., in the B-hetero system with $`\delta =0.25`$ as in the A-iso system. Increasing $`\delta `$ from 0.25 to 0.4 increases $`F_{\mathrm{sb}}`$ by about a factor of four, enough to cause the transition to single-well behavior. A further increase of $`\delta `$ from 0.4 to 1.0 leads to a further increase of the symmetry-breaking force $`F_{\mathrm{sb}}`$ by a factor of about 15. Thus, in a fully-developed B-hetero system such as Ba(Sc<sub>1/3</sub>Ti<sub>1/3</sub>Nb<sub>1/3</sub>)O<sub>3</sub>, the strength of the symmetry breaking is more than an order of magnitude larger than needed to destroy the secondary minimum completely. It is hardly surprising, then, that we observed no secondary minimum in this case!
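These factors are simply the cubic law at work:
$$\left(\frac{0.40}{0.25}\right)^3\approx 4.1,\qquad \left(\frac{1.0}{0.4}\right)^3\approx 15.6,$$
consistent with the increases of about four and fifteen quoted above.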
The enormous disparity between the behavior in the isovalent and heterovalent cases suggests that it may be of great interest to find a way of tuning the system continuously from one behavior to the other. One can imagine doing this by regarding $`\delta `$ not as a variable atomic number, but rather as representing a layer-by-layer composition variable. For example, one could conceive of the epitaxial growth of Ba(Sc<sub>1-y</sub>Nb<sub>y</sub>)O<sub>3</sub> in an alternating sequence of layers with $`y=0.5(1+\delta )`$, $`y=0.5`$, and $`y=0.5(1-\delta )`$. By the same symmetry arguments, the effective strength of the symmetry breaking must again scale as $`\delta ^3`$. Thus, by controlling the concentration variable $`\delta `$, one can hope to tune the system over a very wide range of behavior.
One would clearly like to use this tunability to optimize the desired characteristics of the material, such as the piezoelectric response. It might naively be expected that increasing the strength of the inversion symmetry breaking will increase the piezoelectric response. (For example, if the parent material at $`\delta `$=0 were paraelectric, then the piezoelectric response would be expected to scale as $`\delta ^3`$.) On the other hand, the materials that have the largest piezoelectric coefficients are typically ferroelectrics, and since a very strong symmetry breaking suppresses the ferroelectric behavior, it might be counterproductive to make $`\delta `$ too large. Clearly, further theoretical investigation is needed in order to clarify these issues.
Many other questions remain open and need to be resolved. For example, the conditions under which the polarization will remain oriented parallel to the growth direction, the behavior of the system as a function of temperature, and the properties of materials with simultaneous A-site and B-site substitution, are obvious candidates for further study. The thermodynamic behavior of these materials, which are technically pyroelectric but interpolate to a ferroelectric limit as $`\delta 0`$, are also deserving of investigation. In the meantime, we hope that our theoretical investigations will stimulate attempts at experimental growth and characterization of novel perovskites with compositionally broken inversion symmetry.
We thank J. Eckstein for suggesting the direction of this study. Support for this work was provided by ONR Grant N00014-97-1-0048 and NSF Grant DMR-9981193. We thank M. Cohen and K. Rabe for useful discussions.
## 1 Introduction
Primordial and protogalactic magnetic fields have been introduced because of two main reasons: first, they can provide the seed fields for magnetohydrodynamic amplification mechanisms like galactic dynamos; and second, large scale fields could play a role in the formation of structures.
We have no observational knowledge about the first cosmical magnetic fields. Only indirect evidence is available. The magnetic fields in high redshift (2-3) objects like damped Lyman$`\alpha `$ systems are as strong as in nearby galaxies and galaxy clusters. Since they cannot have appeared immediately with that strength of several $`\mu `$Gauss, they must have started at a considerably weaker strength. The central issue here is whether the magnetic fields are generated by intrinsic plasma processes in the protogalactic fluctuations, summarized as battery effects, or whether they are produced deep in the very early Universe by some symmetry breaking during phase transitions or even during the early epochs of inflation just after the Planck time. Battery effects use the different mass of electrons and protons and their different collision frequencies with neutral gas or photons. Thereby electric currents are driven, which induce magnetic fields in the first place. Further amplification is provided by magnetohydrodynamical processes, summarized as the action of a dynamo. The initial value of the battery fields is of the order of $`10^{-20}`$ G and can be increased by protogalactic collapse dynamics up to $`10^{-14}`$ G (Chiba and Lesch 1995).
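The quoted amplification is what flux freezing alone provides during a roughly spherical protogalactic collapse, for which $`B\rho ^{2/3}`$; taking, for illustration, a density enhancement of order $`10^9`$,
$$\frac{B}{B_{seed}}\simeq \left(\frac{\rho }{\rho _{seed}}\right)^{2/3}\sim (10^9)^{2/3}=10^6,$$
bridging $`10^{-20}`$ G to $`10^{-14}`$ G.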
Direct observations are at present restricted to typical redshifts of 3-4, at which the first quasars appear, which permits the observation of the widespread intergalactic medium between them and the observer. This renders the subject as speculative as any other cosmological topic. Extragalactic magnetic fields are considered in the review by Kronberg (1994). Readers interested in observations of extragalactic fields are referred to this review. These observations will not be considered in our contribution, which is in part complementary to Kronberg’s. Other previous reviews dealing with this topic have been written by Rees (1987), Coles (1991), Enqvist (1997) and Olesen (1997). Closely related topics have been considered by Zweibel and Heiles (1997) (Magnetic fields in galaxies and beyond) and Lesch and Chiba (1997) (On the origin and evolution of galactic magnetic fields).
Theoretical models of the origin of magnetic fields were developed in order to satisfy the requirements of astrophysical observations and theories, which have recently been subject to alternative interpretations and improvements. An astrophysical analysis could restrict the large number of different theoretical models, which is the main goal of this review.
Models were developed having in mind that they should provide the seed required by the galactic dynamo, but the efficiency of this dynamo is now doubtful, since the back reaction of the generated magnetic field onto the amplifying plasma flows sets tough constraints on the maximal achievable field strengths, which are far off the values of several $`\mu `$G observed in high redshift objects and nearby galaxies (Kulsrud and Anderson, 1992; Vainshtein and Cattaneo, 1992; Vainshtein, Parker and Rosner, 1993; Cattaneo, 1994; Kulsrud et al., 1997; Lanzetta, Wolfe and Turnshek, 1995; Wolfe, Lanzetta and Oren, 1992; Kronberg, Perry and Zukowski, 1992; Perry, Watson and Kronberg, 1993; Kronberg, 1994).
Recent developments (Kulsrud et al. 1997, Lesch and Chiba 1995, Howard and Kulsrud 1997) indicate that protogalactic magnetic fields are created without any pregalactic seed fields by internal plasma mechanisms, after recombination. Therefore, the value of the pregalactic seed field required by the presence of today’s galactic magnetic fields may be as low as zero. This fact, together with a better knowledge of the big difficulties for small scale magnetic fields to be conserved and maintained during the radiation dominated era (see Section 3), could render many of the proposed magnetogenesis processes interesting theoretical exercises without connection with the observable universe. The problem of large scale magnetic fields affecting the formation of large structures could therefore be unconnected with the origin of galactic magnetic fields. It is therefore necessary to specify as precisely as possible what the astrophysical requirements are at present.
In the absence of loss, production or amplification mechanisms, the frozen-in condition of magnetic field lines would tell us:
$$\vec{B}_0=\vec{B}a^2$$
where $`\vec{B}_0`$ is the present field and $`\vec{B}`$ the field when the cosmic scale factor was $`a`$, taking $`a_0=1`$. As shown by Battaner, Florido and Jimenez-Vicente (1997) this expression is more general, and holds even with no conductivity, under the condition of small perturbations of the Robertson-Walker metric by magnetic fields. A pure U(1) gauge theory with the standard Lagrangian is conformally invariant (not like a minimally coupled field), from which it follows that $`\vec{B}`$ always decreases following this equation even in the absence of charge carriers (Turner and Widrow, 1988). This equation is of course not true along the whole evolution of the Universe, because generation, amplification and diffusive losses of the magnetic field became important at some epoch. We will however use equation (1) as a re-definition of $`\vec{B}_0`$, which will therefore no longer be the present magnetic field, and will no longer be a constant. This definition is justified because $`\vec{B}`$ is so much affected by expansion that the use of $`\vec{B}_0`$ instead of $`\vec{B}`$ facilitates the comparison of fields in different epochs. We will call $`\vec{B}_0`$ the equivalent-to-present magnetic field strength. By adopting (1) we therefore do not presume any conformal invariance nor any frozen-in condition, but just adopt a definition for $`\vec{B}_0`$.
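In terms of redshift, definition (1) simply states that a region observed at redshift $`z`$ carries a physical field
$$\vec{B}=\vec{B}_0(1+z)^2,$$
larger by $`(1+z)^2`$ than its equivalent-to-present value.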
Throughout the paper we will distinguish between large, intermediate and small scales. To be precise, we will consider a critical scale $`\lambda _{cr}`$ defined by:
$$\lambda _{cr}=\frac{1}{mn_0}\sqrt{\frac{3\sigma T_0^4}{8\pi cG}}$$
where $`m`$ is the baryon mass, $`n_0`$ its present number density, $`\sigma `$ the Stefan-Boltzmann constant, $`T_0`$ the present Cosmic Microwave Background (CMB) temperature, $`c`$ the speed of light and $`G`$ the gravitation constant. This length is equivalent to a few Mpc. The criterion is based on the result by Florido and Battaner (1997), who found a very different behaviour for $`\lambda <\lambda _{cr}`$ and for $`\lambda >\lambda _{cr}`$. Physically, $`\lambda _{cr}`$ corresponds to the size of an inhomogeneity becoming sub-horizon between Equality and Recombination. It is clear that this transition is very important for our purposes, as large scale fields will not be influenced by any microphysical effect during the radiation dominated era before recombination.
## 2 Origin
The different hypotheses for the generation of pregalactic magnetic fields can be classified into four classes, according to the epoch of formation: a) during inflation, b) in a phase transition after inflation, c) during the radiation dominated era, and d) after recombination.
### 2.1 Magnetic fields generated during inflation
In general, magnetic fields are observed at all scales in the Universe, starting from the smallest scales in the solar system and the local interstellar medium up to intracluster scales of several Mpc (Kim et al. …). Even if magnetic field inhomogeneities, or coherence cells, have not yet been observed at the large scales exhibited by density structures and CMB anisotropies, it is natural to expect that magnetic fields at these scales exist. As for the case of matter inhomogeneities and radiation anisotropies, inflation provides the most natural explanation of field inhomogeneity, as inflation permits causal connection between two points whose separation only rather recently, at Equality or slightly later, became smaller than the horizon.
Turner and Widrow (1988) first proposed an inflation scenario for the creation of primordial magnetic fields, showing its advantages and difficulties. A cloud with present size $`\lambda `$ has had at any epoch a size $`a\lambda `$. This must be compared with the horizon at that epoch, which is a function of $`a`$. During the first phase of inflation it is rather independent of $`a`$, becomes $`a^{3/2}`$ during reheating, $`a^2`$ during the radiation dominated era, and $`a^{3/2}`$ during the matter dominated era. Therefore, an inhomogeneity could be sub-horizon when it is produced, becomes super-horizon at a time within inflation and again becomes sub-horizon much later, at Equality, for instance.
These very-long-wavelength effects were then created by any physical process acting on scales less than the horizon, in practice less than the Hubble radius $`H^{-1}`$. For a long period in cosmic evolution they remained unaffected by local effects and emerged into the causal domain as the only witness of very early events. Present physical processes may distort the original distribution, but the information is not completely lost.
Inflation even provides the excitation mechanism of relatively large wavelength electromagnetic waves out of quantum-mechanical fluctuations. When these waves reach $`\lambda >H^{-1}`$, the oscillating electric and magnetic fields partially appear as static fields. This is an elegant interpretation of the generation of $`\vec{E}`$ and $`\vec{B}`$ in the theory of Turner and Widrow (1988). These fields remain independent of any plasma effect. The conductivity becomes high enough at some time during reheating and will eventually control the small scale magnetic fields.
As $`a(t)`$ is exponential during inflation, the initial sub-horizon scales become very large, increasing by a factor greater than $`10^{21}`$, which is very suitable for explaining the large structures and solving the famous horizon problem (e.g. Boerner, 1988). The exponential increase of $`a`$ presents the main difficulty of classical inflation models for the origin of magnetic fields, because the field decreases very quickly, following $`Ba^2=constant`$, which is also valid during inflation, irrespective of plasma effects, if the U(1) gauge theory is conformally invariant. Under this invariance, the magnetic strength is reduced by factors of about $`10^{104}`$.
Some mechanisms must be assumed to avoid the exponential dilution of magnetic fields. Among other possibilities proposed or analyzed, Turner and Widrow (1988) studied in detail the case in which conformal invariance of electromagnetism is broken through gravitational coupling of the photon. This coupling gives the photon a mass, of the order of $`10^{-33}`$ eV, therefore undetectable. Very interestingly, they were able to predict $`B_0\sim 5\times 10^{-10}`$ G at scales of about 1 Mpc.
Ratra (1992) considered the coupling of the scalar field responsible for inflation (the inflaton) and the Maxwell field, obtaining $`B_0`$ even as large as $`10^{-9}`$ G at scales of $`H^{-1}/1000`$, of about 5 Mpc, which is also a very promising result, even if the hypothesis is unrealistic in the context of string theory (Lemoine and Lemoine, 1992). Garretson, Field and Carroll (1992) invoked a pseudo-Goldstone boson coupled to electromagnetism, obtaining very low values ($`B_0<10^{-21}`$ G at $`\lambda =1`$ Mpc). Dolgov (1993) proposed the breaking of conformal invariance through the so called ”phase anomaly”, a mechanism that would not work in the supersymmetric theory (Gasperini, Giovannini and Veneziano, 1995b; Lemoine and Lemoine, 1995). Dolgov and Silk (1993) considered a spontaneous break of the gauge symmetry of electromagnetism that produced electrical currents with non-vanishing curl.
The model by Davis and Dimopoulos (1995) is based on the creation of magnetic fields at the GUT phase transition (and therefore has much in common with the models commented on in the next section, but this transition could take place during the inflation period). They predicted values as high as $`10^{-11}`$ G at galactic scales.
Considering the earlier Planck-scale Universe could help in discriminating among this profusion of theoretically viable models of inflationary magnetogenesis (Lemoine and Lemoine, 1995). In the inflationary ”pre-big-bang” scenario based on superstring theory (Veneziano, 1991; Gasperini and Veneziano, 1993a,b, 1994), the electromagnetic field is coupled not only to the metric but also to the dilaton background. COBE anisotropies emerge from electromagnetic vacuum fluctuations (Gasperini, Giovannini and Veneziano, 1995a), involving scales of the order of 100 Mpc. For some values of arbitrary parameters, these models provide large enough values of (inter)galactic magnetic fields, even in the absence of galactic dynamos (Gasperini, Giovannini and Veneziano, 1995b). They are in fact able to explain a possible equipartition of energy between the CMB radiation and magnetic fields. (See however Section 4, where it is argued that this equipartition, if real, has been reached later). Hence, the ”pre-big-bang” scenario is able to provide strengths and scales as required by present astrophysical observations.
### 2.2 Magnetic fields generated in phase transitions
Hogan (1983) gave the basic arguments to consider phase transitions of first order as potential mechanisms for the generation of primordial magnetic fields. The phase transition would not take place simultaneously in all places of the Universe, but in causal bubbles. At the rim of the bubbles very high gradients of the temperature, or of any other order quantity characterizing the phase transition, such as the Higgs vacuum expectation value, would be established. These high gradients would produce a thermoelectric mechanism akin to the Biermann battery (Biermann, 1950; Biermann and Schlueter, 1951; Kemp, 1982). When bubbles collide, the fields from each bubble are stitched to those of their neighbors by magnetic reconnection, and the magnetic field lines execute a Brownian walk, related to the future spectrum of magnetic fields.
The electroweak phase transition has been considered by Vachaspati (1991), Enqvist and Olesen (1993, 1994), Davidson (1995), Grasso and Riotto (1997), Tornkvist (1997), Hindmarsh and Everett (1997) and others. The QCD phase transition has been considered by Quashnock, Loeb and Spergel (1989), Cheng and Olinto (1994), Sigl, Olinto and Jedamzik (1996) and others. The GUT phase transition has been considered by Brandenberger et al. (1992), Enqvist and Olesen (1994), Davis and Dimopoulos (1995), Martins and Shellard (1997) and others. There is a large variety of points of view and treatments other than the original ones of Hogan (1983) and Vachaspati (1991). Vachaspati and Vilenkin (1991) considered cosmic strings formed in phase transitions, whose wiggly motions created vorticity and thereby magnetic fields. Baym, Boedeker and McLerran (1996) showed how second order phase transitions can also generate magnetic fields. Kibble and Vilenkin (1995) considered the possibility that magnetic fields could also be generated in the intersecting region of two colliding bubbles. See also the review by Enqvist (1997).
It is interesting to note that the predicted present spectrum of magnetic field inhomogeneities is rather independent of the nature and time of the phase transition. Suppose that $`\lambda _i`$ is the correlation length at a phase transition taking place at a temperature $`T_i`$. We have $`B_i`$ (the magnetic field produced at the phase transition) of the order of $`T_i^2`$, and $`\lambda _i=T_i^{-1}`$ (Vachaspati, 1991). The present magnetic field corresponding to this scale, $`T_i^{-1}z_i`$ in comoving coordinates, is $`B_{0i}=B_iR_i^2=T_i^2R_i^2=T_0^2`$, where $`T_0`$ is the present CMB temperature and $`R_i`$ the scale factor at the transition, normalized to unity today. It is independent of the subindex $`i`$ characterizing the phase transition. (We have used units with $`c=h=k=1`$.) To calculate the spectrum we have $`B_0(\lambda )\approx B_{0i}/N`$ (Vachaspati, 1991), where $`N=T_0\lambda `$ is the number of correlation cells at the scale $`\lambda `$ of interest; therefore $`B_0(\lambda )=T_0^2/(T_0\lambda )=T_0/\lambda `$, which does not contain the subindex $`i`$. There is a compensation of two effects: the higher $`T_i`$, the higher the magnetic field produced, but the larger the dilution by expansion. Other authors propose to divide not by $`N`$ but by $`\sqrt{N}`$ (Enqvist and Olesen, 1993). This changes the spectrum, but it remains rather independent of the phase transition involved. An important consequence of this fact is that the effects of the different phase transitions could add up to produce an enhanced spectrum.
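As a rough numerical illustration (our own sketch, not taken from the cited papers), the following Python fragment evaluates the two variants of this spectrum at $`\lambda =1Mpc`$ in conventional Gaussian units; the unit conversions are standard, and the fragment is only meant to reproduce the orders of magnitude quoted below.

```python
# Sketch: present field at comoving scale lambda for the Vachaspati (1991)
# 1/N dilution and the Enqvist & Olesen (1993) 1/sqrt(N) variant, in Gauss.
import math

GeV_PER_K      = 8.617e-14          # Boltzmann constant in GeV/K
T0             = 2.725 * GeV_PER_K  # present CMB temperature in GeV
GAUSS_PER_GEV2 = 1.0 / 1.95e-20     # 1 G = 1.95e-20 GeV^2 in natural units
GEVINV_PER_CM  = 5.07e13            # 1 cm = 5.07e13 GeV^-1
CM_PER_MPC     = 3.086e24

def B0(lambda_mpc, sqrtN=False):
    """Present-day field (Gauss) at comoving scale lambda (Mpc)."""
    lam = lambda_mpc * CM_PER_MPC * GEVINV_PER_CM  # scale in GeV^-1
    N = T0 * lam                                   # number of correlation cells
    dilution = math.sqrt(N) if sqrtN else N
    return (T0**2 / dilution) * GAUSS_PER_GEV2

print(B0(1.0))              # ~1e-31 G, cf. Vachaspati's ~1e-30 G
print(B0(1.0, sqrtN=True))  # ~5e-19 G, cf. Enqvist & Olesen's ~4e-19 G
```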
However, one of the big problems encountered by phase transitions in general as magnetogenesis mechanisms is that they provide very small values of the magnetic field at galactic scales. For instance, Vachaspati (1991) found $`B_0\sim 10^{-30}G`$. This result was improved by Enqvist and Olesen (1993), who divided by $`\sqrt{N}`$ rather than by $`N`$, but even so they obtained $`B_0\sim 4\times 10^{-19}G`$: enough to become the seed for galactic dynamos, but insufficient for the large values in protogalactic objects, or if the galactic dynamos do not work. Quashnock, Loeb and Spergel (1989) found $`6\times 10^{-38}G`$; Vachaspati and Vilenkin (1991), $`10^{-27}G`$; etc. Beck et al. (1996), summarizing previous works, gave a value less than $`10^{-23}G`$.
Related to the above problem, these models provide very small scales, and the magnetic field would be destroyed by microphysical mechanisms in the radiation dominated era. This will be discussed in Section 3.
### 2.3 Magnetic fields generated in the radiation dominated era
Matsuda, Sato and Takeda (1971) first proposed a turbulent dynamo working in a radiation dominated universe. Even though the existence of a cosmological turbulence was already under discussion at that epoch, the interesting paper by Harrison (1973) considered that magnetic fields were generated by turbulent vorticity. The treatment was not relativistic. The turbulent medium was made up of ions and a negatively charged dense component composed of electrons and photons tightly coupled by Thomson scattering. A close relation between vorticity and the magnetic field was found:
$$\stackrel{}{B}=(mc/e)\stackrel{}{\omega }$$
where $`m`$ and $`e`$ are the proton mass and charge and $`\stackrel{}{\omega }`$ is the vorticity. This relation was already noted by Batchelor (1950) and has recently been reconsidered by Kulsrud et al. (1997). Magnetic field strengths of the order of $`5\times 10^{-4}G`$ at a characteristic scale of $`2\times 10^2`$pc were obtained for protogalaxies, and $`5\times 10^{-8}G`$ at $`10`$kpc for the intergalactic medium. It is interesting to note that in this model there was an "external" scale at which structures and peculiar motions were frozen, with a "turbulent horizon" being a small fraction of the Hubble radius. The existence of a primordial turbulence is still doubtful (see for instance the review by Rees, 1987). The required primordial vorticity, a critical point in this scenario, has been reconsidered by Sicotte (1997).
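Harrison's relation is easy to evaluate numerically. The following lines (a minimal sketch of ours, in cgs Gaussian units) give the field locked to a given proton vorticity:

```python
# Sketch: B = (m c / e) * omega in Gaussian units; omega in s^-1 gives B in G.
M_P, C, E_ESU = 1.673e-24, 2.998e10, 4.803e-10  # proton mass (g), c (cm/s), e (esu)

def B_harrison(omega):
    """Magnetic field (Gauss) tied to the vorticity omega (s^-1)."""
    return (M_P * C / E_ESU) * omega

print(B_harrison(1.0))    # ~1e-4 G per unit vorticity
print(B_harrison(1e-15))  # a weak field for a slow, Hubble-time-like rotation
```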
### 2.4 Magnetic fields generated after Recombination
During recombination the plasma decoupled from the radiation field. The protons captured the free electrons until the decreasing temperature and density no longer allowed further recombination. A gas was left which was only partially ionized, with $`n_i/n_H\sim 10^{-4}10^{-5}`$ (e.g. Peebles 1993), and in which the density fluctuations should lead to the formation of galaxies. For our purposes it is enough to state that during the epoch of galaxy formation enough processes appear which can explain the existence of magnetic fields via battery mechanisms. Batteries are a necessary ingredient of magnetohydrodynamical models for galactic magnetic fields, since in the basic hydromagnetic equation
$$\frac{\partial 𝐁}{\partial t}=\nabla \times (𝐯\times 𝐁)-\nabla \times \left[\eta (\nabla \times 𝐁)\right].$$
no source term for the magnetic field appears. That is to say, there is no outright creation of magnetic field in the hydromagnetic description of galactic magnetic fields. Hence, if at any time the universe was devoid of magnetic fields, then as far as hydromagnetic effects are concerned, there would be no magnetic field at any other time.
The battery mechanisms provide these seed fields. We describe now how the seed fields are related to the protogalactic density fluctuations after their decoupling from the background radiation, i.e. after recombination (Lesch and Chiba 1995). Whereas for primordial magnetic fields phase transitions and symmetry breaking mechanisms have to be considered, the generation of magnetic fields after recombination is described in terms of elementary electrodynamic properties of a plasma consisting of electrons and protons. The essence of any battery process is that currents are produced whenever the mean velocities of negative and positive charge carriers differ. In general, the negative charge carriers are electrons, and as such they are orders of magnitude less massive than the positive charge carriers. This makes electrons more responsive to inertial drag forces than ions are. The combination of a gravitational field with differential rotation leads to a nonconservative force acting essentially upon the electrons. These two ingredients occur naturally in disk systems with a central radiation source, as well as in stars, as was first pointed out by Biermann (1950). The ions are concentrated towards the equatorial plane by the generated electric field. However, this field cannot cancel the centrifugal acceleration completely, i.e. charges must move and meridional currents appear. Lesch and Chiba (1995) transferred Biermann's battery into the context of a forming galaxy. The resulting magnetic field of a spherical over-dense region in an expanding universe, with a scale factor $`a=1/(1+z)`$, is governed by the following equation
$$\frac{1}{a^2}\frac{\partial }{\partial t}\left(a^2B\right)=\frac{m_ic}{2e}\left|\nabla \times 𝐠\right|\simeq \frac{m_ic}{2e}\omega (t)^2.$$
where $`𝐠`$ denotes the centrifugal acceleration, $`\omega `$ is the angular velocity, and $`m_i`$ is the ion mass.
A second battery process was invoked by Mishustin & Ruzmaikin (1973). They considered the interaction of a rotating electron-proton-plasma with the intense cosmic background radiation. Thermal electrons scatter the photons of the background radiation via Compton scattering and gain energy and momentum, thereby drifting relative to the protons, i.e. producing a current. This current induces a magnetic field, whose time evolution is described by
$$\frac{1}{a^2}\frac{\partial }{\partial t}\left(a^2B\right)=\frac{m_ec}{e}\frac{2\omega }{\tau _\gamma }.$$
$`\tau _\gamma `$ denotes the optical depth for Compton scattering, which is a sensitive function of the redshift
$$\tau _\gamma =\frac{3m_ec}{4\sigma _T\rho _\gamma (0)(1+z)^4}\equiv \tau _\gamma (0)(1+z)^{-4}.$$
$`\sigma _T=6.65\times 10^{-25}`$ cm<sup>2</sup> is the Thomson cross section and $`\rho _\gamma (0)\approx 4\times 10^{-13}`$ erg cm<sup>-3</sup> is the present energy density of the background radiation. Here a term related to the Coulomb collisions of electrons and protons is neglected compared to the effect of current generation in the context concerned (Mishustin & Ruzmaikin 1973).
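For concreteness, the following sketch (our own, using the cgs values quoted above) evaluates $`\tau _\gamma (z)`$ and the resulting growth rate of the comoving field for a given angular velocity:

```python
# Sketch: Compton drag time and Mishustin & Ruzmaikin battery rate (cgs).
M_E, C, E_ESU = 9.109e-28, 2.998e10, 4.803e-10  # electron mass, c, charge
SIGMA_T, RHO_GAMMA0 = 6.65e-25, 4e-13           # cm^2, erg/cm^3 (present CMB)

def tau_gamma(z):
    """Optical-depth time scale for Compton scattering, in seconds."""
    return 3 * M_E * C / (4 * SIGMA_T * RHO_GAMMA0 * (1 + z)**4)

def comoving_field_rate(z, omega):
    """Growth rate of a^2 B (divided by a^2), in Gauss per second."""
    return (M_E * C / E_ESU) * 2 * omega / tau_gamma(z)

print(tau_gamma(0))     # ~8e19 s today
print(tau_gamma(1000))  # ~8e7 s near recombination: the drag was far stronger
```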
Magnetic fields are also created by sheared flows in weakly ionized plasmas, as proposed by Lesch et al. (1989) for active galactic central regions, and by Huba & Fedder (1993) for the general case of shearing motions between plasmas and neutral gases: again, the different mobility of electrons and ions is used. The electrons collide with neutral atoms, thereby drifting relative to the ions. For a differentially rotating system, this drift corresponds to a current, which induces a magnetic field. The field generation term is
$$\frac{1}{a^2}\frac{\partial }{\partial t}\left(a^2B\right)=\frac{m_ec}{e}\left|\nabla \times \left[\nu _{en}(𝐕_𝐢-𝐕_𝐧)\right]\right|\simeq \frac{m_ec}{e}\nu _{en}\frac{v_r}{l_{shear}}.$$
$`\nu _{en}`$ denotes the electron-neutral collision frequency, $`v_r`$ is the relative ion-neutral drift speed, and $`l_{shear}`$ is the shear length. $`𝐕_𝐢`$ ($`𝐕_𝐧`$) is the ion (neutral) fluid velocity. The battery effects discussed above result in field strengths at the so-called "turnover" of about $`10^{-23}10^{-19}`$ G. The turnover denotes the redshift at which a gravitationally unstable structure decouples from the overall Hubble expansion. Taking into account the dynamics of a collapsing disk galaxy, Lesch and Chiba (1995) showed that the field strength at the redshift at which galactic disks form is at least four to five orders of magnitude stronger than the field at turnover. The battery mechanisms within protogalactic systems then lead to seed fields between $`10^{-13}`$ and $`10^{-16}`$G. During disk formation, non-axisymmetric instabilities lead to further, now exponential, growth of the field by compression, on time scales of the order of $`10^8`$ years (Chiba and Lesch 1994). So after about 1-2 Gigayears the forming galaxies will contain $`\mu `$G fields, as observed in the high-redshift Lyman-$`\alpha `$ clouds.
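As a simple consistency check of these time scales (a sketch of ours, with an illustrative seed value), exponential growth with an e-folding time of $`10^8`$ years indeed takes a $`10^{-14}G`$ seed to $`\mu `$G strength within 1-2 Gigayears:

```python
# Sketch: time to amplify B_seed to B_target for B(t) = B_seed * exp(t/tau).
import math

B_SEED, B_TARGET = 1e-14, 1e-6  # Gauss; the seed value is illustrative
TAU = 1e8                       # e-folding time in years (order of magnitude)

t_years = TAU * math.log(B_TARGET / B_SEED)
print(t_years / 1e9)            # ~1.8 Gyr, consistent with the text
```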
We have seen the possible production and evolution of seed magnetic fields in the course of the growth of protogalactic density fluctuations. In this picture, the principal ingredients of presently observed magnetic fields are supposed to be seeded after the recombination epoch and before the first ignition of stars in a disk. The magnetic fields that come out in a forming disk galaxy are compatible with those reported in high redshift objects.
Alternative scenarios for seeding galactic magnetic fields have also been proposed, invoking detailed plasma processes in galactic nuclei and jets, the effects of first star formation, and pregalactic physics.
Some extragalactic sources reveal radio jets (e.g. Bridle and Perley 1984). The magnetic fields in jets, typically having an equipartition strength of $`10\mu G`$ and coherence lengths of several tens of kpc, are thought to originate in the vicinity of a central compact object. Daly and Loeb (1990) argued that if all galaxies, though currently inactive, were initially activated through a compact nucleus, the jets associated with the nucleus must tunnel through the ambient protogalactic medium, and the equipartition magnetic fields carried by the jets are dispersed over the ambient medium with a strength compatible with the present value of several $`\mu G`$. This picture addresses the fate of the observed jet magnetic fields; the original magnetic fields near a central compact object may be seeded by the plasma processes involved (Lesch et al. 1989; Chakrabarti 1991) or by accretion from the body of the host galaxy. The magnetic fields near the center may be further strengthened up to equipartition strength by accretion-disk dynamos around the central object (Bisnovatyi-Kogan and Ruzmaikin 1976; Pudritz 1981), and the velocity gradient in the jets results in a longitudinal field.
Alternatively, the production of seed fields may have had to await the onset of star formation; although the process of forming first-generation stars would have been very different from the present one if there were no magnetic fields (Rees 1987), once the stars formed, the combination of battery and dynamo mechanisms inside stars may have generated magnetic fields. The fields could then be ejected into the interstellar medium via stellar winds and supernova explosions. The resulting magnetic field is randomly aligned, with a characteristic scale of 100 pc and a strength of order $`\mu `$G (e.g. Ruzmaikin et al. 1988). Supposing that the interstellar medium of damped Ly$`\alpha `$ clouds is enriched by a first generation of stars, it may also be enriched by magnetic fields created within stars, if the picture described works. However, the field flux is mostly concentrated in a number of small-scale components with many reversals, and thus it is hard to imagine how the ensemble of such small-scale fields could reproduce the statistically significant level of Faraday rotation. Some sort of pregalactic dynamo may be responsible for organizing the large-scale structure of the magnetic fields (Zweibel 1988; Pudritz and Silk 1989).
## 3 Evolution of small scale fields
After Annihilation and before Recombination, small scale magnetic fields are subject to severe destructive microphysical processes. Magnetic fields are then supported by electric currents that must be established or maintained in a very dense photon medium which interacts very effectively with electrons and protons through Thomson scattering. Lesch and Birk (1998) have studied the conductivity, and hence the magnetic diffusion, during this critical epoch. They gave an equation for the diffusion time equivalent to:
$$\tau _{diff}=10^{44}z^{-6}\lambda ^2$$
where $`\tau _{diff}`$ is measured in seconds and $`\lambda `$ in cm. The dependence of $`\tau _{diff}`$ on $`z`$ is very pronounced; the most restrictive action of the magnetic diffusivity occurs at the beginning, for $`z=z_{ann}`$, at Annihilation. The question is to find which scales are able to survive and reach Recombination, when this hostile medium decouples. Taking $`\tau _{diff}=\tau _{rec}`$ (the age of the Universe at Recombination) and $`z=z_{ann}`$ we obtain:
$$\lambda =5\times 10^{-16}z_{ann}^3$$
and this $`\lambda `$ will grow to its present comoving size:
$$\lambda _0=5\times 10^{-16}z_{ann}^4$$
The Annihilation took place at $`T=5\times 10^9K`$ (since the electron mass is $`0.511MeV`$); therefore, $`z_{ann}\sim 2\times 10^9`$. We conclude that only scales greater than about $`3kpc`$ would survive.
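This estimate is reproduced by the following short sketch (ours; the recombination age entering it is only an order-of-magnitude assumption):

```python
# Sketch: smallest comoving scale surviving diffusion until recombination,
# from tau_diff = 1e44 * z**-6 * lambda**2 (seconds, lambda in cm).
Z_ANN   = 2e9    # redshift of e+e- annihilation
TAU_REC = 1e13   # assumed age of the universe at recombination, in seconds

lam_ann = (TAU_REC * Z_ANN**6 / 1e44) ** 0.5  # physical scale at annihilation, cm
lam_0   = lam_ann * Z_ANN                     # comoving (present) scale, cm
print(lam_0 / 3.086e21)                       # in kpc: of order a few kpc
```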
This is a really large value if the field was generated by a phase transition. The most recent one, the QCD phase transition, took place at $`200MeV`$, with a correlation scale of $`T_{QCD}^{-1}`$ (Vachaspati, 1991), i.e. $`10^{-11}cm`$ in conventional units. This corresponds at present to only $`10cm`$. Other phase transitions provide similar values, as the correlation length at any phase transition corresponds to a present size of $`z_{pt}/T_{pt}\approx (T_{pt}/T_0)/T_{pt}=T_0^{-1}`$, noticeably independent of the precise phase transition involved. The subindex $`pt`$ denotes any phase transition.
The minimum value of $`\lambda _0=3kpc`$ is much higher even than the present size of the horizon at any phase transition. This is important, as the correlation length probably grows faster than the horizon (Dimopoulos and Davis, 1996), so that $`\lambda _0`$ should then be compared with the present size of the horizon. The horizon at the QCD phase transition was $`10^6cm`$, equivalent to $`0.2pc`$ at present. The horizon at the electroweak phase transition was only a few centimeters, corresponding to about $`1AU`$ at present. For earlier phase transitions the situation is even worse.
Some mechanism for very efficiently increasing the scales is required. An interesting calculation was carried out by Brandenburg, Enqvist and Olesen (1996). These authors considered MHD in a turbulent expanding radiation dominated universe, metric variations being ignored. They found an inverse cascade, producing larger and larger scales with increasing time: the energy of small scale fields is transferred to larger scale fields. Important as it is, this model seems to be insufficient.
The existence of turbulence, i.e. of non-linear effects, in a medium so extremely close to equilibrium (much more so than the CMB) is controversial. Apart from the remarks summarized by Rees (1987), the relative density contrast $`\delta \equiv \delta \rho /\rho `$ does not evolve in a random way, as would be expected from a turbulent behaviour. The Jeans mass is very low, in particular at the beginning of the radiation dominated epoch, less than $`1M_{}`$ (e.g. Battaner, 1996), so that collapse is a common state of inhomogeneities. If $`\delta >0`$ initially, $`\delta `$ will always increase, and if $`\delta <0`$ initially, $`\delta `$ will always decrease, at least for scales equivalent to a baryon rest mass less than $`1M_{}`$. Self-gravitation, or in the relativistic treatment required for this epoch, perturbations of the metric tensor, is a basic fact in the evolution of inhomogeneities. Gravitational collapses, even if they proceed rather slowly, only as $`a^2`$, cannot be ignored. Even if it is difficult to conceive turbulence in a medium dominated by collapses, perturbations of the metric tensor should be incorporated into this type of turbulence model. Of course, turbulence and inverse cascades must also stop at scales comparable to the horizon (Harrison, 1973), and are therefore unable to explain fields at comoving scales larger than $`1Mpc`$.
Quantitatively, for the smaller scales, the figures obtained seem to be too low. The largest scales fed are only of the order of 2 pc, and the value of $`N`$ of Vachaspati (1991) is only shifted from about $`10^{24}`$ to $`10^{19}`$, clearly insufficient. The calculation is limited to a time $`10^9`$ times the electroweak phase transition time. Calculations extended to much more recent times could provide much lower values of $`N`$, so this model may still hold interesting possibilities. But we find it unlikely that this mechanism is able to overcome the effects of so large a magnetic diffusion, if turbulence actually exists at all during this epoch. Relativistic MHD in an expanding universe has also been studied by Gailis, Frankel and Dettman (1995).
There is another effect destroying small scale magnetic fields during the radiation dominated era, and especially just after Equality and before Recombination. On general grounds, one would expect magnetic field inhomogeneities to be associated with radiation and matter inhomogeneities, and the former to be destroyed if the latter are damped. A classical treatment of density inhomogeneities in the imperfect fluid made up of photons and baryons (Weinberg, 1972; Silk, 1968) shows that masses less than the Silk mass are damped in the acoustic epoch, when the Jeans mass becomes larger than the cloud mass, before Recombination. It is unlikely that magnetic fields protect an inhomogeneity from the destructive effects of viscosity and heat conduction due to photon diffusion.
A model of this magnetized imperfect fluid has been developed by Jedamzik, Katalinic and Olinto (1996), concluding that MHD modes are completely damped by photon diffusion up to the Silk mass, as expected, converting magnetic energy into heat. Damping would also be very important during the neutrino decoupling era; therefore small scale fields could have been washed out even before the radiation dominated era. A direct consequence of photon diffusion damping would be that primordial magnetic fields would neither directly produce present galactic fields nor directly influence the galaxy formation process.
An equivalent argument is given by Lesch and Birk (1998), showing that vorticity and its potentially associated magnetic fields are severely affected by kinematic viscosity.
However, Brandenburg, Enqvist and Olesen (1997) are again more optimistic, estimating that the inverse cascade process is scarcely affected by Silk damping, except very late and perhaps for very weak fields.
## 4 Effects of large scale magnetic fields
Large scale magnetic fields are not affected by microphysical processes and evolve as $`B\propto a^{-2}`$; in other words, $`B_0`$ is more or less constant from Inflation to Recombination. Even after Recombination, the evolution should not be dramatic. Large scale density inhomogeneities still behave linearly, and so, probably, do their associated magnetic field inhomogeneities. Small scale effects, such as ejections from radiogalaxies, dynamos, contractions during galaxy formation, non-linear effects, etc., taking place at protogalactic stages or once the first galaxies are formed, do not alter the $`B\propto a^{-2}`$ evolution of large scale fields. The shapes of field configurations are conserved; they simply grow with the expansion, becoming larger and weaker.
These large scale $`\stackrel{}{B}`$-inhomogeneities may have had a substantial influence on the formation of the large scale structures in the Universe. Several authors have long considered that magnetic fields could affect the formation of galaxies, mainly Piddington (1969, 1972), who tried to explain the present morphological differences between types of galaxies from differences between the magnetic and angular momentum directions when galaxies formed. Wasserman (1978) proposed that magnetic field configurations at Recombination could determine the formation of galaxies and even their angular momenta. This work has recently been continued and extended to the non-linear regime by Kim, Olinto and Rosner (1996), both this model and the pioneering one by Wasserman (1978) being devoted to the post-Recombination era. It is however probable that the magnetic fields and the density structure were formed earlier, during the radiation dominated era or before. These models mainly consider the problem of the formation of galaxies, while we here favor the view that this process is not directly affected by primordial magnetic fields.
Let us consider the problem of how large scale magnetic field inhomogeneities influence the formation of large scale density inhomogeneities in the Universe. Coles (1992) pointed out that the failure of the CDM scenario to explain large scale structures could be satisfactorily overcome if magnetic fields were taken into account. The observations of large structures are in clear disagreement with the random behaviour predicted by CDM models, showing an impressive regularity and periodicity (Einasto et al., 1997).
The study of the influence of $`\stackrel{}{B}`$ on the large scale structure during the radiation dominated epoch was undertaken by Battaner, Florido and Jimenez-Vicente (1997), Florido and Battaner (1997) and Battaner, Florido and Garcia-Ruiz (1997), introducing linear perturbations of the physical quantities, including the metric tensor and the magnetic field, in a Robertson-Walker metric. They found that preexisting magnetic structures were able to produce anisotropic density inhomogeneities in the photon fluid and local perturbations of the metric. In particular, they were able to produce filaments. These radiative and gravitational potential filaments were the sites where baryons, or any other dark matter component, collapsed, forming the luminous filaments observed today as elements of the large scale structure (Shectman et al., 1996). Magnetic fields of the order of $`B_0=10^{-8}10^{-9}G`$ could be responsible for the filamentary large scale structure. Cosmological filaments, like any other small scale filaments in astrophysical systems, could be interpreted as a magnetically driven configuration. Araujo and Opher (1997) have also considered the formation of voids by magnetic pressure.
If the large scale structure is made up of filaments joining together to produce a network, and if these filaments are actually magnetic in origin, then the network would be subject to some magnetic restrictions, arising from the condition $`\nabla \stackrel{}{B}=0`$ and from reconnection processes. Battaner, Florido and Garcia-Ruiz (1997) carried out a crystallographic approach, showing that the simplest network under these conditions is an "egg-carton", formed by octahedra joined at their vertices. This "egg-carton" universe would have larger amounts of matter along the edges of the octahedra, and especially at the vertices, which would be the sites of large superclusters of galaxies, while voids would correspond to the interiors of the octahedra. From the nodes of the lattice, where two octahedra join, eight filaments would emerge. This spider-like structure has been observed for the local supercluster (Einasto, 1992). It is otherwise very difficult to explain the extreme regularity observed (Tully et al., 1992; Einasto et al., 1997).
Magnetic fields should not be considered as an alternative to current theories on large scale structure formation, but rather, as suggested by Coles (1992), as a missing ingredient in them.
## 5 Limits and future observations
Several limits on the magnetic field intensity or energy density have been reported in the literature (see also the reviews by Lesch and Chiba, 1997, and Beck et al., 1996). However, most of these limits apply to a cosmological homogeneous magnetic field (a hypothesis scarcely defended; exceptions are Zeldovich, 1965, and Enqvist and Olesen, 1994) and are therefore useless if magnetic fields were randomly distributed (with $`<\stackrel{}{B}>=0`$ even if $`<B^2>\ne 0`$), or at least if there existed a homogeneous distribution of magnetic energy density, which is probably also a bad assumption. If, instead, we are interested in limits on typical peak values, the above mentioned limits should be increased by a factor which would depend on the statistical distribution of the sizes and positions of coherence cells or filaments. This factor could be of the order of 100 or 1000. This consideration affects, for instance, the limits based on the $`{}_{}{}^{4}He`$ abundance, of about $`B_0\sim 10^{-7}G`$ (Greenstein, 1969; Zeldovich and Novikov, 1975; Matese and O'Connell, 1970; Barrow, 1976; Cheng, Schramm and Truran, 1994; Kernan, Starkman and Vachaspati, 1995; Grasso and Rubinstein, 1995, 1996; Cheng et al., 1996, and others), on the neutrino spin flip, of about $`B_0\sim 4\times 10^{-9}G`$ (though depending very much on the masses of all the neutrinos) (Shapiro and Wasserman, 1981; Enqvist et al., 1993), and on the CMB isotropy, of about $`B_0<4\times 10^{-9}G`$ (Lesch and Chiba, 1997; Barrow, Ferreira and Silk, 1997).
From an observational relation between the Faraday rotation and the redshift of quasars, a limit for a widespread cosmologically aligned field of about $`10^{-11}G`$ is deduced (Rees and Reinhardt, 1972; Kronberg and Simard-Normandin, 1976; Vallee, 1983; Lesch and Chiba, 1997). This limit is weakened to $`10^{-9}G`$ if the coherence cells are $`1Mpc`$ large (Kronberg, 1994), or further weakened to about $`3\times 10^{-8}G`$ if the field is coherent only on scales $`<10Mpc`$ (Kosowsky and Loeb, 1996). Peak values in a structure similar to the one mentioned in the preceding section could be much greater than these limits for the same Faraday rotation data.
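To see how such limits scale with the coherence length, consider the standard rotation measure of a line of sight crossing $`N`$ randomly oriented cells, $`RM\approx 0.81n_eB\lambda \sqrt{N}`$ (in rad m<sup>-2</sup>, with $`n_e`$ in cm<sup>-3</sup>, $`B`$ in $`\mu G`$ and $`\lambda `$ in pc). The following sketch (ours; all numbers are purely illustrative) inverts an RM limit into a field limit:

```python
# Sketch: upper limit on the cell field from a rotation-measure limit,
# assuming a random walk of RM over N coherence cells along the path.
import math

def B_limit_muG(rm_max, n_e, path_mpc, cell_mpc):
    """Field limit in microgauss for an RM limit (rad/m^2)."""
    cell_pc = cell_mpc * 1e6
    n_cells = path_mpc / cell_mpc
    return rm_max / (0.81 * n_e * cell_pc * math.sqrt(n_cells))

# illustrative numbers: RM < 5 rad/m^2, n_e = 1e-5 cm^-3, 3000 Mpc path:
print(B_limit_muG(5.0, 1e-5, 3000.0, 1.0))  # ~1e-2 muG, i.e. ~1e-8 G
```

Smaller cells give more numerous, mutually cancelling contributions, which is why the limits above weaken as the assumed coherence scale decreases.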
Let us propose a limit for a typical maximum of $`B_0`$ in the radiation dominated era. Rees (1987) estimated that, in order to trigger galaxy formation, magnetic fields just after Recombination would have to amount to $`B_0>10^{-9}G`$. The argument can be inverted to provide an upper limit. Based on the results by Battaner, Florido and Jimenez-Vicente (1997) and Florido and Battaner (1997) we must have:
$$B_0<10^{-8}G$$
(1)
for large scale peaks in the radiation era, because otherwise the formation of large scale structures would have begun too early and would at present be in a much more advanced state of collapse.
Clearly, we observe at present peak values much larger than these. Today, if we exclude small scale peaks, such as jets or even pulsars, we could have $`B_0\sim 10^{-6}G`$ (Kronberg, 1994). Therefore, some post-Recombination processes must have either amplified or generated additional intergalactic fields.
Observations of present intergalactic and protogalactic magnetic fields have been reviewed by Kronberg (1994), and the results need not be repeated here. Let us therefore comment on some recent proposals for future potential observations.
Plaga (1995) has proposed that the arrival times of $`\gamma `$-rays from extragalactic sources could provide information about very low intergalactic magnetic fields, in the range $`10^{-12}10^{-24}G`$. The delay in the arrival of the energetic TeV $`\gamma `$-rays, with respect to the low energy $`\gamma `$-rays that reach us directly, is due to $`e^{}e^+`$ pair production involving the IR background radiation. The particle pairs produced would scatter off CMB photons, producing the observable high energy $`\gamma `$-rays. See also the comment by Kronberg (1995) on this method.
Observations of coherence cells of aligned disc warps (Battaner et al., 1991; Zurita and Battaner, 1997), under the interpretation that these warps are produced by intergalactic magnetic fields (Battaner, Florido and Sanchez-Saavedra, 1990), have provided tentative values of $`\lambda \sim 25Mpc`$. Future $`21cm`$ and optical galactic maps and surveys could provide better results, extended to larger regions in the Milky Way neighborhood.
Improving the sensitivity of experiments measuring the CMB radiation, which is feasible in the near future, would also permit gathering information about magnetic fields (Magueijo, 1994). Kosowsky and Loeb (1996) analyzed their influence on the Faraday rotation of the CMB radiation, estimating that a field of $`10^{-9}G`$ would produce a Faraday rotation of 1 degree at a frequency of $`30GHz`$. Adams et al. (1996) proposed that $`10^{-9}G`$ fields generated at inflation would produce measurable distortions of the acoustic peaks in the CMB radiation.
Observations of the composition, spectrum and directional distribution of extragalactic ultrahigh energy cosmic rays, with energies greater than $`10^{18}10^{19}eV`$, can provide estimates of the large scale component of magnetic fields of the order of $`10^{-9}G`$ or less (Lee, Olinto and Sigl, 1995; Stanev et al., 1995).
J. Adams, U.H. Danielson, D. Grasso and H.R. Rubinstein, Phys. Lett. B 388, 253 (1996).
J.C.N. de Araujo and R. Opher, astro-ph/9707303 (1997).
J.D. Barrow, Mon.Not. R.A.S. 175, 339 (1976).
J.D. Barrow, P.G. Ferreira and J. Silk, astro-ph/9701063 (1997).
E. Battaner, Astrophysical fluid dynamics, Cambridge Univ. Press (1996).
E. Battaner, E. Florido and J.M. Garcia-Ruiz, Astron. Astrophys. in press (1997).
E. Battaner, E. Florido and J. Jimenez-Vicente, Astron. Astrophys. in press (1997).
E. Battaner, E. Florido and M.L. Sanchez-Saavedra, Astron. Astrophys. 236, 1 (1990).
E. Battaner, J.L. Garrido, M.L. Sanchez-Saavedra and E. Florido, Astron. Astrophys. 251, 402 (1991).
G.K. Batchelor, Proc. R. Soc. London 201, 405 (1950).
G. Baym, D. Boedeker and L. McLerran, Phys. Rev. D 53, 662 (1996).
R. Beck, A. Brandenburg, D. Moss, A. Shukurov and D. Sokoloff, An. Rev. Astron. Astrophys. 34, 155 (1996).
L. Biermann, Zeit. Naturforschung 5a, 65 (1950).
L. Biermann and A. Schlueter, Phys. Rev. 82, 863 (1951).
G.S. Bisnovatyi-Kogan and A.A. Ruzmaikin, Astrophys. Space Sci. 42, 401 (1976).
G. Boerner, The Early Universe, Springer-Verlag. Berlin (1988).
R.H. Brandenberger, A.C. Davis, A.M. Matheson and M. Thodden, Phys. Lett. B 293, 287 (1992).
A. Brandenburg, K. Enqvist and K. Olesen, Phys. Lett. B 392, 395 (1997).
A. Brandenburg, K. Enqvist and P. Olesen, Phys. Rev. D 54, 1291 (1996).
A.H. Bridle and R.A. Perley, Ann. Rev. Astron. Astrophys. 22, 319 (1984).
F. Cattaneo, Astrophys. J. 434, 200 (1994).
S.K. Chakrabarti, Mon. Not. R.A.S. 252, 246 (1991).
B. Cheng, A.V. Olinto, D.N. Schramm and J.W. Truran, Preprint Los Alamos National Laboratory (1997).
B. Cheng and A. Olinto, Phys. Rev. D 50, 2421 (1994).
B. Cheng, D.N. Schramm and J.W. Truran, Phys. Rev. D 49, 5006 (1994).
M. Chiba and H. Lesch, Astron. Astrophys. 284,731 (1994).
P. Coles, Comments Astrophys. 16, 45 (1992).
R.A. Daly and A. Loeb, Astrophys. J. 364, 451. (1990).
S. Davidson, Phys. Lett. B 380,253 (1996).
A.C. Davis and K. Dimopoulos, cern-th/95-175 (1995).
K. Dimopoulos and A.C. Davis, Phys. Lett. B 390, 87 (1996).
A.D. Dolgov, Phys. Rev. D 48, 2499 (1993).
A.D. Dolgov and J. Silk, Phys. Rev. D 47, 3144 (1993).
J. Einasto, Observational and Physical Cosmology, Ed. by F. Sanchez, M. Collados and R. Rebolo. Cambridge Univ. Press (1992).
J. Einasto, M. Einasto, S. Gottloeber, V. Mueller, V. Saar, A.A. Starobinsky, E. Tago, D. Tucker, H. Andernach and P. Frisch, Nature 385, 139 (1997).
K. Enqvist, in Strong and electroweak matter. Hungary. astro-ph/9707300 (1997).
K. Enqvist and P. Olesen, Phys. Lett. B 319, 178 (1993).
K. Enqvist and P. Olesen, Nordita preprint 94/6 (1994).
K. Enqvist, V. Semikoz, A. Shukurov and D. Shokoloff, Phys. Rev. D 48, 4557 (1993).
E. Florido and E. Battaner, Astron. Astrophys. in press (1997).
R.M. Gailis, N.E. Frankel and C.P. Dettman, Phys. Rev. D 52, 6901 (1995).
W.D. Garretson, G.B. Field, S.M. Carrol, Phys. Rev. D 46, 5346 (1992).
M. Gasperini, M. Giovannini and G. Veneziano, cern-th/95-85 (1995a).
M. Gasperini, M. Giovannini and G. Veneziano, cern-th/95-102 (1995b).
M. Gasperini and G. Veneziano, Astropart. Phys. 1, 317 (1993a).
M. Gasperini and G. Veneziano, Mod. Phys. Lett. A 8, 3701 (1993b).
D. Grasso and H.R. Rubinstein, Astropart. Phys. 3, 95 (1995).
D. Grasso and H.R. Rubinstein, Phys. Lett. B 379, 73 (1996).
G. Greenstein, Nature 233, 938 (1969).
E.H. Harrison, Mon. Not. R.A.S. 165, 185 (1973).
J.C. Kemp, Pub. astron. Soc. Pacific 94, 627 (1982).
P.J. Kernan, G.D. Starkman, T. Vachaspati, astro-ph/9509126 (1995).
T.W.B. Kibble and A. Vilenkin, Phys. Rev. D 52, 1995 (1995).
K.T. Kim, P.P. Kronberg, G. Giovannini and T. Venturi, Nature 341, 720 (1989).
M. Hindmarsh and A. Everett, astro-ph/9708004 (1997).
C.J. Hogan, Phys. Rev. Lett. 51, 1488 (1983).
A.M. Howard and R.M. Kulsrud, Astrophys. J. 483, 648 (1996).
K. Jedamzik, V. Katalinic and A. Olinto, astro-ph/9606080 (1996).
E. Kim, A. Olinto and R. Rosner, Astrophys. J. 468, 28 (1996).
A. Kosowsky and A. Loeb, Astrophys. J. 469, 1 (1996).
P.P. Kronberg, Rep. Prog. Phys. 57, 325 (1994).
P.P. Kronberg, J.J. Perry and E.L.H. Zukowski, Astrophys. J. 387, 528 (1992).
P.P. Kronberg and M. Simard-Normandin, Nature 263, 653 (1976).
R.M. Kulsrud and S.W. Anderson, Astrophys. J. 396, 606 (1992).
R.M. Kulsrud, S.C. Cowley, A.V. Gruzinov and R.N. Sudan, Phys. Rep. 283,213 (1997). See also R. M. Kulsrud, R. Cen, J. Ostriker and D. Ryn, Astrophys. J. 480, 481 (1997).
K.M. Lanzetta, A.M. Wolfe and D.A. Turnshek, Astrophys. J. 440, 435 (1995).
S. Lee, A. Olinto and G. Sigl, Astrophys. J. Lett. 455, L21 (1995).
D. Lemoine and M. Lemoine, Phys. Rev. D 52, 1995 (1995).
H. Lesch and G. Birk, Phys. of Plasmas 5, 2773 (1998).
H. Lesch and M. Chiba, Astron. Astrophys. 297, 305 (1995).
H. Lesch and M. Chiba, Fundamentals of Cosmic Phys. 18,273 (1997).
H. Lesch, R. Crusius, R. Schlickeiser and R. Wielebinski, Astron. Astrophys. 217, 99 (1989).
J.D. Huba and J.A. Fedder, Phys. Fluids B 5, 3779 (1993).
J.C.R. Magueijo, Phys. Rev. D 49,671 (1994).
J.J. Matese and R.F. O’Connell, Astrophys. J. 160, 451 (1970).
C.J.A.P. Martins and E.P.S. Shellard, astro-ph/9706287 (1997).
T. Matsuda, H. Sato and H. Takeda, Pub. astr. Soc. Japan, 23, 1 (1971).
I.N. Mishustin and A.A. Ruzmaikin, Sov. Phys. JETP 34, 233 (1973).
P. Olesen, in Nato advanced research workshop on “Theoretical Physics”, Zakopane. Poland (1997).
P.J.E. Peebles, The large scale structure of the Universe, Princeton Univ. Press. Princeton (1993).
J.J. Perry, A.M. Watson and P.P. Kronberg, Astrophys.J. 406, 407 (1993).
J.H. Piddington, Cosmic Electrodynamics. Wiley Interscience. New York (1969).
J.H. Piddington, Cosmic Electrodynamics. Wiley Interscience. New York (1970).
R. Plaga, Nature, 374, 430 (1995).
R. Pudritz, Mon. Not. R.A.S. 195, 881 (1981).
R. Pudritz and J. Silk, Astrophys. J. 342, 650 (1989).
J. Quashnock, A. Loeb and D.N. Spergel, Astrophys. J. Lett. 344, L49 (1989).
B. Ratra, Astrophys. J. Lett. 391, L1 (1992).
M. Rees, Q. Jl. R. astr. Soc. 28, 197 (1987).
M.J. Rees and M. Reinhardt, Astron. Astrophys. 19, 104 (1972).
A.A. Ruzmaikin, A.M. Shukurov and D.D. Sokoloff, Magnetic Fields of Galaxies. Kluver, Dordrecht (1988).
S.L. Shapiro and I. Wasserman, Nature, 289, 657 (1981).
S.A. Shectman, S.D. Landy, A. Oemler, D.L. Tucker, H. Lin, P. Kirshner and P.L. Schechter, Astrophys. J. 470, 172 (1996).
H. Sicotte, Mon. Not. R.A.S. 287, 1 (1997).
G. Sigl, A. Olinto and K. Jedamzik, astro-ph/9610201 (1996).
J. Silk, Astrophys. J. 151, 459 (1968).
T. Stanev, P.L. Biermann, J. Lloyd-Evans, J.P. Rachen and A. Watson, Phys. Rev. Lett. 75, 3056 (1995).
T. Stanev, Astrophys. J. 479, 290 (1997).
M. Lemoine, G. Sigl, A.V. Olinto and D.N. Schramm, Astrophys. J. Lett. 486, 115 (1997).
O. Tornkvist, hep-ph/9707513 (1997).
R.B. Tully, R. Scaramella, G. Vettolani and G. Zamorani, Astrophys. J. 388, 9 (1992).
M.S. Turner and L.M. Widrow, Phys. Rev. D 37, 2743 (1988).
T. Vachaspati, Phys. Lett B 265, 258 (1991).
T. Vachaspati and A. Vilenkin, Phys. Rev. Lett. 67, 1057 (1991).
S.I. Vainshtein and F. Cattaneo, Astrophys.J. 393, 165 (1992).
S.I. Vainshtein, E.N. Parker and R. Rosner, Astrophys. J. 404, 773 (1993).
J.P. Vallee, Astrophys. Lett 23, 87 (1983).
G. Veneziano, Phys. Lett. B 265, 287 (1991).
I. Wasserman, Astrophys. J. 224, 337 (1978).
S. Weinberg, Gravitation and Cosmology. John Wiley & Sons. New York (1972).
A.M. Wolfe, K.M. Lanzetta and A.L. Oren, Astrophys. J. 404, 480 (1992).
Ya B. Zeldovich, Sov. Phys. JETP 48, 986 (1965).
Ya B. Zeldovich and I.D. Novikov, The Structure and Evolution of the Universe. Univ. Press, Chicago (1975).
A. Zurita and E. Battaner, Astron. Astrophys. 322, 86 (1997).
E.G. Zweibel, Astrophys. J. 329, L1 (1988).
B.G. Zweibel and C. Heiles, Nature 385, 131 (1997).
# Luminosity– and morphology–dependent clustering of galaxies
## 1 Introduction
The geometrical properties of the large–scale structure in the Universe are a common test for cosmic structure formation theories. However, comparisons between analytical models and observational data suffer from the fact that theoretical predictions refer to mass correlations, whereas in galaxy catalogs only luminous matter is observed. This gap gives rise to the bias problem and is usually filled using biasing schemes. Mostly, these schemes relate properties of the density contrast field to the distribution of the galaxies, thus combining descriptors of a random field with point process characteristics. Due to the nature of the dark matter, only indirect methods are feasible to address the bias problem empirically. In this line of thought, it seems promising to ask whether the clustering properties of galaxies depend on their mass, luminosity or morphological type. The idea behind this search for luminosity and morphology segregation is that different galaxy subpopulations may trace the dark matter distribution on a different level.
Empirical investigations concerned with this problem were mainly carried out in two directions:
* The two–point correlation function was calculated for a series of volume–limited subsamples from galaxy surveys. A difference in the amplitude of the two–point correlation function between such samples was interpreted as an indication of luminosity or morphology–segregation. For luminosity segregation see e.g., Ostriker & Turner (1979); Hamilton (1988); Domínguez-Tenreiro & Martínez (1989); Benoist et al. (1996); Willmer et al. (1998). The void probability and cross–correlation functions have been used by Maurogordato & Lachièze–Rey (1987) and Valotto & Lambas (1997). For morphology segregation see e.g., Domínguez-Tenreiro et al. (1994); Hermit et al. (1996). These investigations are sensitive to segregation effects on scales roughly between 1 and 10$`h^{-1}`$Mpc. However, Coleman & Pietronero (1992) gave an alternative explanation of the rising amplitude in terms of a fractal galaxy distribution, without any luminosity–dependent clustering.
* Dressler (1980) showed that in clusters of galaxies the morphological type of a galaxy depends on the local (surface) density; this is called the morphology–density relation. For mainly spherical clusters, where the local density is closely related to the radial distance from the cluster center, this translates into the Butcher & Oemler (1978) effect. For more recent accounts of the morphology–density relation see Caon & Einasto (1995); Dressler et al. (1997); Andreon et al. (1997). Most of these investigations focussed on the morphology–density relation inside clusters, hence on scales smaller than 1.5$`h^{-1}`$Mpc. But the morphology–density relation can be observed also in groups of galaxies (Postman & Geller, 1984; Maia & da Costa, 1990) and for dwarf galaxies in the field (Binggeli et al., 1990).
With the first method, one compares two–point correlation functions, whereas with the second, one considers the relation between the local number density and the local morphology, i.e., a comparison of one–point densities. Both methods rely on unweighted descriptors.
The observations of luminosity segregation or the morphology–density relation were complemented by theoretical considerations. Motivated by the offset between the galaxy–galaxy and the cluster–cluster correlation functions, Kaiser (1984) and Bardeen et al. (1986) suggested that clusters may be understood as peaks in the density field. Starting from a Gaussian random field they showed how the amplitude of the correlation function increases with the threshold imposed on the initial density field, i.e., with the height of the peaks in the density field. This also provided an explanation for the morphology–density relation (Evrard et al., 1990).
Other authors developed a conceptual framework to describe the bias (see e.g., Coles 1993, Dekel & Lahav 1999, and refs. therein). Within these biasing schemes, characteristics of the galaxy point pattern are connected with descriptions of the density field – often the mass density contrast and the galaxy over–density are compared. The relation is assumed to be (non–) linear and either deterministic or stochastic (Dekel & Lahav, 1999). More involved biasing schemes were considered to facilitate the extraction of reasonable galaxy catalogs from $`N`$-body simulations (see e.g., Kates et al. 1991, Weiß & Buchert 1993, Kauffmann et al. 1997).
In this paper, we introduce a new method to handle the bias problem. Our approach complements both the more observational methods and the analytical and theoretical treatments. We understand the galaxies with their intrinsic properties as a realization of a marked point process. Using conditional weighted correlation functions, we put an intermediate step in between the pure point process statistics and the statistics of random fields. In our description stochasticity is present from the very beginning. It provides us with stochastic models which enable us to exclude certain families of models for the luminosity distribution of galaxies.
More precisely, the aim of our paper is twofold:
On the one hand, we want to clarify the notion of luminosity/morphology–dependent clustering by discussing this task in the mathematical framework of marked point processes (Sect. 2). This allows us to introduce a new class of indicators sensitive to luminosity segregation (Subsect. 3.1) and to discuss models for marked point patterns (Subsect. 3.2 and Subsect. 6.1). Methods similar in spirit are the cross–correlation function and luminosity–weighted correlation functions considered by Alimi et al. (1988), Börner et al. (1989), Valls-Gabaud et al. (1989), and Tegmark & Bromley (1999). Our methods allow for a study of the interplay between the spatial clustering and the luminosity and morphology distribution of the galaxies, complementing the characterization of the purely spatial distribution of the galaxies.
On the other hand, we address the empirical question, whether the luminosities or morphological types of galaxies depend on their spatial distribution by analyzing the SSRS2 catalog (da Costa et al., 1998) in Sect. 4. Our results show a significant scale–dependent luminosity and morphological segregation. To understand the data more closely we compare our results with the random field model. The comparison with galaxy samples from the IRAS 1.2Jy (Fisher et al., 1995) and the PSCz (Saunders et al., 2000) strengthens our conclusions.
In Sect. 5 we will discuss the usual way of looking for luminosity segregation via the amplitude of the correlation function in the framework of marked point processes. The criticism by Coleman & Pietronero (1992) is reviewed and we show that this degeneracy between a fractal spatial distribution and luminosity segregation is not encountered if one uses the mark–correlation functions we proposed. This strengthens the conclusions of our empirical work in Sect. 4.
Investigations inside clusters of galaxies gave clear evidence for the morphology–density relation (Dressler, 1980). In Sect. 6 we however show that the observed luminosity segregation may not be explained by the spatial interaction of early– and late–type galaxies alone. Luminosity segregation is already present in the subsample consisting only of early–type galaxies.
In Sect. 7 we summarize and provide an outlook. Technicalities concerning the estimation of mark–correlation functions are left to Appendix A.
## 2 Marked point distributions
Consider a set of points $`X=\{𝐱_i\}_{i=1}^N`$ given by the spatial coordinates $`𝐱_i^3`$ of the galaxies inside a sample geometry $`𝒟`$. Additionally to their positions in space we know intrinsic properties of the galaxies like their luminosity, mass, morphological type etc. Formally, we assign to each point $`𝐱_i`$ a mark $`m_i`$, e.g., the luminosity of the galaxy $`m_i=L_i`$, and obtain the marked point set $`X^M=\{(𝐱_i,m_i)\}_{i=1}^N`$. We are not limited to continuous marks like the luminosity, also discrete marks like morphological types (e.g., spiral or elliptical) can be used. The description of the galaxy distribution in a statistical way that we will propose in this article, rests on the assumption that the empirical data points may be considered as a realization of a marked point process. Formally, $`X=\{𝐱_i\}_{i=1}^N`$ and $`M=\{m_i\}_{i=1}^N`$ may be thought of as realizations of a point process each, which may be characterized by the usual point process statistics. Physically, however, we are interested in the interplay between the spatial statistics and the mark distribution, which is expressed in quantities combining information on the space and the mark distribution.
The second–order theory of marked point processes was developed in detail by Stoyan (1984) where also a mark–weighted conditional correlation function was introduced (see also Stoyan & Stoyan 1994). Some aspects have been also discussed by Peebles (1980).
### 2.1 One–point properties
A point process may be characterized by its moments. For a homogeneous spatial point distribution the first moment is the mean number density $`\rho `$, which may be estimated with $`N/|𝒟|`$, where $`|𝒟|`$ is the volume of the sample and $`N`$ the number of points inside $`𝒟`$. Let $`\rho _1^M(m)\mathrm{d}m`$ denote the probability that the value of a mark lies within the interval $`[m,m+\mathrm{d}m]`$, then the mean mark $`\overline{m}`$ and the variance of the marks $`V`$ are given by
$$\overline{m}=\int \mathrm{d}m\rho _1^M(m)m,\text{ and }V=\int \mathrm{d}m\rho _1^M(m)(m-\overline{m})^2,$$
(1)
which may be estimated by
$$\frac{1}{N}\sum _{i=1}^{N}m_i\text{ and }\frac{1}{N-1}\left(\sum _{i=1}^{N}m_i^2-N\overline{m}^2\right),$$
respectively.
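In practice these one-point estimators are trivial to implement; a minimal sketch in Python (ours, mirroring the formulas above) reads:

```python
# Sketch: estimators of the mean mark and the mark variance, as given above.
import numpy as np

def mark_mean_var(marks):
    """Return (mean, variance) of the marks of a marked point sample."""
    marks = np.asarray(marks, dtype=float)
    n = len(marks)
    mbar = marks.sum() / n
    var = (np.sum(marks**2) - n * mbar**2) / (n - 1)
    return mbar, var

print(mark_mean_var([1.0, 2.0, 4.0]))
```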
For a homogeneous marked point process, the joint probability $`\rho _1^{SM}(𝐱,m)\mathrm{d}V\mathrm{d}m`$ of finding<sup>1</sup><sup>1</sup>1For the sequel we speak for reasons of simplicity of “finding at $`𝐱`$ with mark $`m`$” instead of “finding in a volume element $`\mathrm{d}V`$ at position $`𝐱`$ with mark in the range $`[m,m+\mathrm{d}m]`$”. a point at position $`𝐱`$ with mark $`m`$, splits into a space–independent mark probability and the constant mean density: $`\rho _1^M(m)\mathrm{d}m\times \rho \mathrm{d}V`$. In general, the mark distribution $`\rho _1^M`$ is not homogeneous. Note that this notion of independence does not rule out luminosity segregation at all and seems a physically justified assumption, since it simply requires that no region of space has an a priori specified mark distribution different from that one of another region.
### 2.2 Two–point properties
The second–order properties of the spatial distribution of the point set $`X`$ are fully specified by the product–density $`\rho _2^S(𝐱_1,𝐱_2)\mathrm{d}V_1\mathrm{d}V_2`$ giving the probability of finding a point at $`𝐱_1`$ and another point at $`𝐱_2`$. For a stationary and isotropic point distribution we have with $`r=|𝐱_1-𝐱_2|`$
$$\rho _2^S(𝐱_1,𝐱_2)=\rho ^2(1+\xi (r))$$
(2)
with the two–point correlation function $`\xi (r)`$. Similarly, second–order properties of the marked point set $`X^M`$ are fully specified by the mark product–density:
$$\rho _2^{SM}((𝐱_1,m_1),(𝐱_2,m_2))\mathrm{d}V_1\mathrm{d}m_1\mathrm{d}V_2\mathrm{d}m_2$$
(3)
is the joint probability of finding a galaxy at $`𝐱_1`$ with the mark $`m_1`$ and another point at $`𝐱_2`$ with the mark $`m_2`$. Hence the (spatial) product–density $`\rho _2^S(𝐱_1,𝐱_2)`$ is the marginal density
$$\rho _2^S(𝐱_1,𝐱_2)=\int \mathrm{d}m_1\int \mathrm{d}m_2\rho _2^{SM}((𝐱_1,m_1),(𝐱_2,m_2)).$$
(4)
With an appropriate chosen integration measure similar definitions apply for discrete marks.
Now consider a finite domain $`𝒟`$. The normalization of $`\rho _2^S`$ is given by
$$𝒩_2=\int _𝒟\mathrm{d}^3x_1\int _𝒟\mathrm{d}^3x_2\rho _2^S(𝐱_1,𝐱_2)=𝔼[N(N-1)],$$
(5)
with $`N`$ the number of points of one realization inside $`𝒟`$, and $`𝔼`$ the mean value over several realizations.
Respecting this normalization, a marginal product density for the marks can be defined by
$$\rho _2^M(m_1,m_2)=\frac{1}{𝒩_2}\int _𝒟\mathrm{d}^3x_1\int _𝒟\mathrm{d}^3x_2\rho _2^{SM}((𝐱_1,m_1),(𝐱_2,m_2)).$$
(6)
$`\rho _2^M(m_1,m_2)\mathrm{d}m_1\mathrm{d}m_2`$ quantifies the probability to find the marks $`m_1`$ and $`m_2`$ at two given points in the distribution. Mathematically, $`\rho _2^M(m_1,m_2)`$ quantifies a real two–point property. Physically, however, we expect – at least in our case – that intrinsic correlations in mark space are not present, i.e., that
$$\rho _2^M(m_1,m_2)=\rho _1^M(m_1)\rho _1^M(m_2).$$
(7)
Otherwise the probability of finding a galaxy with mark $`m_i`$ in a fixed sample would depend on the other marks regardless how distant they are, a consequence which may seem reasonable in biosciences (epidemiology) but not in our case of large galaxy surveys. In other words, spatial mark correlations may be present, but globally the presence of a mark with value $`m`$ in the sample does not prearrange the values of the other marks. Typically, the one–point mark distribution $`\rho _1^M`$ is inhomogeneous in mark–space. Therefore, one cannot check the relation (7) from one realization only; several independent samples are needed. In future redshift surveys it may be possible to extract approximately independent subsamples separated by a large distance, allowing for such a check. Throughout this paper, we will adopt the assumption (7).
### 2.3 Mark correlations depending on the spatial distance
In the following we want to know, whether the clustering in space and the luminosity distribution are correlated. We define the conditional mark density:
$$_2(m_1,m_2|𝐱_1,𝐱_2)=\{\begin{array}{cc}\frac{\rho _2^{SM}((𝐱_1,m_1),(𝐱_2,m_2))}{\rho _2^S(𝐱_1,𝐱_2)}\hfill & \text{ for }\rho _2^S(𝐱_1,𝐱_2)\ne 0,\hfill \\ 0\hfill & \text{ otherwise }.\hfill \end{array}$$
(8)
For a stationary and isotropic point distribution, $`_2(m_1,m_2|𝐱_1,𝐱_2)`$ is the probability density<sup>2</sup><sup>2</sup>2 The notation $`_2(m_1,m_2|r)`$ is somewhat imprecise, since it does not remind us of the fact that the marks refer to given points $`𝐱_1`$ and $`𝐱_2`$. For simplicity, we do not use a more accurate notation like $`_2(m_1(𝐱_1),m_2(𝐱_2)|r)`$. of finding the marks $`m_1`$ and $`m_2`$ at two galaxies located at $`𝐱_1`$ and $`𝐱_2`$, respectively, under the condition that galaxies at these positions are present in the data. For the following, we assume that this quantity is only a function of the galaxy distance $`r=|𝐱_1-𝐱_2|`$: $`_2(m_1,m_2|r)`$. This assumption expresses a sort of homogeneity and isotropy; however, it does not presuppose a well–defined mean density and is thus only a weak requirement.
The full mark product–density can be written as
$$\rho _2^{SM}((𝐱_1,m_1),(𝐱_2,m_2))=_2(m_1,m_2|𝐱_1,𝐱_2)\rho _2^S(𝐱_1,𝐱_2).$$
(9)
$`_2(m_1,m_2|r)`$ is a function depending on three variables and is therefore hard to estimate. With the mark–weighted correlation functions and the discrete mark–correlation function we further distill the information as discussed in Subsection 3.1.
If the distribution of the marks is independent of the distribution of the points, the conditional mark density becomes independent of $`r`$:
$$_2(m_1,m_2|r)=\rho _1^M(m_1)\rho _1^M(m_2),$$
(10)
Intuitively, this independence may be understood in the following way: After having distributed galaxies in space, we choose marks (as a realization of a second independent stochastic process) and distribute them randomly without any regard to the clustering of the galaxies.
Equation (10) is the basic assumption behind projection formulas like Limber’s equation (Peebles, 1980). If, on the other hand, $`_2(m_1,m_2|r)`$ does depend on $`r`$, we speak of e.g., mark segregation: The probability of observing two marks $`m_1`$ and $`m_2`$ (e.g., luminosities) on the galaxies at $`𝐱_1`$ and $`𝐱_2`$ varies with the separation $`r`$ of these two galaxies.
Note that for every empirical dataset of marked points (which we may think of as a realization of a marked point process) we can artificially construct another dataset with the same spatial features showing no mark segregation by redistributing the marks to the points randomly. This bootstrap resampling strategy for the marks provides a method for testing the statistical significance of mark correlations.
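A minimal implementation of this randomization test might look as follows (our sketch; the function `stat` stands for any estimator of a mark statistic, e.g. of $`k_{mm}(r)`$ introduced below, evaluated on the fixed point positions):

```python
# Sketch: null-hypothesis envelope from random redistribution of the marks.
import numpy as np

def mark_shuffle_envelope(points, marks, stat, n_real=99, seed=None):
    """Pointwise min/max of stat(points, shuffled_marks) over n_real shuffles."""
    rng = np.random.default_rng(seed)
    samples = np.array([stat(points, rng.permutation(marks))
                        for _ in range(n_real)])
    return samples.min(axis=0), samples.max(axis=0)
```

An observed mark statistic lying outside this envelope then signals mark segregation at the corresponding scale.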
### 2.4 Spatial correlations depending on the marks
There are complementary definitions of this sort of independence or luminosity segregation. For example, we can think the other way round and define a conditional density that there be two galaxies at $`𝐱_1`$ and $`𝐱_2`$, under the condition that their marks be $`m_1`$ and $`m_2`$:
$$𝒮_2(𝐱_1,𝐱_2|m_1,m_2)=\{\begin{array}{cc}\frac{\rho _2^{SM}((𝐱_1,m_1),(𝐱_2,m_2))/𝒩_2}{\rho _2^M(m_1,m_2)}\hfill & \text{ for }𝒩_2\rho _2^M(m_1,m_2)\ne 0,\hfill \\ 0\hfill & \text{ otherwise },\hfill \end{array}$$
(11)
with $`𝒩_2`$ given in Eq. (5). If the conditional space correlation is independent of $`m_1`$ and $`m_2`$, then $`𝒮_2(𝐱_1,𝐱_2|m_1,m_2)=\rho _2^S(𝐱_1,𝐱_2)/𝒩_2`$. In the case of luminosity segregation, on the other hand, the values of the marks influence the spatial clustering. Using $`𝒮_2`$ we will discuss the usual way of looking for luminosity segregation in Subsect. 5.1.
### 2.5 $`n`$–point properties
For completeness we mention that $`n`$–point–properties may be discussed in the same way. Basic quantities are the $`n`$–point product densities $`\rho _n^{SM}((𝐱_1,m_1),\mathrm{},(𝐱_n,m_n))`$ and the conditional densities $`_n(m_1,\mathrm{},m_n|𝐱_1,\mathrm{},𝐱_n)`$. At this level the issue may be re–discussed, whether the mark distribution depends on the spatial clustering.
Robust statistics for the clustering of galaxies in space, incorporating higher–order correlations, are the $`J`$–function (van Lieshout & Baddeley 1996, Kerscher 1998, Kerscher et al. 1999) and the Minkowski functionals (Mecke et al. 1994, for a review see Kerscher 2000). A first extension of the $`J`$–functions to discretely marked point sets is discussed by van Lieshout & Baddeley (1997). The application to galaxy catalogs and the generalization for continuous marks is currently under investigation.
## 3 Mark–weighted conditional correlation functions and models for marked point distributions
Since the joint space and mark product–density $`\rho _2^{SM}`$ and the conditional mark density $`ℳ_2`$ depend on at least three variables, they are not easy to handle. Therefore, we discuss quantities accessible both to straightforward interpretation and to numerical estimation. In particular, we investigate the mark–weighted conditional densities.
### 3.1 Mark–weighted conditional correlation functions
For a non–negative weighting function $`f(m_1,m_2)`$ we define the average over pairs with separation $`r`$:
$$⟨f⟩_\mathrm{P}(r)=\int \mathrm{d}m_1\int \mathrm{d}m_2f(m_1,m_2)ℳ_2(m_1,m_2|r).$$
(12)
$`⟨f⟩_\mathrm{P}(r)`$ is the expectation value of the weighting function $`f`$ (depending only on the marks), under the condition that we find a galaxy–pair with separation $`r`$ in the data. With this definition we separate the mark correlation properties from the spatial clustering properties of the underlying point–distribution, as can be seen directly from
$$⟨f⟩_\mathrm{P}(r)=\frac{\int \mathrm{d}m_1\int \mathrm{d}m_2f(m_1,m_2)\rho _2^{SM}((𝐱_1,m_1),(𝐱_2,m_2))}{\rho _2^S(r)}$$
(13)
for $`\rho _2^S(r)\ne 0`$. We are free to choose appropriate weighting functions adapted to our problem. In the following we discuss common choices from the literature and introduce some new ones.
#### 3.1.1 Continuous marks
Using several positive weighting functions we construct statistical indicators to investigate the mark correlation properties of a point set (see also Stoyan & Stoyan 1994 and Schlather 1999); we assume throughout that the marks are positive numbers:
1. At first we consider the mean mark:
$$k_m(r)=\frac{⟨m_1+m_2⟩_\mathrm{P}(r)}{2\overline{m}}.$$
(14)
A value of $`k_m`$ equal to unity indicates the absence of mark segregation. A preferred clustering of high marks (e.g., $`m>\overline{m}`$) at a scale $`r`$ can be concluded from $`k_m(r)>1`$.
2. Closely related is Stoyan’s $`k_{mm}`$–function<sup>3</sup><sup>3</sup>3Also called (normalized) mark–correlation function, see however the comments by Schlather (1999). (Stoyan & Stoyan, 1994):
$$k_{mm}(r)=\frac{⟨m_1m_2⟩_\mathrm{P}(r)}{\overline{m}^2}.$$
(15)
With $`k_{mm}`$ we investigate the square of the geometric mean of the marks on points at a distance of $`r`$. Therefore, a preferred clustering of marks at a scale $`r`$ can be inferred from $`k_{mm}(r)>1`$ similar to $`k_m`$. Note that if the mark is the mass of a galaxy, $`k_{mm}`$ may serve as an estimator for the conditional mass correlations $`𝔼[\varrho (0)\varrho (𝐱)]/\rho _2^2(0,𝐱)`$, where $`\varrho (𝐱)`$ is the mass–density at position $`𝐱`$, thus it quantifies the ratio between galaxy and mass correlations.
3. The mark variogram (Wälder & Stoyan, 1996) is defined by
$$\gamma (r)=\frac{1}{2}⟨(m_1-m_2)^2⟩_\mathrm{P}(r)=⟨m_1^2⟩_\mathrm{P}(r)-⟨m_1m_2⟩_\mathrm{P}(r).$$
(16)
$`\gamma (r)`$ equals the variance $`V`$ of the mark distribution, if mark segregation is absent; it exceeds $`V`$ at some scale $`r`$, if points that are about $`r`$ apart from each other tend to have very different marks.
4. Another tool for investigating the variance of the mark distribution is the mark covariance function (Cressie, 1991)
$$\text{cov}(r)=⟨m_1m_2⟩_\mathrm{P}(r)-⟨m_1⟩_\mathrm{P}(r)⟨m_2⟩_\mathrm{P}(r)=⟨m_1m_2⟩_\mathrm{P}(r)-⟨m_1⟩_\mathrm{P}^2(r).$$
(17)
Thus, luminosity segregation can be detected by checking whether $`\text{cov}(r)`$ differs significantly from zero for some $`r`$.
Both $`\gamma (r)`$ and $`\text{cov}(r)`$ mix the two–point and one–point fluctuations of the mark distribution. To quantify the fluctuations of the mark at one point only, given there is another point at distance $`r`$, we suggest using
$$\text{var}(r)=⟨\left(m_1-⟨m_1⟩_\mathrm{P}(r)\right)^2⟩_\mathrm{P}(r).$$
(18)
From Eqs. (16) and (17) one directly obtains
$$\text{var}(r)=\gamma (r)+\text{cov}(r).$$
(19)
6. Closely related to $`\text{cov}(r)`$ is the mark–correlation function of Isham (1985)
$$\text{cor}(r)=\frac{⟨m_1m_2⟩_\mathrm{P}(r)-⟨m_1⟩_\mathrm{P}^2(r)}{⟨m_1^2⟩_\mathrm{P}(r)-⟨m_1⟩_\mathrm{P}^2(r)}=\frac{\text{cov}(r)}{\text{var}(r)},$$
(20)
the covariance function divided by the fluctuations of the mark.
Schlather (1999) showed that there is an ambiguity in the definitions of these mark characteristics at $`r`$ equal to zero, but there is no problem for $`r>0`$. Since we always have to use a finite and non–zero $`r`$ to estimate these mark characteristics, this ambiguity is a technical point we do not need to consider further. As another characteristic for marked point distributions, Capobianco & Renshaw (1998) consider the extension of the $`k_{mm}`$ function on a two–dimensional grid.
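All these characteristics are plain pair averages and therefore straightforward to estimate. The following minimal sketch (Python/NumPy; all function and variable names are ours) computes $`k_{mm}`$, $`\gamma `$, cov, and var by brute force over all pairs, corresponding to the simple estimator without boundary corrections of Eq. (A5) in Appendix A:

```python
import numpy as np
from scipy.spatial.distance import pdist

def mark_correlations(points, marks, r_edges):
    """Estimate k_mm(r), gamma(r), cov(r), var(r) from pair averages <.>_P.

    points : (N, 3) positions, marks : (N,) positive marks,
    r_edges: bin edges in the pair separation r.
    """
    d = pdist(points)                      # all N(N-1)/2 pair separations
    i, j = np.triu_indices(len(marks), 1)  # pair indices in the same order
    m1, m2 = marks[i], marks[j]
    mbar = marks.mean()
    k_mm, gam, cov, var = [], [], [], []
    for lo, hi in zip(r_edges[:-1], r_edges[1:]):
        sel = (d >= lo) & (d < hi)
        if not sel.any():                  # empty bin
            for out in (k_mm, gam, cov, var):
                out.append(np.nan)
            continue
        both = np.concatenate([m1[sel], m2[sel]])  # symmetrize (pairs are unordered)
        mm = np.mean(m1[sel] * m2[sel])            # <m1 m2>_P
        mp = both.mean()                           # <m1>_P
        msq = np.mean(both ** 2)                   # <m1^2>_P
        k_mm.append(mm / mbar ** 2)                # Eq. (15)
        gam.append(msq - mm)                       # Eq. (16)
        cov.append(mm - mp ** 2)                   # Eq. (17)
        var.append(msq - mp ** 2)                  # Eq. (18)
    return (np.array(k_mm), np.array(gam), np.array(cov), np.array(var))
```

For a few thousand galaxies the $`O(N^2)`$ pair enumeration is unproblematic; for larger samples a tree–based neighbor search would be the natural replacement.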
#### 3.1.2 Discrete marks
To investigate the correlation properties between galaxies of different morphological types, the marks $`m_i`$ are chosen out of a finite range of attributes $`m_i\in \{t_\alpha \}_{\alpha =1}^A`$. We could also use other intrinsic properties, like spectral features etc. of the galaxies, to define these discrete marks. Similarly, a finite binning may be used for continuous marks. Consider pairwise disjoint bins $`I_\alpha `$ in luminosity space; then the mark is chosen to be $`m_i=t_\alpha `$ if the luminosity of the galaxy is $`L_i\in I_\alpha `$.
For discrete marks the following symmetric weight functions for $`\alpha ,\beta =1,\dots ,A`$ are appropriate:
$$f_{t_\alpha t_\beta }(m_1,m_2)=\delta _{m_1t_\alpha }\delta _{m_2t_\beta }+(1-\delta _{\alpha \beta })\delta _{m_2t_\alpha }\delta _{m_1t_\beta },$$
(21)
where the Kronecker $`\delta _{m_1t_\alpha }`$ equals unity if $`m_1=t_\alpha `$ and zero otherwise. According to Eq. (12) we consider the (normalized) conditional cross–correlation functions
$$C_{t_\alpha ,t_\beta }(r)=⟨f_{t_\alpha t_\beta }⟩_\mathrm{P}(r).$$
(22)
Clearly, $`\sum _{\alpha =1}^A\sum _{\beta =\alpha }^Af_{t_\alpha ,t_\beta }=1`$ and therefore also
$$\sum _{\alpha =1}^A\sum _{\beta =\alpha }^AC_{t_\alpha ,t_\beta }(r)=1$$
(23)
for all $`r`$. If the marks are independent of the distribution in space one can show that
$$C_{t_\alpha ,t_\beta }(r)=\frac{2\rho _{t_\alpha }\rho _{t_\beta }}{\rho ^2}\text{ for }t_\alpha \ne t_\beta ,\text{ and }C_{t_\alpha ,t_\alpha }(r)=\frac{\rho _{t_\alpha }^2}{\rho ^2},$$
(24)
with the number density $`\rho _{t_\alpha }`$ of points with mark $`t_\alpha `$.
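The conditional cross–correlation functions are again simple pair averages; a sketch under the same brute–force assumptions as above (Python/NumPy, names ours):

```python
import numpy as np
from scipy.spatial.distance import pdist

def cross_correlations(points, types, r_edges):
    """C_{alpha,beta}(r) of Eq. (22): the fraction of pairs at separation r
    carrying the type combination (alpha, beta).  By Eq. (23) the values
    sum to unity in every r-bin."""
    d = pdist(points)
    i, j = np.triu_indices(len(types), 1)
    t1, t2 = types[i], types[j]
    labels = np.unique(types)
    nbin = len(r_edges) - 1
    C = {(a, b): np.full(nbin, np.nan)
         for k, a in enumerate(labels) for b in labels[k:]}
    for n, (lo, hi) in enumerate(zip(r_edges[:-1], r_edges[1:])):
        sel = (d >= lo) & (d < hi)
        npairs = sel.sum()
        if npairs == 0:
            continue
        for k, a in enumerate(labels):
            for b in labels[k:]:
                hits = np.sum(((t1[sel] == a) & (t2[sel] == b)) |
                              ((t1[sel] == b) & (t2[sel] == a)))
                C[(a, b)][n] = hits / npairs
    return C
```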
Summarizing, there is a variety of test quantities which allows us to search for luminosity segregation in real data. Note that these quantities are applicable to a single data set without the need of constructing a series of volume–limited subsamples. With these methods we are able to gain new insights into the luminosity– and morphology–dependent clustering of galaxies (Sect. 4). As we will show in Subsect. 5.3, these methods break the degeneracy between fractal spatial structure and luminosity segregation.
### 3.2 Marked Poisson processes
Before applying these test quantities to real data we explain their properties with a simple model, where the marks are artificially constructed from the spatial pattern. Other models are discussed in Subsect. 4.2 and Subsect. 6.1.
We start with Poisson–distributed points $`𝐱_i`$, with number density $`\rho `$, and assign to each point the mark $`m_i=N_i(R)`$, where $`N_i(R)`$ is the number of other points within a sphere of radius $`R`$ around the point $`𝐱_i`$. Explicit formulas for $`\gamma (r)`$ and $`k_{mm}(r)`$ were derived by Wälder & Stoyan (1996). In Fig. 1 we compare numerical simulations with the theoretical curves. Points which are members of a pair with small separation are on average situated in over–dense regions, have more neighbors, and therefore get higher marks. This is reflected by $`k_m(r)`$ and $`k_{mm}(r)`$ being larger than unity on small scales ($`k_m(r)`$ and $`k_{mm}(r)`$ indeed show a jump at $`r=R`$). Since nearby points get similar marks, the mark variogram $`\gamma (r)`$ is suppressed on small scales. However, for a Poisson process the mean fluctuations of the mark at one point are not influenced by the presence of nearby other points, and consequently $`\text{var}(r)=V`$. The strong correlation of marks on small scales can also be seen from the covariance $`\text{cov}(r)`$ and the correlation $`\text{cor}(r)`$. Empirically, both $`\text{cov}(r)`$ and $`\text{cor}(r)`$, and also $`k_m(r)`$ and $`k_{mm}(r)`$, carry the same information content; we found this as well in our analysis of the galaxy catalogs in Sect. 4. Moreover, $`\gamma (r)`$ may be expressed through $`\text{cov}(r)`$ and $`\text{var}(r)`$ (Eq. (19)). Therefore, in the following we will focus only on $`k_{mm}(r)`$, $`\text{var}(r)`$, and $`\text{cov}(r)`$.
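Such a realization is easy to generate; the sketch below (Python/SciPy) uses a periodic box — our simplification to avoid boundary effects — and all names are ours. Feeding the result into the `mark_correlations` sketch of Subsect. 3.1.1 reproduces the qualitative behavior described above, in particular the constancy $`\text{var}(r)=V`$:

```python
import numpy as np
from scipy.spatial import cKDTree

def marked_poisson(density, box=100.0, R=5.0, seed=1):
    """Marked Poisson process: Poisson points in a periodic box, each
    marked with the number of other points within a sphere of radius R."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(density * box ** 3)
    points = rng.uniform(0.0, box, size=(n, 3))
    tree = cKDTree(points, boxsize=box)      # periodic boundary conditions
    counts = tree.query_ball_point(points, R, return_length=True)
    marks = np.asarray(counts) - 1           # exclude the point itself
    return points, marks
```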
## 4 Luminosity and morphological segregation in the galaxy distribution
Having clarified the basic notion of luminosity segregation, we now apply the above–defined characteristics to real data and discuss the empirical question whether there is evidence for luminosity segregation in the large–scale structure of the galaxy distribution. We study luminosity– and morphology–dependent clustering in the Southern Sky Redshift Survey 2 (SSRS2, da Costa et al. 1998). This survey is 99% complete with a limiting magnitude of $`m_B=15.5`$ within the region $`-40^{\circ }\le \delta \le -2.5^{\circ }`$ and $`b\le -40^{\circ }`$ and the region $`\delta \le 0^{\circ }`$ and $`b\ge 35^{\circ }`$. We will focus on a volume–limited subsample with 100$`h^{-1}`$Mpc depth containing 1179 galaxies. We obtained the same results looking at samples with different limiting depths (see Sect. 4.5). In Sect. 4.6 we compare with the results from IRAS selected samples.
### 4.1 Luminosity as a continuous mark
For a galaxy at a distance $`r_i=|𝐱_i|`$ from our galaxy with a magnitude $`\text{mag}(𝐱_i)`$ the luminosity $`L_i`$ is proportional to $`r_i^2\,10^{-0.4\text{mag}(𝐱_i)}`$. Since we look at normalized quantities, the absolute scaling of the luminosity is unimportant, and we assign to a galaxy at $`𝐱_i`$ the mark $`m_i=r_i^2\,10^{-0.4\text{mag}(𝐱_i)}`$. To estimate $`k_{mm}`$, var, and cov we show the results obtained with the estimator without boundary corrections, which is distinguished by its simplicity and unbiasedness. The other estimators gave fully consistent results. A systematic examination of the estimators further justifying this approach is given in Appendix A. The error bars for the case of no luminosity segregation were estimated by randomly redistributing the marks of the galaxies, keeping their positions in space fixed.
Already at first glance Fig. 2 reveals that all test quantities show evidence of luminosity segregation at a high level of significance, especially $`k_{mm}`$ and var. The increase of $`k_{mm}`$ towards small scales supports the hypothesis that bright galaxies exhibit stronger clustering than the dim ones ($`k_m`$ shows the same feature). The strong signal of var is a result which escaped previous analyses: the luminosity fluctuations of galaxies with a neighbor closer than $`15h^{-1}\mathrm{Mpc}`$ are enhanced, showing that the luminosity distribution of these galaxies is broader, in addition to their higher mean luminosity as detected by $`k_{mm}`$. Both $`k_{mm}`$ and var show a signal out to 15$`h^{-1}`$Mpc, indicating that luminosity segregation is not confined to clusters of galaxies. The covariance cov measures the correlations between the luminosities of both galaxies. It shows only weak evidence for luminosity segregation on large scales; however, on scales smaller than 3$`h^{-1}`$Mpc, $`\text{cov}(r)>0`$ indicates an excess correlation between the luminosities of two galaxies: close pairs of galaxies tend to assume similar luminosities. At $`r\approx 10h^{-1}\mathrm{Mpc}`$, $`k_{mm}`$ and especially var show a second peak, indicating that the average luminosity of the galaxy pairs and the fluctuations of the luminosity on each galaxy are enhanced, whereas cov shows a negative minimum corresponding to an increased diversity between the luminosities of the two galaxies. Clearly this is at most a two–$`\sigma `$ result; however, these features also appear in volume–limited samples with different depths.
### 4.2 The random field model
To understand the data in more detail we compare with a particular model for marked point processes which shows mark segregation (Wälder & Stoyan, 1996). In the random field model the marks $`m_i`$ are assigned to the points $`𝐱_i`$ of an (unmarked) point process using an independent random field $`u(𝐱)`$: $`m_i=u(𝐱_i)`$. This is a basic model in geo–statistics (see e.g., Cressie 1991). If the point process and the random field are homogeneous, so is the marked point process. In this case one obtains for $`r>0`$ (Wälder & Stoyan, 1996)
$$k_{mm}(r)=\frac{1}{\overline{m}^2}𝔼[u(0)u(r)],$$
(25)
and
$$\gamma (r)=\frac{1}{2}𝔼\left[(u(0)-u(r))^2\right].$$
(26)
Here, $`𝔼`$ is the average over several realizations of the random field, thus $`𝔼[u(0)u(r)]`$ is the covariance of the field. Using well–known properties of random field covariances (Adler, 1981; Wälder & Stoyan, 1996), a relation for the random field model can be derived:
$$\gamma (r)=𝔼\left[u(0)^2\right]-\overline{m}^2k_{mm}(r)=V+\overline{m}^2-\overline{m}^2k_{mm}(r).$$
(27)
This enables us to test whether a marked point process may be understood in terms of the random field model. Note that in the random field model the marks are given by an underlying random field, which is not affected by the spatial distribution of the points. This does not cover the general case, where the marks on the points may be influenced by spatial interactions of the points, as in the marked Poisson process in Sect. 3.2. Indeed, the relation (27) is not fulfilled for that model, as can be inferred directly from Fig. 1.
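The test based on Eq. (27) requires no further estimation machinery; a minimal sketch (Python/NumPy, names ours) computes the variogram expected under the random field model from the estimated $`k_{mm}(r)`$ and the one–point mark distribution:

```python
import numpy as np

def gamma_random_field(k_mm, marks):
    """Variogram predicted by the random field model, Eq. (27):
    gamma_rf(r) = V + mbar^2 - mbar^2 k_mm(r).  A clear mismatch with
    the directly estimated gamma(r) rules out the random field model."""
    mbar = marks.mean()
    V = marks.var()
    return V + mbar ** 2 - mbar ** 2 * np.asarray(k_mm)
```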
From Fig. 3 we see that for the galaxy distribution the estimated variogram $`\gamma (r)`$ and the $`\gamma ^{\text{rf}}(r)`$ calculated from Eq. (27) show the opposite behavior. Hence, the luminosity segregation observed in this galaxy sample cannot be described by a random field model. Therefore, the luminosity of a galaxy does not trace an independent luminosity field, but rather depends on the spatial interactions with other galaxies. Such an interaction is expected physically in clusters of galaxies, where galaxies merge. Beyond cluster scales this “interaction” may be caused by a common origin in the same large–scale feature of the density distribution.
### 4.3 Luminosity classes as discrete marks
Now we split the volume–limited sample with 100$`h^{-1}`$Mpc depth into three distinct subsamples with 393 galaxies each. These subsamples consist of luminous, medium, and dim galaxies, labeled with $`l`$, $`m`$ and $`d`$ respectively. The conditional cross–correlation functions $`C_{dd}`$, $`C_{dm}`$, $`C_{dl}`$, $`C_{mm}`$, $`C_{ml}`$, $`C_{ll}`$ are shown in Fig. 4, estimated from the volume–limited sample with 100$`h^{-1}`$Mpc depth using the estimator employing no boundary correction.
They show that our above interpretation of $`k_{mm}(r)`$ based on Fig. 2 points in the right direction. At scales up to $`5h^{-1}\mathrm{Mpc}`$ the bright galaxies cluster more strongly than the other ones; this effect is at the expense of the dim galaxies, while the galaxies with medium luminosities do not contribute to luminosity segregation. However, an analysis based on luminosity classes cannot explain the strong peak of var and cov at small scales, since both embody fluctuations of the marks. Note that this partition into luminous, medium, and dim galaxies is arbitrary and neither physically justified nor suggested directly by the data. We also divided the sample into two luminosity classes of equal size. Here the cross–correlations are all compatible with the randomized results and no luminosity segregation seems to be present. This emphasizes the discriminative power of the continuous mark correlation functions: the conditional cross–correlation functions for the binned marks may be blind to luminosity segregation, but with a carefully adapted binning they are able to strengthen the conclusions obtained with the continuous mark–correlation functions.
### 4.4 Morphological types as discrete marks
Using the morphological type of a galaxy as a mark, we investigate morphological segregation with the conditional cross–correlation functions defined in Sect. 3.1.2.
The morphological classification of the galaxies in the SSRS2 catalog was compiled from different sources, so only wide classes will give reliable results (da Costa et al., 1998). We compare the clustering properties of two classes, consisting of spiral, irregular, and peculiar galaxies, labeled with $`l`$ (late type), and elliptical and lenticular galaxies, labeled with $`e`$ (early type). We discard the small fraction of unclassified galaxies. In Fig. 5 the conditional cross–correlation functions $`C_{ee}(r)`$, $`C_{el}(r)`$, $`C_{ll}(r)`$ are shown, estimated from the volume–limited sample with 100$`h^{-1}`$Mpc depth, using no boundary correction.
The results demonstrate that the clustering properties of the SSRS2 galaxies depend on morphology. Although the late–type galaxies dominate the catalog, the small–scale clustering especially is disproportionately due to pairs of early–type galaxies. In Subsect. 4.3 we saw that the luminous galaxies tend to cluster more strongly. At this point the question arises whether the morphology segregation is a possible explanation of the luminosity segregation or vice versa. We will discuss the connection between both sorts of mark segregation in Sect. 6.1.
### 4.5 Error estimates
In the preceding sections we have shown results for a volume–limited sample with 100$`h^{-1}`$Mpc depth. We also considered volume–limited samples with limiting depths of 60$`h^{-1}`$Mpc, 80$`h^{-1}`$Mpc, and 120$`h^{-1}`$Mpc, all giving similar results. Moreover, the results do not change if we use luminosity distances instead of Euclidean ones and apply a type–dependent $`K`$–correction as used by Benoist et al. (1996) (see Fig. 6).
Systematic errors may occur, since we performed our analysis in redshift space, i.e., we estimate the luminosity $`L`$ of a galaxy using its redshift $`z`$: $`L\propto z^2\,10^{-0.4\text{mag}}`$. Therefore peculiar velocities not only change the spatial correlations, but may also bias the values of the marks in a systematic way. It is difficult to correct for such an effect, since in–fall and streaming motions lead to correlated peculiar velocities. To estimate the order of magnitude of this error we randomly add a line–of–sight peculiar velocity to each galaxy, following a Gaussian distribution with zero mean and a width of $`300`$ km/s, in agreement with the value for the pairwise velocity dispersion in the SSRS2 given by Marzke et al. (1995). In randomizing the radial velocities independently we overestimate this error, since correlated pairs are eventually torn apart<sup>4</sup><sup>4</sup>4In adding random peculiar velocities we also account for possible errors within the measurements of the redshifts, which are in fact much smaller than the imprints of peculiar velocities.. Repeating this procedure several times we can show that the mean values of $`k_{mm}`$, var, and cov do not change compared to the results in Fig. 2. The additional fluctuations introduced by this procedure are smaller than the statistical errors quantified by randomizing the marks, as can be seen in Fig. 6. Both $`k_{mm}`$ and var show a signal outside the one–$`\sigma `$ range of this luminosity error combined with the statistical errors, whereas cov becomes marginally consistent.
Note that in volume–limited samples a special sort of Malmquist bias may influence the luminosities: the luminosities are estimated using the flux and the redshift as the distance indicator. Hence the distance is influenced by the individual peculiar velocity of the galaxy. Consider a shell at distance $`r`$. For geometrical reasons more galaxies get scattered into the shell from the outer side than get scattered out of it. Hence, on average more galaxies are assigned too small distances, resulting in underestimated luminosities. Considering only galaxies with a distance smaller than 90$`h^{-1}`$Mpc in the volume–limited sample with 100$`h^{-1}`$Mpc depth we obtain nearly identical results for the mark–correlation functions. Therefore, this sort of bias does not affect our analysis.
### 4.6 IRAS selected galaxies
Up to now we investigated luminosity segregation in the optically selected SSRS2 catalog, with the luminosities estimated from the B–magnitude. To see how our results depend on the selection criteria imposed on the catalog we look at the mark correlations determined from the infrared selected IRAS 1.2 Jy and PSCz galaxy catalogs (for details see Fisher et al. 1995, Saunders et al. 2000).
We analyze 2259 galaxies in the volume–limited sample of the PSCz galaxy catalog with a depth of 100$`h^{-1}`$Mpc inside the mask given by Saunders et al. (2000). Similarly to Sect. 4 we use $`m_i=r_i^2f(𝐱_i)`$ as a continuous mark, proportional to the luminosity of the galaxy at a distance of $`r_i=|𝐱_i|`$ with an observed flux $`f(𝐱_i)`$ at 60 microns. From Fig. 7 we conclude that no significant luminosity segregation is present in the PSCz galaxy catalog. The same result holds for volume–limited samples with different depths, and for volume–limited samples extracted from the IRAS 1.2 Jy catalog. This confirms the results of Bouchet et al. (1993) from the IRAS 1.2 Jy and especially the investigation of the PSCz by Szapudi et al. (1999), who used a variant of the conditional cross–correlations discussed in Subsect. 3.1.2. Similarly, only a weak dependence on spectral features was reported by Mann et al. (1996) for the QDOT catalog. In Sects. 6.2 and 6.3 we will see that the luminous early–type galaxies play a dominant role for luminosity and morphology segregation. This is supported by the negative results from these IRAS selected samples, since early–type galaxies are significantly underrepresented in infrared–selected galaxy samples.
There is, however, an interesting feature in the deeper volume–limited samples from the PSCz. Both $`k_{mm}(r)`$ and $`\text{var}(r)`$ are consistent with a random mark distribution, but the covariance $`\text{cov}(r)`$ shows an almost three–$`\sigma `$ peak near $`r=20h^{-1}\mathrm{Mpc}`$. This increased covariance at 20$`h^{-1}`$Mpc currently lacks an explanation; however, the feature is also visible in volume–limited samples with 200$`h^{-1}`$Mpc and 300$`h^{-1}`$Mpc depth, and is stable against distance cuts and different binnings.
## 5 Luminosity segregation via amplitudes
Previous investigations detecting luminosity segregation have used a sequence of volume–limited samples and compared the correlation amplitude of the two–point correlation function $`\xi (r)`$ (see e.g., Willmer et al. 1998). In Subsect. 5.1 we show how this can be incorporated into the more general formalism provided in Sect. 2. In Subsect. 5.2 we reassess the arguments given by Coleman & Pietronero (1992) showing that there is a degeneracy between a scale–invariant point distribution and luminosity segregation if the analysis is based on the amplitudes of $`\xi `$. The mark characteristics introduced in Subsect. 3.1 do not suffer from this artifact as shown in Subsect. 5.3. This strengthens our conclusions in Sect. 4 that there is indeed luminosity and morphological segregation.
### 5.1 Luminosity segregation from a series of volume–limited samples
Consider a flux–limited sample with limiting flux $`f_{\mathrm{lim}}`$. Every galaxy at a distance $`|𝐱_i|`$ with observed flux $`L_i/(4\pi |𝐱_i|^2)`$ larger than $`f_{\mathrm{lim}}`$ is included in the sample. We construct volume–limited subsamples by introducing a limiting depth $`R`$ and a limiting luminosity $`L_{\mathrm{lim}}`$ with $`L_{\mathrm{lim}}/(4\pi R^2)=f_{\mathrm{lim}}`$ and by admitting only galaxies with $`|𝐱|<R`$ and $`L>L_{\mathrm{lim}}`$. In such a volume–limited sample<sup>5</sup><sup>5</sup>5 In general, we have more freedom in constructing volume–limited samples: varying $`R`$ and $`L_{\mathrm{lim}}`$ independently, as long as the constraint $`R^2<\frac{L_{\mathrm{lim}}}{4\pi f_{\mathrm{lim}}}`$ is respected. Holding $`L_{\mathrm{lim}}`$ fixed, we can vary $`R`$ and look whether the statistical properties, e.g., the amplitude of the correlation function $`\xi `$, differ between these samples. This would allow us to test for fractal spatial structures independently of luminosity segregation. the observed number density $`\rho _{1,R}^S(𝐱)=\rho _{1,R}^S`$ is spatially constant
$$\rho _{1,R}^S=\rho \int _{L_{\mathrm{lim}}}^{\infty }\mathrm{d}L\rho _1^M(L),$$
(28)
if the underlying galaxy pattern is homogeneous.
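The construction of such a volume–limited subsample from catalog data is compact; a minimal sketch (Python/NumPy; names ours, and the magnitude zero–point cancels from the comparison):

```python
import numpy as np

def volume_limited(r, mag, mag_lim, R):
    """Volume-limited subsample of a catalog flux-limited at mag_lim.

    r   : redshift-space distances, mag : apparent magnitudes.
    Keeps galaxies with r < R whose luminosity r^2 10^(-0.4 mag)
    exceeds L_lim, the luminosity of a galaxy with mag_lim at r = R.
    """
    L = r ** 2 * 10.0 ** (-0.4 * mag)           # up to a constant factor
    L_lim = R ** 2 * 10.0 ** (-0.4 * mag_lim)   # same constant cancels
    return (r < R) & (L > L_lim)
```

For the SSRS2 of Sect. 4, $`\text{mag}_{\mathrm{lim}}=15.5`$ and $`R=100h^{-1}`$Mpc reproduce the subsample used there.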
For two–point properties we can proceed similarly. The spatial two–point density in the volume–limited samples for $`|𝐱_1|<R`$ and $`|𝐱_2|<R`$ is
$$\rho _{2,R}^S(𝐱_1,𝐱_2)=\int _{L_{\mathrm{lim}}}^{\infty }\mathrm{d}L_1\int _{L_{\mathrm{lim}}}^{\infty }\mathrm{d}L_2\rho _2^{SM}((𝐱_1,L_1),(𝐱_2,L_2)).$$
(29)
Using the definition (11) of the conditional probability density $`𝒮_2`$ and the assumption (7) we get
$$\rho _{2,R}^S(𝐱_1,𝐱_2)=𝒩_2\int _{L_{\mathrm{lim}}}^{\infty }\mathrm{d}L_1\int _{L_{\mathrm{lim}}}^{\infty }\mathrm{d}L_2𝒮_2(𝐱_1,𝐱_2|L_1,L_2)\rho _1^M(L_1)\rho _1^M(L_2).$$
(30)
With $`r=|𝐱_1-𝐱_2|`$, the two–point correlation function $`\xi _R(r)`$ in a volume–limited sample is then
$$\xi _R(r)+1=\frac{𝒩_2}{\left(\rho \int _{L_{\mathrm{lim}}}^{\infty }\mathrm{d}L\rho _1^M(L)\right)^2}\int _{L_{\mathrm{lim}}}^{\infty }\mathrm{d}L_1\int _{L_{\mathrm{lim}}}^{\infty }\mathrm{d}L_2𝒮_2(𝐱_1,𝐱_2|L_1,L_2)\rho _1^M(L_1)\rho _1^M(L_2).$$
(31)
If no luminosity segregation is present, $`𝒮_2(𝐱_1,𝐱_2|L_1,L_2)=\rho _2^S(𝐱_1,𝐱_2)/𝒩_2`$ and therefore:
$$\xi _R(r)=\xi (r).$$
(32)
If, on the other hand, the clustering of the galaxies does depend on the luminosities, the two–point correlation function is different between volume–limited samples of varying depths, and also differs from the two–point correlation function of all galaxies.
As an illustration we calculate $`\xi _R(r)`$ from volume–limited samples of the SSRS2 with increasing limiting depth $`R`$. Our results in Fig. 8 completely agree with the results reported by Willmer et al. (1998), showing a higher amplitude of $`\xi _R(r)`$ for the deeper volume–limited samples. See also the comprehensive investigations of Cappi et al. (1998) and Benoist et al. (1999). We used several estimators for the two–point correlation function (Kerscher, 1999), including the minus estimator shown in Fig. 8 and found that this behavior of the amplitude is independent of the estimator.
### 5.2 Faking luminosity segregation
In this section we illustrate the argument by Coleman & Pietronero (1992) who showed that there is a degeneracy between luminosity segregation determined with the standard method (Subsect. 5.1) and a fractal galaxy distribution. Indeed, a general inhomogeneous galaxy distribution can fake a sort of “luminosity segregation”. Here, we use a “fractal point set” as a simple, yet analytically tractable model for general inhomogeneous point distributions.
The argument is based on the scaling behavior of the number of points inside a sample, $`N(R)\propto R^D`$, for a fractal point set in a sample with linear extent $`R`$, where $`D`$ is the (correlation–)dimension. For a fractal point set the two–point correlation function behaves like
$$\xi _R(r)+1\propto R^{3-D}r^{D-3},$$
(33)
with an amplitude of $`\xi _R`$ depending on the extent of the sample (for details see Sylos Labini et al. 1998). We illustrate this in Fig. 8 showing that fractal correlations according to formula (33) can mimic a behavior of $`\xi _R`$ as observed in the galaxy data.
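For illustration, the following snippet (Python/NumPy; the constant prefactor of Eq. (33) is omitted, so only the relative amplitudes are meaningful) evaluates this scaling for a dimension $`D=2`$ and a series of limiting depths; the amplitude rises with $`R`$ although no luminosity segregation is involved:

```python
import numpy as np

def xi_fractal(r, R, D=2.0):
    """Fractal scaling of Eq. (33), up to an omitted constant prefactor."""
    return R ** (3.0 - D) * r ** (D - 3.0) - 1.0

r = np.logspace(0.0, 1.5, 4)            # separations in h^-1 Mpc
for R in (60.0, 80.0, 100.0, 120.0):    # limiting depths of the subsamples
    print(R, np.round(xi_fractal(r, R), 2))
```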
To summarize, the behavior of $`\xi `$ in a series of volume–limited samples can be explained either by a fractal point distribution, or by luminosity segregation, or by both. Hence $`\xi `$ alone does not seem to be a good tool to assess either of the two claims. Note that Pietronero’s argument is based on the assumption that no luminosity segregation is present.
### 5.3 Robustness of mark–correlation functions
In the preceding section we have seen that searching for luminosity segregation by employing the amplitude of $`\xi _R`$ may be uncertain. Now we show that the mark characteristics introduced in Subsect. 3.1 do not suffer from this degeneracy.
All the quantities we used to investigate luminosity and morphological segregation were defined using the average $`⟨f⟩_\mathrm{P}(r)`$ over a weight function $`f`$. With $`⟨f⟩_\mathrm{P}(r)`$ we look at the averages of some mark–dependent weight function $`f(m_1,m_2)`$, under the condition that the points holding the marks are separated by $`r`$. We do not investigate the spatial distribution of the points. As can be seen directly from Eq. (13), the spatial two–point correlations are “divided out”. Hence, quantities like $`⟨f⟩_\mathrm{P}(r)`$ are not only well–defined for homogeneous point distributions, but also give reliable results for inhomogeneous point distributions like fractals.
To illustrate this we use a “fractal point set” kindly provided by Alessandro Amici. This fractal is a three–dimensional realization of the random–$`\beta `$ model with a fractal dimension of two. On a randomly selected set of points from this fractal we distribute marks chosen randomly out of $`[0,1]`$. This resembles a volume–limited sample with no luminosity segregation. We estimate the mark–correlation functions using the estimator without boundary corrections. The function $`k_{mm}(r)`$ shown in Fig. 9 gives the correct result that no mark correlation is present. Hence, our methods give stable results even on such an inhomogeneous point distribution. This is also the case for all other functions and for any estimator considered (Appendix A).
Therefore, our results obtained from the SSRS2 galaxy survey discussed in Sect. 4 cannot be explained by a scale–invariant spatial distribution without luminosity segregation alone.
There is also a more technical advantage of our method: to estimate the correlation function $`\xi (r)`$ one has to employ boundary corrections. Quantities like $`k_{mm}`$ only use conditional probabilities and may be estimated without boundary corrections (see Appendix A), reducing the estimators’ variance.
## 6 The morphology–density relation
The morphology–density relation states that inside clusters, in regions with a high (surface) density of galaxies, the abundance of early–type galaxies is enhanced whereas the abundance of late–type galaxies is reduced (Dressler, 1980). This relation is very well established, and therefore it seems natural to ask whether the observed luminosity segregation can be explained by the morphology–density relation alone. In this section we present a number of reasons why this is not the case.
As a first test we discarded all galaxies in spherical regions with 1.5$`h^{-1}`$Mpc and also 3$`h^{-1}`$Mpc radius around the APM clusters (Dalton et al., 1997), and conducted an analysis similar to the one in Subsect. 4.1, restricted to the intersection of the SSRS2 and the APM cluster catalog. The mark correlation functions did not show any significant change. This may not be decisive, since only a few of the (rich) APM clusters are included; however, it supports our view that the observed luminosity segregation is not caused by clusters of galaxies alone.
But in the spirit of the morphology–density relation, one could try to explain the observed luminosity segregation in the following way: the two populations of galaxies, the early– and the late–type galaxies, cluster in different ways (which is, e.g., manifest in the morphology–density relation and in the observed morphology segregation, Sect. 4.4). If these classes show different average luminosities, the morphology–density relation will generate the luminosity segregation. This is the main idea behind the two–species model discussed below. A first indication that this kind of model is not able to explain luminosity segregation comes from the observation that both the early– and the late–type galaxies show very similar luminosity distributions within the volume–limited sample we considered.
We conduct additional tests of this idea, which allow for a further understanding of the luminosity segregation: we consider the two–species model in Subsect. 6.1 in more detail, and we investigate the early– and late–type galaxies separately in Subsect. 6.2; moreover, we look for morphology segregation in dim and luminous subsamples separately in Subsect. 6.3.
### 6.1 The two–species model
As already outlined above, in the two–species model we consider two subpopulations of galaxies, with different spatial clustering and a different mark distribution. Within each class there is no mark segregation. Thus this model explains in a very simple way how mark correlations arise from the spatial interplay of the two classes of galaxies. The subclasses will be formed by early– $`(e)`$ and late–type $`(l)`$ galaxies.
Let $`\rho _l`$, $`\overline{m}_l`$, $`V_l`$ denote the number density, the mean mark, and the variance of the marks of galaxies of type $`l`$, respectively, and similarly for subclass $`e`$. The one–point mark distributions are denoted by $`\rho _{1,e}^M(m)`$ and $`\rho _{1,l}^M(m)`$. The spatial (cross–) correlations are given by $`\xi _{ee}(r)`$, $`\xi _{ll}(r)`$, and $`\xi _{el}(r)`$ (symmetrically defined in $`e`$ and $`l`$, i.e., $`(\xi _{el}+\xi _{le})/2`$). We use the morphological type and the luminosity as components of a compound mark $`𝐦=\{t,m\}`$, where $`t\in \{e,l\}`$ denotes the morphological type and $`m`$ is the luminosity of the galaxy. The two–point properties within the two–species model are then given by:
$$\begin{array}{c}\rho _2^{SM}((𝐱_1,\{t_1,m_1\}),(𝐱_2,\{t_2,m_2\}))=\hfill \\ \hfill \delta _{t_1e}\delta _{t_2e}\rho _e^2\rho _{1,e}^M(m_1)\rho _{1,e}^M(m_2)(1+\xi _{ee}(r))+\delta _{t_1l}\delta _{t_2l}\rho _l^2\rho _{1,l}^M(m_1)\rho _{1,l}^M(m_2)(1+\xi _{ll}(r))\\ \hfill +\left(\delta _{t_1e}\delta _{t_2l}\rho _{1,e}^M(m_1)\rho _{1,l}^M(m_2)+\delta _{t_1l}\delta _{t_2e}\rho _{1,l}^M(m_1)\rho _{1,e}^M(m_2)\right)\rho _e\rho _l(1+\xi _{el}(r)).\end{array}$$
(34)
With $`q_l=\rho _l/(\rho _l+\rho _e)`$, $`q_e=1-q_l`$, the combined two–point correlation function is
$$1+\xi (r)=q_e^2(1+\xi _{ee}(r))+q_l^2(1+\xi _{ll}(r))+2q_eq_l(1+\xi _{el}(r)),$$
(35)
and using the definitions in Sect. 3.1.1 one may calculate the luminosity correlation functions for this specific model. We measured $`q_e`$, $`\xi _{ee}`$, as well as $`\overline{m}_e`$, $`V_e`$, etc. in the volume–limited sample with 100$`h^{-1}`$Mpc depth from the SSRS2. Using these quantities we calculated the mark–correlation functions for the two–species model. In Fig. 10 we compare the var function from the two–species model with the actually observed values (similar results are obtained for $`k_{mm}`$ and cov). Obviously, the two–species model is not able to explain the observed luminosity correlations. This shows that the spatial interplay between different morphological types, as suggested by the morphology–density relation, is only in part responsible for the observed luminosity segregation. A necessary ingredient is that luminosity segregation is already present in at least one of the subclasses (see the next section).
The results for the two–species model shown in Fig. 10 were obtained self–consistently from the empirically determined parameters as given by the division of the sample into early– and late–type galaxies. We may go further and treat the two–species model as a toy model with scale–invariant (cross–)correlation functions (e.g., $`\xi _{ee}\propto r^{-\gamma }`$) and free parameters $`\overline{m}_e`$ etc. to fit the data. However, we find that an acceptable qualitative description of the observed luminosity segregation in terms of this model is achieved only for highly unrealistic parameters of the two–species model.
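For $`k_{mm}`$ the prediction of the two–species model can be written down explicitly by integrating Eq. (34) over the marks (the derivation and all names below are ours; Python/NumPy):

```python
import numpy as np

def k_mm_two_species(q_e, mbar_e, mbar_l, xi_ee, xi_ll, xi_el):
    """k_mm(r) in the two-species model: within each class the marks are
    independent of position, so <m1 m2>_P is a mixture of the class mean
    marks weighted by the pair-type probabilities."""
    q_l = 1.0 - q_e
    num = (q_e ** 2 * (1 + xi_ee) * mbar_e ** 2
           + q_l ** 2 * (1 + xi_ll) * mbar_l ** 2
           + 2 * q_e * q_l * (1 + xi_el) * mbar_e * mbar_l)
    xi = (q_e ** 2 * (1 + xi_ee) + q_l ** 2 * (1 + xi_ll)
          + 2 * q_e * q_l * (1 + xi_el) - 1.0)       # Eq. (35)
    mbar = q_e * mbar_e + q_l * mbar_l
    return num / ((1.0 + xi) * mbar ** 2)
```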
### 6.2 Early–and late–type galaxies separately
As a second test, we split the 100$`h^{-1}`$Mpc volume–limited sample from the SSRS2 into two subsamples consisting of early– and late–type galaxies, respectively. Using the luminosity as the (continuous) mark, we look for luminosity segregation similarly to the investigations in Sect. 4.1. From Fig. 11 it is evident that both subpopulations show luminosity segregation, but the main contribution comes from the early–type galaxies. The late–type galaxies show a small signal in $`k_{mm}`$ only. Clearly, with this kind of analysis we do not pick up features intrinsic to the interplay between early– and late–type galaxies, which may add a further contribution to the observed luminosity segregation (Fig. 2).
### 6.3 The other way round?
So far, our results show clearly that the luminosity segregation is not a pure effect of the morphology segregation. To investigate the opposite case, where the morphology segregation is caused by the luminosity segregation, we split the volume–limited sample with 100$`h^{-1}`$Mpc depth into two equally sized luminosity classes, with dim and luminous galaxies, respectively. For each of these samples we calculate the conditional cross–correlations between early– and late–type galaxies. The strong (conditional) correlations $`C_{ee}(r)`$ of early–type galaxies on small scales are now only visible in the sample of luminous galaxies, confirming the trends reported by Willmer et al. (1998). The conditional anticorrelation indicated by the $`C_{ll}(r)`$ of the late–type galaxies on small scales is present in both subsamples but more pronounced in the sample of luminous galaxies (only $`C_{ee}(r)`$ is shown in Fig. 12).
This test does not allow very strong conclusions, since it is based on an ad–hoc division of the whole sample. A finer division is not feasible, since very few early–type galaxies will populate the subsamples. However, our results strengthen the interpretation that both sorts of mark correlations are irreducible. Neither is the luminosity segregation the source of morphological segregation nor is it the other way around. In particular, the luminous early–type galaxies cluster more strongly than all other galaxies.
## 7 Summary and Outlook
The investigation of luminosity and morphology segregation of galaxies has been a scientific task for many years. Our results allow for a new perspective and suggest that both the methodology and the physical interest should shift slightly.
Methodologically, we discussed luminosity and morphological segregation in the framework of marked point processes. This perspective provides us with a unifying view on morphology and luminosity segregation. Moreover, the mathematical theory of marked point processes provides us with test quantities and models to be compared with the data. In this line we discussed the mark–weighted conditional correlation functions. These functions are not only easy to estimate, but also offer a clear interpretation. They may be applied to a single volume–limited sample; a sequence of volume–limited samples is not necessary. As a consequence, they break the degeneracy between a fractal spatial structure and luminosity segregation. We suggest that the $`k_{mm}`$, var, and cov functions are of special interest for a first test on luminosity segregation. Since several bias models assume scale–dependent bias, we need quantities like $`k_{mm}`$, var, and cov which can unfold the scales at which mass or luminosity segregation is relevant. This is not possible by looking at the amplitude of the two–point correlation function $`\xi _R(r)`$ alone. Moreover, our method allows for a “built–in” significance test, by randomly re–shuffling the marks. The conditional cross–correlation functions seem to be useful if mark segregation has already been shown to be present and is to be understood more closely. However, they are based on a division of the whole sample into subpopulations, a division that has to be done carefully. The conditional mark–correlation functions are rather flexible. With the peculiar velocities or the orientations of galaxies treated as marks, the conditional mark correlation functions will allow for a fresh look at the pairwise velocity dispersion and at alignment effects. Our methods can easily be extended to higher–order correlations. In a forthcoming work we will study the mark correlations using higher–order statistics such as the $`J`$–functions (van Lieshout & Baddeley 1996, Kerscher 1998).
Concerning the physical results, we were able not only to assess luminosity segregation as well as morphological segregation; our perspective also allowed us to ask the question: what are the luminosity and morphological segregation like? Our main results obtained from the SSRS2 survey are:
* The average luminosity of pairs of galaxies and the fluctuations in the luminosity on each galaxy are enhanced for pairs closer than 15$`h^{-1}`$Mpc. Hence, luminosity correlations are scale–dependent, and they are significant even outside clusters of galaxies. On scales larger than 15$`h^{-1}`$Mpc our results indicate that neither luminosity nor morphological segregation is present.
* The luminosities of galaxies in pairs closer than 3$`h^{-1}`$Mpc show an increased covariance – close galaxies preferably have similar luminosities.
* The luminosity segregation is not compatible with the random field model. Thus, the luminosity does not trace an underlying independent random field. The luminosity of a galaxy depends on the local clustering and on interactions with other galaxies.
* There is an interesting feature, a small peak, in $`k_{mm}`$, var and cov for galaxy pairs with a separation of approximately $`10h^{-1}\mathrm{Mpc}`$, which currently lacks an explanation.
* We observe morphological segregation between early– and late–type galaxies on scales smaller than 10$`h^{-1}`$Mpc. This effect is mainly due to highly luminous galaxies. Especially the luminous early–type galaxies seem to play an important role, both for luminosity and morphology segregation.
* The importance of early–type galaxies for luminosity segregation is confirmed by our analysis of the IRAS samples. These infrared samples exhibit a deficit in early–type galaxies and consequently show no luminosity segregation.
* An inhomogeneous, scale–invariant galaxy distribution without luminosity segregation cannot account for the signal seen in $`k_{mm}`$, var, and cov. The lowered correlation of the dim galaxies and the enhanced correlation of the luminous galaxies we found explain at least in part why the amplitude of the correlation function rises if deeper, i.e., more luminous, galaxy samples are considered.
* With several independent tests we could show that it is not possible to explain the observed luminosity segregation from the morphology–density relation alone.
Nevertheless, a couple of questions remain open.
Concerning the data, it seems important to confirm our results using other galaxy surveys. Also the influence of redshift space distortions and of galaxy clusters should be investigated beyond the simple error–estimates presented in Subsect. 4.5 and Sect. 6.
Our methods are directly applicable to volume–limited samples, similar to the usual way of assessing luminosity segregation, where, however, one needs a series of volume–limited samples. Using models for the conditional mark density $`ℳ_2`$ or the mark–correlation functions one may determine the parameters of such models directly from magnitude–limited surveys. Similarly, the influence of mark segregation on the two– and $`N`$–point correlations estimated from magnitude–limited surveys can be quantified.
Closely related is the question how strongly the deprojected two– and $`N`$–point correlation functions, determined from 2–dimensional galaxy catalogs, are influenced by luminosity segregation. With models for the mark–correlations a refined Limber’s equation may be constructed (see e.g., Gardini et al. 1999). Both the concerns about magnitude–limited surveys and the deprojection formulas will be addressed in future work.
In this article we focused on clarifying the mathematical framework, on the data–analysis, and on the interpretation of the observed luminosity and morphological segregation. The relation to the peak–formalism (Bardeen et al., 1986) and other biasing schemes will be investigated in future work. Understanding the luminosity distribution on the galaxies from dynamical models is the major goal.
We thank Thomas Buchert, Niv Drory, Ulrich Hopp, Roberto Saglia, Dietrich Stoyan, Alex Szalay, Istvan Szapudi and Herbert Wagner for valuable discussions and Alessandro Amici for kindly providing the fractal point set used in Subsect. 5.3. CB and MK acknowledge support from the Sonderforschungsbereich 375 für Astroteilchenphysik der DFG. MK acknowledges support from the NSF grant AST 9802980.
## Appendix A Estimators for mark–correlation functions
In this section, we discuss estimators for the weighted mark–correlation functions. For this purpose, let $`\{(𝐱_i,m_i)\}_{i=1}^N`$ denote the $`N`$ empirical data points $`𝐱_i`$ inside the sample $`𝒟`$ with their marks $`m_i`$. We prefer estimators which are unbiased and show small variances. For a detailed discussion of two–point estimators see Kerscher (1999). One basic idea is to construct estimators for $`⟨f⟩_\mathrm{P}(r)`$ from a combination of estimators for the numerator and for the denominator of Eq. (13). We first discuss estimators of this type, but then introduce a different estimator which does not use any boundary corrections. It turns out that, in our case, this estimator is unbiased and is recommended by its simplicity and low variance.
### A.1 Construction of the estimators
To calculate the correlation functions in bins of width $`\mathrm{\Delta }`$, we use the indicator function of a set $`A`$
$$\text{1l}_A(x)=\{\begin{array}{cc}1\hfill & \text{ if }x\in A\hfill \\ 0\hfill & \text{ otherwise },\hfill \end{array}$$
(A1)
and the reduced sample window $`𝒟_r=\{𝐱\in 𝒟|d(𝐱,\partial 𝒟)>r\}`$ shrunken by $`r`$.
Using these definitions, the ratio–unbiased minus estimator $`\widehat{⟨f⟩}_\mathrm{P}^M(r)`$ for the weighting function $`f(m_1,m_2)`$ (compare Eq. (13)) is simply
$$\widehat{⟨f⟩}_\mathrm{P}^M(r)=\frac{\sum _{i\ne j}^N\text{1l}_{𝒟_r}(𝐱_i)\text{1l}_{[r,r+\mathrm{\Delta }]}(|𝐱_i-𝐱_j|)f(m_i,m_j)}{\sum _{i\ne j}^N\text{1l}_{𝒟_r}(𝐱_i)\text{1l}_{[r,r+\mathrm{\Delta }]}(|𝐱_i-𝐱_j|)},$$
(A2)
where the indicator function $`\text{1l}_{𝒟_r}(𝐱_i)`$ assures that the point $`𝐱_i`$ is further than $`r`$ from the boundary (for details see Kerscher 1999).
In the minus estimator the window is effectively shrunken, resulting in an increased variance. By contrast, the following estimator uses all point pairs $`𝐱_i,𝐱_j`$, however weighted with a geometrical weight $`\omega (𝐱_i,𝐱_j)`$. Such weighting schemes lead to ratio–unbiased estimators for the two–point correlation function (for details see Stoyan et al. 1995). The straightforward generalization of these concepts results in ratio–unbiased estimators for $`⟨f⟩_\mathrm{P}`$:
$$\widehat{⟨f⟩}_\mathrm{P}^\omega (r)=\frac{\sum _{i\ne j}^N\text{1l}_{[r,r+\mathrm{\Delta }]}(|𝐱_i-𝐱_j|)\omega (𝐱_i,𝐱_j)f(m_i,m_j)}{\sum _{i\ne j}^N\text{1l}_{[r,r+\mathrm{\Delta }]}(|𝐱_i-𝐱_j|)\omega (𝐱_i,𝐱_j)}.$$
(A3)
Using the weight
$$\omega (𝐱_i,𝐱_j)=\frac{|𝒟|}{|𝒟\cap 𝒟_{𝐱_i-𝐱_j}|},$$
(A4)
with $`𝒟_𝐱`$ denoting the window translated by $`𝐱`$, we arrive at an estimator $`\widehat{⟨f⟩}_\mathrm{P}^\omega (r)`$ suggested by Stoyan & Stoyan (1994); the denominator of Eq. (A4) is the set covariance of the window. In full analogy to the estimators for the two–point correlation function, other weights, like the Ripley (Rivolo) weight or the isotropized version of Eq. (A4), can be used (for details see Kerscher 1999).
Instead of estimating $`⟨f⟩_\mathrm{P}`$ with unbiased estimators for the numerator and for the denominator in Eq. (13) separately, we suggest simply using the ratio
$$\widehat{⟨f⟩}_\mathrm{P}(r)=\frac{\sum _{i\ne j}^N\text{1l}_{[r,r+\mathrm{\Delta }]}(|𝐱_i-𝐱_j|)f(m_i,m_j)}{\sum _{i\ne j}^N\text{1l}_{[r,r+\mathrm{\Delta }]}(|𝐱_i-𝐱_j|)}.$$
(A5)
This is motivated by the observation that $`⟨f⟩_\mathrm{P}`$ is calculated from the marks under the condition that the two points are separated by $`r`$. Indeed, we are not investigating the spatial distribution of the points, but “divide spatial two–point properties out”. Unfortunately, the unbiasedness of this estimator cannot be proven with the common methods used in the theory of point processes, but it seems intuitively clear that this estimator is unbiased. In Sect. A.2 we show this using a numerical example; we illustrate furthermore that this estimator has preferable variance properties (this was also observed by Capobianco & Renshaw 1998).
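As an illustration of the boundary handling, the following sketch implements the minus estimator of Eq. (A2) for the simple case of a cuboid window (Python/NumPy; names ours):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def minus_estimator(points, marks, f, r, dr, box):
    """Minus estimator of Eq. (A2) in the cuboid window [0, box]^3.

    Only pairs whose first point lies in the reduced window D_r, i.e.
    farther than r from the boundary, contribute.  f(m_i, m_j) must
    accept a scalar first and an array second argument.
    """
    dist_to_boundary = np.minimum(points, box - points).min(axis=1)
    in_D_r = dist_to_boundary > r
    dist = squareform(pdist(points))     # ordered-pair distance matrix
    num = den = 0.0
    for i in np.flatnonzero(in_D_r):
        sel = (dist[i] >= r) & (dist[i] < r + dr)
        sel[i] = False                   # exclude the pair i = j
        num += np.sum(f(marks[i], marks[sel]))
        den += np.count_nonzero(sel)
    return num / den if den > 0 else np.nan
```

With $`f(m_1,m_2)=m_1m_2`$ and a division by $`\overline{m}^2`$ this yields $`k_{mm}(r)`$; dropping the restriction to $`𝒟_r`$ gives the estimator of Eq. (A5).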
### A.2 Comparison of the estimators
We use the marked Poisson process discussed in Sect. 3.2 to investigate numerically the properties of these estimators for the continuous mark–correlation functions. The sample means of the estimators coincide with the theoretical mean value for all estimators; thus, empirically, all estimators are unbiased. In Fig. 13 the variances of $`\gamma (r)/V`$ for the different estimators are shown. The variance of the minus estimator becomes unacceptably large, especially on large scales. The estimators using a weighting with the set covariance or the isotropized set covariance show the same small variance, even smaller than the variance of the estimator using a weighting of Rivolo (Ripley) type. A detailed inspection shows that the estimator using no boundary correction typically gives the smallest variance. A qualitatively similar behavior is found for the other mark–correlation functions. Therefore, and for reasons of computational simplicity, we mainly apply this estimator. We suggest using it for all mark–weighted correlation functions as the natural and unbiased choice.
# Cut-off radii of galactic disks
## 1 Introduction
Although cut-off radii of spiral galaxies have been known for about 20 years, no unique physical explanation has been given for this observational phenomenon. They were already mentioned by van der Kruit (vdk79 (1979)), who stated, based on photographic material, that the outer parts of disks of spiral galaxies ”do not retain their exponential light distribution to such faint levels”, whereas the exponential behaviour of the radial light distribution for the inner part was well accepted (de Vaucouleurs devau59 (1959), Freeman free (1970)). For three nearby edge-on galaxies he claimed that the typical radial scalelength $`h`$ steepens from 5 kpc to about 1.6 kpc at the edge of the disk. This is confirmed by modern deep CCD imaging (Abe et al. abe (1999), Fry et al. fry (1999), Näslund & Jörsäter naes (1997)). In a fundamental series of papers van der Kruit & Searle (1981a , 1981b , 1982a , 1982b ) determined a three dimensional model for the luminosity density of the old disk population taking into account these sharp truncations at the cut-off radius $`R_{co}`$. They applied their model of a locally isothermal, selfgravitating, and truncated exponential disk to a sample of seven edge-on galaxies and found that all disks show a relatively sharp cut-off where the scalelength $`h`$ suddenly drops below 1 kpc, starting at radii of $`(4.2\pm 0.5)h`$. The cut-off radius of edge-on galaxies is detected at levels of 24-25 mag/arcsec$`^2`$, which is about 2-3 mag brighter than for face-on disks due to the integration along the line of sight. Therefore van der Kruit & Shostak (vdkshos82 (1982)) and Shostak & van der Kruit (shos (1984)) quote the only known cut-offs in the literature for face-on galaxies. In addition to the much lower brightness one has to deal with intrinsic deviations from the circular symmetry of the disk, for example from the young stellar population, which are hidden by an azimuthally averaged profile. In a subsequent paper van der Kruit (vdk88 (1988)) stated that out of the 20 face-on galaxies observed by Wevers et al. (wev (1986)) only four did not show any sign of a drop-off, as judged from the last three contours. Barteldrees & Dettmar (bdold (1989)) confirmed for the first time the existence of these truncations for a larger sample of edge-on galaxies using CCD surface photometry, refining the previous photographic measurements.
These truncations do not mark the boundary of the galactic baryonic mass distribution, but such ’optical edges’ suggest dynamical consequences for the interpretation of observed rotation curves (Casertano caser (1983)), as well as for the explanation of warped disks (Bottema bott (1995)). Their sharpness restricts the radial velocity dispersion at the edge of the disk (van der Kruit & Searle 1981a ), and will therefore have important implications for viscous disk evolution (Thon & Meusinger thon (1998)). According to Zhang & Wyse (zhang (2000)) the disk cut-off radii constrain the specific angular momentum in a viscous galaxy evolution scenario.
In this letter we report the largest sample of well defined cut-off radii for edge-on galaxies derived by CCD surface photometry. Our sample (Pohlen et al. pohl (2000), Paper II) comprises 31 galaxies, including the 17 galaxies of Barteldrees & Dettmar (bd (1994), hereafter Paper I). This enables us to derive first statistical conclusions and to determine general correlations with other characteristic galaxy parameters, as a step towards a physical model explaining the observed phenomenon.
## 2 Observations and reduction
Sample selection, observations, and data reduction are described in detail in Paper I and II, and will be repeated only briefly here. We have compiled a homogeneous set of 31 galaxies with well defined models of their three-dimensional disk luminosity distribution, out of our sample of about 60 highly inclined disk galaxies, selected from the UGC (Nilson ugc (1973)) and ESO-Lauberts & Valentijn (lv (1989)) catalogs. The data were obtained at the ESO/La Silla 2.2m and the Lowell Observatory 42-inch telescopes. Images were taken either in Gunn g, r, i, or Johnson R filters, with resulting pixel scales of 0.36<sup>′′</sup> and 0.7<sup>′′</sup>, respectively. After standard reduction, the images were rotated to the major axis of the disk. Although most of the images were taken during non-photometric nights, we have tried to perform a photometric calibration for each image by comparing simulated aperture measurements with published integrated aperture data, resulting in the best possible homogeneous calibration for the whole sample. The resulting typical values for the limiting surface brightness, measured as a three-sigma deviation of the background, are: $`\mu _g\approx 25`$ mag/arcsec$`^2`$, $`\mu _{R,r}\approx 24.5`$ mag/arcsec$`^2`$, and $`\mu _i\approx 23.5`$ mag/arcsec$`^2`$. In order to obtain absolute values of the determined structural parameters, we estimated the distances of our galaxies according to the Hubble relation ($`H_0=75`$ km s<sup>-1</sup>Mpc<sup>-1</sup>) using published heliocentric radial velocities corrected for Virgocentric infall.
## 3 Analysis
### 3.1 Disk model and fitting
Our model, as described in detail in Paper I and II, for the three-dimensional luminosity distribution for galactic disks is based upon the fundamental work of van der Kruit and Searle (1981a ):
$$\widehat{L}(R,z)=\widehat{L}_0\mathrm{exp}\left(-\frac{R}{h}\right)f_n(z,z_0)\mathrm{H}(R_{co}-R)$$
(1)
$`\widehat{L}_0`$ is the central luminosity density in units of \[$`L_{\odot }`$ pc<sup>-3</sup>\], $`R`$ and $`z`$ are the radial resp. vertical axes in cylindrical coordinates, $`h`$ is the radial scalelength, and $`z_0`$ the scaleheight. $`f_n(z,z_0)`$ describes three different fitting functions for the vertical distribution: the exponential, the sech, and the physically motivated isothermal (sech<sup>2</sup>) case (van der Kruit vdk88 (1988)). $`R_{co}`$ is the cut-off radius, outside of which the stellar luminosity density is assumed to be zero, mathematically expressed by a Heaviside function H$`(x_0-x)`$. These radii are defined at the position where the radial profiles bend nearly vertically into the noise, whereby a mean value is taken for the two different sides.
Depending on the inclination angle $`i`$, we numerically integrate this 3D-model along the line of sight and compare the two-dimensional result with the observed CCD image, leading to six free fitting parameters: the inclination $`i`$, the central luminosity density $`\widehat{L}_0`$, the scalelength $`h`$ and -height $`z_0`$, the cut-off radius $`R_{co}`$, and the function for the z-distribution $`f_n(z,z_0)`$. After our discussion of two different fitting methods in Paper II, we finally used our implemented “downhill simplex method” (Nelder & Mead, nm (1965)) to minimize the difference between the model and the observed disk. The possible influence of the neglected dust distribution on these parameters is estimated in Paper II.
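For the special case of an exactly edge-on disk ($`i=90^{\circ }`$) with the isothermal sech<sup>2</sup> vertical profile, the line-of-sight integration of Eq. (1) separates into a radial integral and a vertical factor. A minimal sketch of this projection (Python/NumPy; all names are ours, and the actual fitting code additionally handles arbitrary inclinations and the other vertical functions $`f_n`$):

```python
import numpy as np

def edge_on_disk(L0, h, z0, R_co, x, z, n_steps=200):
    """Projected luminosity of the truncated disk of Eq. (1), seen exactly
    edge-on, with the isothermal sech^2(z/z0) vertical profile.

    x, z : 1d arrays of projected major-/minor-axis coordinates [kpc].
    The integral along the line of sight y runs through the cut-off
    cylinder R = sqrt(x^2 + y^2) < R_co only.
    """
    image = np.zeros((len(z), len(x)))
    vertical = 1.0 / np.cosh(z / z0) ** 2       # factors out for i = 90 deg
    for k, xk in enumerate(x):
        if abs(xk) >= R_co:
            continue                            # beyond the cut-off radius
        y_max = np.sqrt(R_co ** 2 - xk ** 2)    # half chord through the disk
        y = np.linspace(-y_max, y_max, n_steps)
        R = np.hypot(xk, y)
        los = np.exp(-R / h).sum() * (y[1] - y[0])  # Riemann sum along y
        image[:, k] = L0 * los * vertical
    return image
```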
## 4 Results
### 4.1 Distribution of cut-off radii
Understanding the phenomenon of cut-off radii in galactic disks requires, as an essential step, a statistical study of galaxies covering the Hubble sequence. Figure 1 shows, as already suggested by Barteldrees & Dettmar (bdold (1989)) for a smaller sample of 20 galaxies, that the distance-independent ratio of cut-off radius to radial scalelength is significantly lower than derived from the often cited sample of van der Kruit & Searle (1982a ) with 7 galaxies. They reported a mean value of $`R_{co}/h=4.2\pm 0.5`$ (ranging from 3.4 to 5.3), whereas our sample gives a ratio of $`R_{co}/h=2.9\pm 0.7`$ (1.4–4.4), even below their minimal value. As obvious from Fig. 1 this difference is not caused by the larger range of Hubble types covered by our sample. The estimated error for this ratio due to the selection of the best fitting model described in Paper II is $`\pm 0.6`$ and is of the same order as the quoted standard deviation. As shown in Paper II for two different dust distributions with values observed by Xilouris et al. (xil99 (1999)), the influence of the neglected dust on our fitting process will be an overestimation of the scalelength $`h`$, whereas $`R_{co}`$ is unaffected. For the worst case, defined by the largest measured values for $`h_d/h_{\star }`$, $`z_d/z_{\star }`$, and $`\tau _R`$, we find that this will change our values for $`R_{co}/h`$ by $`+0.5`$. Applied to the mean, we are in this case still $`0.8`$ below the value of van der Kruit & Searle (1982a ). We do not find a correlation between $`R_{co}/h`$ and the Hubble type, although it should be mentioned that in general for galaxies later than Scd the fitting process is strongly affected by intrinsic variations, e.g. individual bright HII-regions, which make it impossible to fit our simple symmetric model. On the other hand, some early-type galaxies and particularly lenticulars do not show any evidence for a cut-off at all. This is already suggested by van der Kruit (vdk88 (1988)) observing some early-type face-on galaxies and will be discussed in detail in a forthcoming paper.
Figure 2 shows a possible correlation between $`R_{co}/h`$ and the scalelength in absolute units: disks that are large in terms of their scalelength $`h`$ are comparatively short in terms of their cut-off radii. Together with the fact that the cut-off occurs, within the errors of $`\pm 0.5`$mag, at nearly the same surface brightness level, this can be explained by a correlation between the central surface brightness and the scalelength of the galaxy, recently proposed by Scorza & van den Bosch (scovdb (1998)) for galactic disks of different sizes.
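This connection can be made quantitative with elementary photometry: for an exponential disk the face-on profile is $`\mu (R)=\mu _0+1.0857\,R/h`$, so a truncation at a fixed brightness level ties $`R_{co}/h`$ directly to the central surface brightness $`\mu _0`$. The sketch below uses illustrative values only (not our fit results, and the assumed $`\mu _0`$–$`h`$ pairs are hypothetical) to show how a $`\mu _0`$–$`h`$ correlation induces the trend of Fig. 2.

```python
import numpy as np

def rco_over_h(mu0, mu_co=26.5):
    """For an exponential disk, mu(R) = mu0 + 2.5*log10(e)*R/h, so a
    truncation at fixed mu_co gives R_co/h = (mu_co - mu0)/1.0857."""
    return (mu_co - mu0) / (2.5 * np.log10(np.e))

# illustrative mu0-h pairs: larger disks with fainter centers
for h, mu0 in [(3.0, 21.0), (6.0, 22.0), (12.0, 23.5)]:
    print(f"h = {h:4.1f} kpc, mu0 = {mu0:.1f} mag/arcsec^2 -> "
          f"R_co/h = {rco_over_h(mu0):.2f}")
```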
### 4.2 Comparison with literature
For the 30 galaxies with known radial velocities we find scalelengths from 3.1 up to 19.7 kpc with a median of 6.6 kpc. Van der Kruit (vdk87 (1987)) determines, for a diameter-limited sample of 51 galaxies, scalelengths in the range $`0.7`$–$`9.2`$ kpc with the maximum of the distribution at about 3 kpc. De Jong (dejong (1996)) derives for his sample of 86 face-on galaxies, transformed to our $`H_0`$, a range of $`1.0`$–$`14.4`$ kpc with a median around $`3.0`$ kpc, whereas Courteau (cour (1996)) finds for 290 Sb–Sc galaxies, also reduced to our $`H_0`$, a range of $`0.5`$–$`9.6`$ kpc with a maximum at 3.9 kpc. In agreement with de Jong (dejong (1996)) we do not find a correlation of the scalelength with the Hubble type.
We find cut-off radii from $`11.1`$ to $`34.5`$ kpc with a median at 20.2 kpc, compared to the only available sample of cut-off radii, that of van der Kruit & Searle (1982a ), with $`7.8`$–$`24.9`$ kpc for their 7 investigated galaxies. Although we do not find a tight correlation between catalogued surface brightness radii, e.g. $`D_{25}`$, and our cut-off radii, the former can be used to compare the sizes of the galaxies within our sample. Rubin et al. (rubin (1980)) study 21 Sc galaxies and quote radii, characterized by the radius of the $`D_{25}`$ contour and reduced to our $`H_0`$, of 81.3 kpc and 35.3 kpc for the two largest ones, while Romanishin (roma (1983)) finds values of $`30`$–$`73`$ kpc for 107 intrinsically large spiral galaxies.
We find a clear correlation between the determined cut-off radius and the distance of the galaxy. This implies that we pick intrinsically large galaxies at larger distances, due to our selection criterion, which is based on the angular diameter matching the field of view.
### 4.3 Comparison with the Milky Way
It is of particular interest to compare our statistical result with the structural parameters derived for the Milky Way. Robin et al. (robin (1992)) as well as Ruphy et al. (ruphy (1996)) determine the radial structure of the galactic disk with a synthetic stellar population model, using optical and NIR star counts, respectively. They confirm a sharp truncation of the old stellar disk at $`14\pm 0.5`$ kpc and $`15\pm 2`$ kpc, respectively. Freudenreich (freu (1998)), fitting a model of the old galactic disk to the NIR data obtained from the survey of the DIRBE experiment, also confirms an outer truncation of the disk at about $`12.4\pm 0.1`$ kpc. The results of both methods depend directly on the adopted distance to the galactic center ($`R_0=8.5\pm 1`$ kpc). These values are in agreement with the findings of Heyer et al. (heyer (1998)), who measure a sharp decline in the CO mass surface density and conclude that the molecular disk is effectively truncated at $`R=13.5`$ kpc.
In contrast to earlier investigations (van der Kruit vdk86 (1986), Lewis & Freeman lewis (1989), Nikolaev & Weinberg nik (1997)), which placed the Milky Way scalelength around $`4`$–$`5.5`$ kpc, Robin et al. (robin (1992)), Ruphy et al. (ruphy (1996)), and Freudenreich (freu (1998)) quote significantly lower scalelengths of $`2.5\pm 0.3`$ kpc, $`2.3\pm 0.1`$ kpc, and $`2.59\pm 0.02`$ kpc, respectively. This leads to values of $`5.6\pm 0.5`$, $`6.5\pm 1.2`$, and $`4.8\pm 0.1`$ for $`R_{co}/h`$. Whereas the first two values are significantly higher than any value found in our sample (even the highest value of van der Kruit & Searle is only 5.3), the latter determination by Freudenreich is consistent with our highest value of 4.4 within the errors.
If the Milky Way is a ’typical’ galaxy with $`R_{co}/h=2.9`$, the expected scalelength is $`h\approx 4.1`$ kpc for $`R_{co}\approx 12`$ kpc.
### 4.4 Comparison with models
Only a few theoretical models in the literature address a physical description of the origin of cut-off radii.
Starting from a basic picture of galaxy formation with a rotating protocloud, Seiden et al. (seiden (1984)) explain several properties of galactic disks within their framework of a stochastic, self-propagating star-formation theory (SSPSF). The crucial point is that they assume a $`1/R`$ dependence instead of an exponential law for the total surface density. In this case they show that a feature similar to a cut-off radius automatically appears in the radial profile and is directly linked to the scalelength. This is in contrast to Fig. 2, where $`R_{co}`$ and $`h`$ vary independently.
Van der Kruit & Searle (1981a ) proposed that, within a scenario of slow disk formation (Larson larson (1976)), this radius might be the one at which the disk formation time equals the present age of the galaxy. Such isolated, slow evolution contradicts recent models that prefer interaction and merging as drivers of galaxy evolution (Barnes barnes (1999)).
Later, van der Kruit (vdk87 (1987)) proposed a working hypothesis, already including some of the currently accepted ingredients of galaxy formation, to explain the truncation as a result of the formation process. Galactic disks develop from collapsing, rotating proto-clouds. After the dark matter has settled into an isothermal sphere, initial star formation in the center builds up a bulge component, and the remaining material settles dissipatively in gaseous form into a flat disk under conservation of specific angular momentum. This leads to a constant value of $`R_{co}/h`$ of 4.5, which is in contrast to our observations.
In a recent paper on galaxy formation and viscous evolution, Zhang & Wyse (zhang (2000)) additionally consider a self-consistent description of the disk-halo system by dropping the assumption of a static halo, and find that the disk cut-off radii indeed constrain the specific angular momentum.
Kennicutt (kenni (1989)) shows, analysing HI, CO, and H<sub>α</sub> data for a sample of 15 face-on spiral galaxies, that star formation stops below a critical threshold value associated with large-scale gravitational instabilities. Using the critical dynamical gas density $`\mathrm{\Sigma }_{\mathrm{crit}}`$ for a thin, rotating, isothermal gas disk proposed by Toomre (toom (1964)), he observes the abrupt decrease in star formation at the radius where the measured gas density drops below $`\mathrm{\Sigma }_{\mathrm{crit}}`$. In the case of NGC 628 this radius coincides with the $`R_{co}`$ determined by Shostak & van der Kruit (shos (1984)).
Although it is still unknown whether the cut-off radius is an evolutionary phenomenon or has its origin in the galaxy formation process, a star-formation threshold at the ’optical edge’ seems a promising approach to this problem (Elmegreen & Parravano elm (1994), and references therein; Ferguson et al. ferg (1998)). We will pursue this in the future by enlarging the sample with a better-defined selection criterion that also takes the environment into account.
# Features of Nucleosynthesis and Neutrino Emission from Collapsars
## I Introduction
It is an important development that the hypothesis that at least a part of the gamma-ray bursts (GRBs) comes from hypernova (HN) explosions is now being supported by observations. A physical connection between GRB 980425 and SN 1998bw was reported for the first time: an optical transient was discovered within the BeppoSAX Wide Field Camera error box of GRB 980425, and it was then shown that this transient can be interpreted as the light curve of SN 1998bw. The explosion energy of SN 1998bw was estimated to be as high as (20-50)$`\times 10^{51}`$ erg as long as the explosion is assumed to be spherically symmetric; this is the reason why SN 1998bw is called a HN. The late afterglow of GRB 970228 also suggests a physical connection between GRBs and HNe: the optical light curve and spectrum of the late afterglow of GRB 970228 are well reproduced by those of SN 1998bw transformed to the redshift of GRB 970228. The afterglow of GRB 980326 is likewise believed to be evidence for the GRB/HN connection, for the same reason.
If we believe that a part of the GRBs comes from explosions of massive stars, the explosion must be jet-induced, because a spherical explosion model has difficulty avoiding the baryon contamination problem. In fact, some observations of GRBs are interpreted as evidence for jet-induced explosions. For example, the breaks in the rate of decline of several afterglows can be explained by a beaming effect. The light curve and spectrum of SN 1998bw also seem to suggest a jet-induced explosion. There are also some excellent numerical simulations of the jet-induced explosion of massive stars whose aim is to reproduce the fireball, although the fireball has not been reproduced yet.
Here we must note the following two points: (i) it is not established that all GRBs come from HNe; (ii) the explosion energy of SN 1998bw may be small if the explosion is jet-induced. Taking these points into consideration, we can classify the relations among GRBs, SNe, and HNe as shown in Figure 1. For example, region (a)/(d) contains HNe which did not/did generate GRBs. We note that HN $`\cap `$ SN $`=\varphi `$ by definition. Here we define a SN as the explosion of a massive star whose total explosion energy is about $`10^{51}`$ erg, and a HN as the explosion of a massive star whose total explosion energy is significantly larger than $`10^{51}`$ erg. As for region (b), other systems, such as merging neutron stars, may belong to it.
One of the most famous models for producing a GRB from the death of a massive star is the collapsar model. The definition of a collapsar is given as a massive star whose iron core has collapsed to a black hole that continues to accrete at a very high rate. Woosley also pointed out that there will be two types of collapsars. In one (the type I collapsar) the central core immediately forms a black hole with an accretion disk. In the other (the type II collapsar) the central core at first forms a neutron star, which then collapses to a black hole with an accretion disk due to continuous fallback. In both types, a strong jet, which is required to produce a GRB, is generated around the polar region by the pair annihilation of neutrinos coming from the accretion disk and/or by MHD processes. The remnants of collapsars will belong to regions (a), (b), (c), and (d) in Figure 1. When the explosion energy of a collapsar is small, its remnant will be classified as a SNR. When a hydrogen envelope exists, a collapsar cannot produce a GRB.
Here we note that there are no observations that directly support the collapsar scenario. This situation contrasts with that of the collapse-driven SN scenario, which is supported by the detection of neutrinos at Kamiokande and IMB. Thus we present in this study two observable indicators that reflect the mechanism of collapsars; such observations will clearly establish the difference between collapsars and collapse-driven SNe. These indicators are the products of explosive nucleosynthesis and the neutrino emission. We discuss these essential features in the following sections, and we also discuss the possibility of detecting them, taking the event rate into consideration.
In section II we consider the explosive nucleosynthesis in the collapsar model. The luminosity and spectrum of the neutrinos from collapsars are shown in section III. A summary and discussion are presented in section IV.
## II Features of Remnants of Collapsars
If we believe the collapsar model, the system must be highly asymmetric in order to generate a fireball. Thus it is natural to expect that the products of explosive nucleosynthesis depend on the zenith angle (see details below). We therefore consider the detectability of this signature in this section.
Here we must note that an asymmetric explosion can also occur in a collapse-driven supernova as long as the effects of rotation and/or magnetic fields are taken into consideration. As a result, the products of explosive nucleosynthesis also depend on the zenith angle in such an asymmetric collapse-driven supernova. This means that when we find an asymmetric SNR, we cannot easily tell whether it is the remnant of a collapsar or of a rotating collapse-driven supernova. In order to avoid this problem, we should search for hypernova remnants (HNRs), whose explosion energies cannot be attained in the delayed-explosion scenario of collapse-driven supernovae. Thus we consider the detectability of HNRs in this section.
In truth, the products of explosive nucleosynthesis in collapsars are not known exactly, and there are many possibilities. For example, it has been pointed out that $`{}_{}{}^{56}\mathrm{Ni}`$ is synthesized by the wind blowing off the accretion disk in a type I collapsar as long as the disk viscosity is set high. In those simulations, the outflow containing much of the $`{}_{}{}^{56}\mathrm{Ni}`$ moves at 15 to 40 degrees off axis. However, the region where most of the $`{}_{}{}^{56}\mathrm{Ni}`$ is contained may instead be around the polar region in a type I collapsar, as a result of explosive nucleosynthesis behind the strong jet. This picture resembles the situation in the jet-like explosion of a collapse-driven supernova. On the other hand, the chemical composition of the remnant of a type II collapsar may be spherically symmetric, because the jet is launched too late to cause explosive nucleosynthesis.
This situation contrasts with that of explosive nucleosynthesis in SNe, where the results of numerical calculations for collapse-driven SNe have been compared with observations very carefully. Observations of HNRs are therefore necessary in order to determine which model is realistic and which is not. Such observations may also shed light on the occurrence frequency of type I collapsars relative to type II collapsars.
Here we estimate the chance probability of finding nearby HNRs whose chemical composition can be resolved by the latest X-ray telescopes, such as ESA’s X-ray Multi-Mirror ($`XMM`$) and Chandra ($`AXAF`$) satellites, whose spatial resolution is of order 1 arcsec.
First we consider the event rate of HNe in a galaxy. If the event rate of GRBs is equal to that of HNe, the estimated HN rate becomes ($`10^{-6}`$–$`10^{-8}`$) $`\mathrm{yr}^{-1}`$ per galaxy. If we take the beaming effect into account, the HN rate becomes larger than the observed GRB rate. On the other hand, the HN rate can be estimated to be $`\sim 10^{-3}`$ $`\mathrm{yr}^{-1}`$ per galaxy when we assume that the slope of the initial mass function is -1.35, that the maximum mass of a star is 50$`M_{\odot }`$, that (10-30)$`M_{\odot }`$ stars explode as collapse-driven SNe, that (30-50)$`M_{\odot }`$ stars explode as HNe, and that the collapse-driven SN event rate is $`10^{-2}`$ $`\mathrm{yr}^{-1}`$ per galaxy. This is an upper limit for the HN event rate, because all massive stars in the range (30-50)$`M_{\odot }`$ are assumed to explode as HNe. We therefore consider the HN rate to lie in the range ($`10^{-3}`$–$`10^{-8}`$) $`\mathrm{yr}^{-1}`$ per galaxy.
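This upper limit follows from simple bookkeeping with the Salpeter IMF; a minimal sketch of the arithmetic (the mass ranges and SN rate are exactly the assumptions stated above):

```python
def imf_number(m_lo, m_hi, slope=-1.35):
    """Relative number of stars in [m_lo, m_hi] for a Salpeter IMF:
    integrating dN/dm ~ m^(-2.35) gives N ~ m^(-1.35), up to a sign
    and normalization that cancel in the ratio."""
    return m_lo**slope - m_hi**slope

n_sn = imf_number(10.0, 30.0)       # collapse-driven SN progenitors
n_hn = imf_number(30.0, 50.0)       # assumed HN progenitors
sn_rate = 1e-2                      # SN rate per galaxy [1/yr]
print(f"N_HN/N_SN = {n_hn/n_sn:.2f} -> "
      f"HN rate ~ {sn_rate*n_hn/n_sn:.1e} per yr per galaxy")
```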
In order to know the chemical composition of the ejecta, a HNR must be young enough that the remnant is composed mainly of the HN ejecta rather than of swept-up interstellar medium (ISM). Here we consider the Fe distribution in the remnant, because the main product of explosive nucleosynthesis, $`{}_{}{}^{56}\mathrm{Ni}`$, decays to Fe. We can estimate the shock radius, $`R_\mathrm{s}`$, at which the amount of Fe from the ejecta becomes equal to that from the ISM in the remnant as follows:
$`R_\mathrm{s}=1.6\times 10\left({\displaystyle \frac{M_{\mathrm{Fe}}}{0.7M_{\odot }}}\right)^{1/3}\left({\displaystyle \frac{1\,\mathrm{cm}^{-3}}{n}}\right)^{1/3}\left({\displaystyle \frac{1.36}{\mu }}\right)^{1/3}[\mathrm{pc}],`$ (1)
where $`M_{\mathrm{Fe}}`$, $`n`$, and $`\mu `$ are the amount of Fe from the ejecta, the mean ambient hydrogen density, and the mean atomic weight of cosmic material per H atom. Here we assumed that the mass fraction of Fe in the ISM is equal to that in the solar system abundances. On the other hand, the shock radius in the adiabatic phase can be written as follows:
$`R_\mathrm{s}=7.9\left({\displaystyle \frac{E_{\mathrm{exp}}}{10^{52}\mathrm{erg}}}\right)^{1/5}\left({\displaystyle \frac{1\,\mathrm{cm}^{-3}}{n}}\right)^{1/5}\left({\displaystyle \frac{t}{10^3\mathrm{yr}}}\right)^{2/5}[\mathrm{pc}],`$ (2)
where $`E_{\mathrm{exp}}`$ and $`t`$ are the total explosion energy and the age of the remnant, respectively. In the case of SN 1998bw, $`M_{\mathrm{Fe}}`$ is estimated to be $`0.7M_{\odot }`$. If we consider this to be the standard case for a HN, $`t`$ has to be less than the following value:
$`t\lesssim 6.4\times 10^3\left({\displaystyle \frac{n}{1\,\mathrm{cm}^{-3}}}\right)^{1/2}\left({\displaystyle \frac{10^{52}\mathrm{erg}}{E_{\mathrm{exp}}}}\right)^{1/2}\left({\displaystyle \frac{R_s}{16\mathrm{p}\mathrm{c}}}\right)^{5/2}[\mathrm{yr}].`$ (3)
That is, roughly speaking, HNRs younger than $`10^4`$ yr must be searched for in order to determine the chemical composition of the ejecta. As for the limiting distance from the Earth to the target, it must be nearer than 3 Mpc in order to resolve the asymmetry of the chemical composition of the ejecta, as long as the spatial resolution of the X-ray telescope is 1 arcsec.
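For orientation, Eqs. (1)-(3) are easy to evaluate numerically; a sketch using the fiducial parameters of the text (the scan over ambient densities is illustrative):

```python
import numpy as np

def r_fe(m_fe=0.7, n=1.0, mu=1.36):
    """Eq. (1): radius [pc] at which swept-up ISM iron equals ejecta iron."""
    return 16.0 * (m_fe/0.7)**(1/3) * (1.0/n)**(1/3) * (1.36/mu)**(1/3)

def t_max(n=1.0, e_exp=1e52):
    """Eq. (3): maximum age [yr] for an ejecta-dominated Fe distribution."""
    return 6.4e3 * n**0.5 * (1e52/e_exp)**0.5 * (r_fe(n=n)/16.0)**2.5

for n in (0.1, 1.0, 10.0):          # ambient hydrogen densities [cm^-3]
    print(f"n = {n:5.1f} cm^-3: R_Fe = {r_fe(n=n):5.1f} pc, "
          f"t_max = {t_max(n=n):.1e} yr")
```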
Since there are 55 galaxies within 3 Mpc of our Galaxy, the number of HNRs whose chemical composition can be spatially resolved is estimated to be $`5\times (10^2`$–$`10^{-3})`$, where ($`10^{-3}`$–$`10^{-8}`$) $`\mathrm{yr}^{-1}`$ per galaxy and $`10^4`$ yr are adopted for the HN event rate and the age of the oldest HNR, respectively. With the optimistic estimate, more HNRs will be found, and we will be able to discuss the chemical composition statistically.
Here we consider the report of Wang on NGC 5471B and MF83 in M101. Wang reported that NGC 5471B and MF83 may be HNRs, since they require explosion energies comparable to the energies frequently associated with GRBs. Since the distance of M101 from our Galaxy is about 7.2 $`\pm `$ 0.4 Mpc, it seems difficult at present to observe the distribution of the chemical composition of these remnants. However, we can say that the HNR event rate seems larger than the lower estimate of the GRB rate if these objects are really HNRs. Although other interpretations are possible for these highly luminous X-ray sources, a search for hypernova remnants near our Galaxy has the potential to reveal the mechanism of the GRB.
## III Neutrino Emission from Collapsars
The second important feature of collapsars is that no neutron star is formed; instead, an accretion disk forms around the black hole. One of the most probable heating sources for jet formation is believed to be the annihilation of $`\nu \overline{\nu }`$ pairs emitted from the accretion disk. In a SN explosion, on the other hand, neutrinos are emitted only from the surface of the neutron star. In this section we discuss the differences in the energy spectrum of the emitted neutrinos between collapse-driven SNe and collapsars.
In the case of a SN, the energy spectrum of the neutrinos is approximately represented by a thermal distribution, because the mean free path of the neutrinos is much shorter than the radius of the neutron star. Strictly speaking, compared to a perfect Fermi-Dirac distribution with zero chemical potential, the high-energy phase space is less populated, because the high-energy tail is damped by the much larger opacities ($`\propto ϵ_\nu ^2`$). In addition, it is well known that the total energy of the neutrinos is determined only by the gravitational binding energy of the neutron star.
On the other hand, the energy spectra of the neutrinos are not damped in collapsars, because the nucleon density of the accretion disk is much lower than that of a neutron star. Namely, the energy spectra of the neutrinos emitted from collapsars are entirely proportional to the emission rates. The total energy of the neutrinos can then depend on many physical parameters, such as the total accreting mass $`M`$ and the mass accretion rate $`\dot{M}`$, so there will be a variety of total neutrino energies among collapsars. MacFadyen and Woosley have shown that the accretion disk in a collapsar is well described by the analytic solution derived by Popham et al. According to this analytic solution, the neutrinos are mainly emitted from the region where $`T`$ = (1-10) MeV and $`\rho `$ = ($`10^9`$-$`10^{10}`$) g $`\mathrm{cm}^{-3}`$.
If we assume that the density and the temperature are constant in the neutrino-emitting region, we can estimate the energy spectrum of the neutrinos emitted from the accretion disk. For $`n+e^+\to p+\overline{\nu _e}`$, the spectrum of $`\overline{\nu _e}`$ per unit time, unit volume, and unit energy is
$$\frac{d^2n_{\overline{\nu _e}}^{eN}}{dtdE_{\overline{\nu _e}}}(E_{\overline{\nu _e}})=\frac{G_F^2}{2\pi ^3}(1+3\stackrel{~}{C_A}^2)\,n_n\,E_{\overline{\nu _e}}^2\sqrt{E_{\overline{\nu _e}}^2-m_e^2}\,\left(E_{\overline{\nu _e}}-Q\right)\frac{1}{e^{(E_{\overline{\nu _e}}-Q)/T}+1},$$
(4)
where $`E_{\overline{\nu _e}}`$ is the energy of $`\overline{\nu _e}`$, $`T`$ is the temperature, $`G_F`$ is the Fermi coupling constant, $`\stackrel{~}{C}_A\simeq 1.37`$ is normalized by the experimental value of the neutron lifetime $`\tau _n\simeq 887.6`$ s, $`n_n`$ is the number density of neutrons, $`Q\simeq 1.29`$ MeV, and $`m_e`$ is the electron mass. For $`e^++e^-\to \nu _e+\overline{\nu _e}`$, we obtain
$$\frac{d^2n_{\overline{\nu _e}}^{e^+e^-}}{dtdE_{\overline{\nu _e}}}(E_{\overline{\nu _e}})=\frac{G_F^2}{9\pi ^4}(C_V^2+C_A^2)\,E_{\overline{\nu _e}}^3\,\frac{1}{e^{E_{\overline{\nu _e}}/T}+1}\,T^4\int _{m_e/T}^{\infty }\frac{(ϵ^2-(m_e/T)^2)^{3/2}}{e^ϵ+1}\,dϵ,$$
(5)
where $`C_V=1/2+2\mathrm{sin}^2\theta _W`$, $`C_A=1/2`$, $`\mathrm{sin}^2\theta _W\simeq 0.231`$ is the Weinberg angle, and we assume $`E_{\overline{\nu _e}}\gg T`$.
In Figure 2(a) we plot the obtained spectrum of $`\overline{\nu _e}`$ emitted from the accretion disk ($`d^2n_{\overline{\nu _e}}/dtdE_{\overline{\nu _e}}\equiv d^2n_{\overline{\nu _e}}^{eN}/dtdE_{\overline{\nu _e}}+d^2n_{\overline{\nu _e}}^{e^+e^-}/dtdE_{\overline{\nu _e}}`$) per unit time, unit volume, and unit energy. It should be noted that the high-energy tail is not damped at all, because the nucleon density of the accretion disk is much lower than that of a neutron star and the mean free path is much longer. This is a remarkable feature unique to collapsars.
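A minimal numerical sketch of the two emission terms, Eqs. (4) and (5), written in natural units with energies in MeV; the disk temperature and density below are representative values from the range quoted above, and the normalization follows the formulas literally:

```python
import numpy as np
from scipy.integrate import quad

# natural units (hbar = c = k_B = 1), energies in MeV
GF, CA_TIL = 1.166e-11, 1.37        # Fermi constant [MeV^-2], \tilde{C}_A
CV, CA = 0.5 + 2*0.231, 0.5
Q, ME = 1.29, 0.511                 # n-p mass difference, electron mass

def spec_eN(E, T, n_n):
    """Eq. (4): nubar_e from n + e+ -> p + nubar_e; n_n in MeV^3."""
    if E <= Q:
        return 0.0
    pref = GF**2 / (2*np.pi**3) * (1 + 3*CA_TIL**2) * n_n
    ps = E**2 * np.sqrt(max(E**2 - ME**2, 0.0)) * (E - Q)
    return pref * ps / (np.exp((E - Q)/T) + 1.0)

def spec_pair(E, T):
    """Eq. (5): nubar_e from e+ e- pair annihilation."""
    integ, _ = quad(lambda x: (x**2 - (ME/T)**2)**1.5 / (np.exp(x) + 1.0),
                    ME/T, np.inf)
    return (GF**2 / (9*np.pi**4) * (CV**2 + CA**2) * T**4
            * E**3 / (np.exp(E/T) + 1.0) * integ)

rho, Xn, T = 1e10, 0.5, 5.0                     # g/cm^3, neutron fraction, MeV
n_n = Xn * rho / 1.674e-24 * (1.9733e-11)**3    # cm^-3 -> MeV^3
for E in (2.0, 5.0, 10.0, 20.0, 40.0):
    print(f"E = {E:5.1f} MeV: eN = {spec_eN(E, T, n_n):.2e}, "
          f"pair = {spec_pair(E, T):.2e}  [natural units]")
```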
The luminosity of $`\overline{\nu _e}`$ can be obtained by integrating the spectrum as $`\dot{q}\equiv \int E_{\overline{\nu _e}}\,\frac{d^2n_{\overline{\nu _e}}}{dtdE_{\overline{\nu _e}}}\,dE_{\overline{\nu _e}}`$. Then we obtain $`\dot{q}^{eN}\simeq 4.6\times 10^{33}\rho _{10}T_{11}^6X_{\mathrm{nuc}}\,\mathrm{erg}\,\mathrm{cm}^{-3}\,\mathrm{s}^{-1}`$ and $`\dot{q}^{e^+e^-}\simeq 2.4\times 10^{33}T_{11}^9\,\mathrm{erg}\,\mathrm{cm}^{-3}\,\mathrm{s}^{-1}`$, where $`\rho _{10}`$ = $`\rho `$/$`10^{10}`$ g $`\mathrm{cm}^{-3}`$, $`T_{11}`$ = $`T`$/$`10^{11}`$ K, and $`X_{\mathrm{nuc}}`$ is the mass fraction of nucleons. $`X_{\mathrm{nuc}}`$ is given by $`X_{\mathrm{nuc}}=30.97\rho _{10}^{-3/4}T_{10}^{9/8}\mathrm{exp}(-0.6096/T_{10})`$, where $`X_{\mathrm{nuc}}\le 1`$.
As is clear from the above relations, the luminosity of the neutrinos from collapsars depends sensitively on the temperature. Therefore, if the configuration of the accretion disk is modified by changes in the environment, such as the mass accretion rate, the mass of the progenitor, or the mass of the black hole, then the total luminosity and energy of the neutrinos from collapsars will change drastically. These points are entirely different from the normal collapse-driven SN, for which the total energy of the neutrinos is determined only by the gravitational binding energy of the central neutron star. The number of $`\overline{\nu _e}`$ events from a HN expected at Super-Kamiokande is represented by
$$\frac{dR}{dE_{e^+}}=\frac{V_AN_p}{4\pi D^2}\sigma _{p\overline{\nu _e}}(E_{e^+})\frac{d^2n_{\overline{\nu _e}}}{dtdE_{\overline{\nu _e}}}(E_{e^+})\mathrm{\Delta }t,$$
(6)
where $`E_{e^+}=E_{\overline{\nu _e}}-Q`$ is the energy of the positron produced through $`p+\overline{\nu _e}\to n+e^+`$ in the detector, $`\sigma _{p\overline{\nu _e}}=\frac{G_F^2}{2\pi ^3}(1+3\stackrel{~}{C_A}^2)E_{e^+}\sqrt{E_{e^+}^2-m_e^2}`$ is the cross section of the process, $`V_A`$ is the volume of the emitting region in the accretion disk, $`N_p\simeq 1.5\times 10^{35}`$ is the number of protons in Super-Kamiokande, $`D`$ is the distance from the Earth to the collapsar, and $`\mathrm{\Delta }t\simeq M/\dot{M}`$ is the duration of the emission. In Figure 2(b) we plot the event number for a representative parameter set. We find that the neutrino emission from a collapsar can be observed at Super-Kamiokande as long as the collapsar is located within $`\sim `$3 Mpc of the Earth.
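An order-of-magnitude sketch of Eq. (6), folding the $`eN`$ spectrum with the detection cross section; the emitting volume, duration, and disk parameters below are hypothetical placeholders, not the representative parameter set used in the figure:

```python
import numpy as np

GF, CAT, Q, ME = 1.166e-11, 1.37, 1.29, 0.511   # MeV units
HC, HB = 1.9733e-11, 6.582e-22                  # hbar*c [MeV cm], hbar [MeV s]

def spec_eN(E, T, n_n):
    """nubar_e spectrum of Eq. (4), natural units."""
    if E <= Q:
        return 0.0
    return (GF**2/(2*np.pi**3) * (1 + 3*CAT**2) * n_n
            * E**2 * np.sqrt(max(E**2 - ME**2, 0.0)) * (E - Q)
            / (np.exp((E - Q)/T) + 1.0))

def sigma_ibd(E):
    """Cross section of Eq. (6) for p + nubar_e -> n + e+, in cm^2."""
    Ee = E - Q
    if Ee <= ME:
        return 0.0
    return GF**2/(2*np.pi**3)*(1+3*CAT**2)*Ee*np.sqrt(Ee**2 - ME**2)*HC**2

# hypothetical placeholders
T, n_n = 5.0, 0.5*1e10/1.674e-24*HC**3    # MeV; neutron density in MeV^3
V_A, dt = (2e7)**3, 10.0                  # emitting volume [cm^3], M/Mdot [s]
D, N_p = 3.0*3.086e24, 1.5e35             # 3 Mpc in cm; protons in Super-K

E = np.linspace(Q + ME, 60.0, 600)
dRdE = np.array([V_A*N_p/(4*np.pi*D**2) * sigma_ibd(e)
                 * spec_eN(e, T, n_n)/(HC**3 * HB) * dt for e in E])
print(f"expected nubar_e events ~ {np.trapz(dRdE, E):.0f}")
```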
The estimated detection rate at Super-Kamiokande then becomes
$`P\sim 5\times (10^{-2}`$–$`10^{-7})\,[\mathrm{yr}^{-1}],`$ (7)
where the estimate is made in the same way as in section II; that is, ($`10^{-3}`$–$`10^{-8}`$) $`\mathrm{yr}^{-1}`$ per galaxy is adopted for the HN event rate, and there are 55 galaxies within 3 Mpc of our Galaxy. Using the optimistic event rate of $`\sim 5\times 10^{-2}`$ per year, the detection probability of a collapsar can be as large as that of a collapse-driven SN.
## IV Summary and Discussion
In this study, characteristic products of nucleosynthesis and neutrino emission have been proposed as two indicators that will reflect the features of the collapsars.
We considered the detectability of HNRs because, when we find an asymmetric SNR, we cannot easily tell whether it is the remnant of a collapsar or of a rotating collapse-driven supernova. As a result, the number of HNRs whose chemical composition can be spatially resolved is estimated to be $`5\times (10^2`$–$`10^{-3})`$. With the optimistic estimate, more HNRs will be found, and it will become possible to discuss the chemical composition statistically. Such observations will enable us to determine which model is realistic and which is not, and may also shed light on the occurrence frequency of type I collapsars relative to type II collapsars. Moreover, we can say that the HNR event rate seems larger than the lower estimate of the GRB rate if NGC 5471B and MF83 in M101 are really HNRs. Although other interpretations are possible for these highly luminous X-ray sources, the search for hypernova remnants near our Galaxy has the potential to reveal the mechanism of the GRB.
Strictly speaking, there will be some differences between the SNRs of collapsars and those of collapse-driven supernovae. We think that an extreme jet-induced explosion like that of a collapsar will not happen in the case of a SN, because almost all of the matter has to be ejected in order to leave not a black hole but a neutron star at the center. That is, matter around the equatorial plane also has to be ejected, which will be observed as a ‘jet-like‘ explosion, as in SN 1987A. On the other hand, an extreme jet-induced explosion is required in order to make fireballs in the jet-induced HN model. So, even if matter around the equatorial plane is ejected for some reason in the HN case too, the degree of the jet-induced explosion will be very large, and the chemical composition will depend strongly on the zenith angle in the case of the type I collapsar.
It is also noted that the mass accretion rate becomes low if the matter around the equatorial plane is ejected from a collapsar. This will result in a decline of the total energy of the neutrinos emitted from the accretion disk and, as a result, the total explosion energy may become small in that case. It has been reported that the explosion energy of GRB 980425, which is said to be associated with SN 1998bw, is much lower than that of the usual GRBs. In the case of SN 1998bw, we think that the matter around the equatorial plane might have been ejected from the system, resulting in the formation of relatively weak jets and the faint GRB 980425. This means that SN 1998bw and GRB 980425 may be classified in region (e) in Figure 1. Of course, this picture requires that the system of SN 1998bw and GRB 980425 be highly asymmetric, because the total explosion energy of SN 1998bw is estimated to be (20-50)$`\times 10^{51}`$ erg when a spherical explosion is assumed.
As for the (anti-)electron neutrino emission from collapsars, the energy spectrum is mainly determined by the emission rate due to electron (positron) capture on protons (neutrons). As the temperature becomes higher, the contribution of the process of electron-positron pair annihilation becomes non-negligible. It is also noted that the high-energy tail is not damped in the case of the collapsar, because the density of the emitting region is low. These features of the energy spectrum are quite different from those of a SN.
The total energy of the neutrinos depends on many physical quantities, such as the total accreting mass and the mass accretion rate. Note that the emission rate due to electron capture on protons is proportional to $`T^6`$, so a small change in temperature results in a great change in the neutrino flux. That is why there will be a variety of total neutrino luminosities among collapsars, in striking contrast to the case of SNe. As for the event rate, the detection probability of a collapsar can be as large as that of a collapse-driven SN if we use the optimistic event rate of $`\sim 5\times 10^{-2}`$ per year at Super-Kamiokande.
Finally, we stress again that these features of nucleosynthesis and neutrino emission will reveal the mechanism of the GRB quite well. We hope for an increase of such observations in the near future.
###### Acknowledgements.
This research has been supported in part by a Grant-in-Aid for the Center-of-Excellence (COE) Research (07CE2002) and for the Scientific Research Fund (199908802, 199804502) of the Ministry of Education, Science, Sports and Culture in Japan and by Japan Society for the Promotion of Science Postdoctoral Fellowships for Research Abroad.
## 1 Introduction
The discovery that certain field theories admit concrete realizations as a string theory on a particular background has caused a great deal of excitement in recent years . However, attempts to apply these correspondences to study the details of these theories have only met with limited success so far. The problem stems from the fact that our understanding of both sides of the correspondence is limited. On the field theory side, most of what we know comes from perturbation theory where we assume that the coupling is weak. On the string theory side, most of what we know comes from the supergravity approximation where the curvature is small. There are no known situations where both approximations are simultaneously valid. At the present time, comparisons between the dual gauge/string theories have been restricted to either qualitative issues or quantities constrained by symmetry. Any improvement in our understanding of field theories beyond perturbation theory or string theories beyond the supergravity approximation is, therefore, a welcome development.
Previously we showed that Supersymmetric Discrete Light Cone Quantization (SDLCQ) of field theories can, in principle, be used to make a quantitative comparison with the supergravity approximation on the string theory side of the correspondence. We discussed this in two space-time dimensions where the SDLCQ approach works particularly well; however, it can in principle be extended to more dimensions.
We will study the field theory/string theory correspondence motivated by considering the near-horizon decoupling limit of a D1-brane in type IIB string theory . The gauge theory corresponding to this theory is the Yang-Mills theory in two dimensions with 16 supercharges. Its SDLCQ formulation was recently reported in , and recent work has put the use of SDLCQ for this class of problems on a stronger footing . This is probably the simplest known example of a field theory/string theory correspondence involving a field theory in two dimensions with a concrete Lagrangian formulation.
A convenient quantity that can be computed on both sides of the correspondence is the correlation function of gauge invariant operators . We will focus on two-point functions of the stress-energy tensor. This turns out to be a very convenient quantity to compute for reasons that are discussed in . Some aspects of this, as it pertains to a consideration of black hole entropy, were recently discussed in . In the DLCQ literature, the spectrum of hadrons is often reported . This would be fine for theories in a confining phase. However, we expect the SYM in two dimensions to flow to a non-trivial conformal fixed point in the infra-red. The spectrum of states will therefore form a continuum and will be cumbersome to handle. On the string theory side, entropy density and the quark anti-quark potential are frequently reported. The definition of entropy density requires that we place the field theory in a space-like box which is incommensurate with the light-like box of DLCQ. Similarly, a static quark anti-quark configuration does not fit very well inside a discretized light-cone geometry. A correlation function of point-like operators does not suffer from these problems.
## 2 Correlation functions in supergravity
The correlation function of the stress-energy tensor on the string theory side, with use of the supergravity approximation, was presented in , and we will only quote the result here. The computation is essentially a generalization of . The main conclusion on the supergravity side was reported recently in . Up to a numerical coefficient of order one, which we have suppressed, we found that
$$\left\langle 𝒪(x)𝒪(0)\right\rangle =\frac{N_c^{\frac{3}{2}}}{g_{YM}x^5}.$$
(1)
This result passes the following important consistency test. The SYM in 2 dimensions with 16 supercharges has conformal fixed points in both the UV and the IR, with central charges of order $`N_c^2`$ and $`N_c`$, respectively. Therefore, we expect the two-point function of the stress-energy tensor to scale like $`N_c^2/x^4`$ and $`N_c/x^4`$ in the deep UV and IR, respectively. According to the analysis of , the correlator is expected to deviate from these conformal behaviors and cross over to a regime where the supergravity calculation can be trusted. The crossover occurs at $`x=1/g_{YM}\sqrt{N_c}`$ and $`x=\sqrt{N_c}/g_{YM}`$. At these points, the $`N_c`$ scaling of (1) and the conformal result match in the sense of the correspondence principle .
## 3 Correlation functions in SUSY with 16 Supercharges
The challenge then is to attempt to reproduce the scaling relation (1), fix the numerical coefficient, and determine the details of the crossover behavior using SDLCQ. In order to actually evaluate the correlation functions, we must resort to numerical analysis.
The technique of SDLCQ is reviewed in , so we will be brief here. The basic idea of light-cone quantization is to parameterize space-time using light-cone coordinates $`x^+`$ and $`x^{}`$ and to quantize the theory making $`x^+`$ play the role of time. In the discrete light cone approach, we require the momentum $`p_{}=p^+`$ along the $`x^{}`$ direction to take on discrete values in units of $`p^+/K`$ where $`p^+`$ is the conserved total momentum of the system and $`K`$ is an integer commonly referred to as the harmonic resolution . One can think of this discretization as a consequence of compactifying the $`x^{}`$ coordinate on a circle with a period $`2L=2\pi K/p^+`$. The advantage of discretizing on the light cone is the fact that the dimension of the Hilbert space becomes finite. Therefore, the Hamiltonian is a finite dimensional matrix, and its dynamics can be solved explicitly. In SDLCQ one makes the DLCQ approximation to the supercharges, and these discrete representations satisfy the supersymmetry algebra. Therefore SDLCQ enjoys the improved renormalization properties of supersymmetric theories. Of course, to recover the continuum result, we must send $`K`$ to infinity and as luck would have it, we find that SDLCQ usually converges faster than the naive DLCQ. Of course, in the process the size of the matrices will grow, making the computation harder and harder.
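As a toy illustration of how DLCQ renders the Hilbert space finite, one can count the single-trace Fock states at resolution $`K`$ for a single adjoint boson (a drastic simplification of the 16-supercharge theory, introduced here only to make the counting concrete): states are cyclic words of positive integer light-cone momenta summing to $`K`$.

```python
def compositions(K):
    """All ordered tuples of positive integers summing to K."""
    if K == 0:
        yield ()
        return
    for first in range(1, K + 1):
        for rest in compositions(K - first):
            yield (first,) + rest

def single_trace_states(K):
    """Cyclic equivalence classes of momentum words: in the large-N_c
    limit a single color trace is unchanged by cyclic permutations."""
    seen = set()
    for c in compositions(K):
        seen.add(min(c[i:] + c[:i] for i in range(len(c))))
    return seen

for K in range(2, 9):
    print(f"K = {K}: {len(single_trace_states(K))} single-trace states")
```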
Let us now return to the problem at hand. We would like to compute a general expression of the form $`F(x^-,x^+)=\langle 𝒪(x^-,x^+)𝒪(0,0)\rangle `$. In DLCQ, where we fix the total momentum in the $`x^-`$ direction, it is more natural to compute the Fourier transform and express the transform in spectrally decomposed form
$$\stackrel{~}{F}(P_-,x^+)=\frac{1}{2L}\left\langle 𝒪(P_-,x^+)𝒪(-P_-,0)\right\rangle =\underset{i}{\sum }\frac{1}{2L}\left\langle 0|𝒪(P_-)|i\right\rangle e^{iP_+^ix^+}\left\langle i|𝒪(-P_-,0)|0\right\rangle .$$
(2)
The position-space form of the correlation function is recovered by Fourier transforming with respect to $`P_-=K\pi /L`$. We can continue to Euclidean space by taking $`r=\sqrt{2x^+x^-}`$ to be real. The result for the correlator of the stress-energy tensor was presented in , and we only quote the results here:
$$F(x^-,x^+)=\underset{i}{\sum }\left|\frac{L}{\pi }\left\langle 0|T^{++}(K)|i\right\rangle \right|^2\left(\frac{x^+}{x^-}\right)^2\frac{M_i^4}{8\pi ^2K^3}K_4\left(M_i\sqrt{2x^+x^-}\right),$$
(3)
where $`M_i`$ is a mass eigenvalue and $`K_4(x)`$ is the modified Bessel function of order 4. In we found that the momentum operator $`T^{++}(x)`$ is given by
$$T^{++}(x)=\mathrm{tr}\left[(\partial _{-}X^I)^2+\frac{1}{2}\left(iu^\alpha \partial _{-}u^\alpha -i(\partial _{-}u^\alpha )u^\alpha \right)\right],\qquad I,\alpha =1,\mathrm{\dots },8
$$
(4)
where $`X`$ and $`u`$ are the physical adjoint scalars and fermions respectively, following the notation of . When discretized, these operators have the mode expansions
$`X_{i,j}^I`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{4\pi }}}{\displaystyle \underset{n=1}{\overset{\infty }{\sum }}}{\displaystyle \frac{1}{\sqrt{n}}}\left[a_{ij}^I(n)e^{-i\pi nx^-/L}+a_{ji}^{I\dagger }(n)e^{i\pi nx^-/L}\right],`$
$`u_{i,j}^\alpha `$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{4L}}}{\displaystyle \underset{n=1}{\overset{\infty }{\sum }}}\left[b_{ij}^\alpha (n)e^{-i\pi nx^-/L}+b_{ji}^{\alpha \dagger }(n)e^{i\pi nx^-/L}\right].`$ (5)
The matrix element $`(L/\pi )\langle 0|T^{++}(K)|i\rangle `$ is independent of $`L`$ and can be substituted directly to give an explicit expression for the two-point function. We see immediately that the correlator has the correct small-$`r`$ behavior, for in that limit it asymptotes to
$$\left(\frac{x^-}{x^+}\right)^2F(x^-,x^+)=\frac{N_c^2(2n_b+n_f)}{4\pi ^2r^4}\left(1-\frac{1}{K}\right).$$
(6)
On the other hand, the contribution to the correlator from strictly massless states is given by
$$\left(\frac{x^-}{x^+}\right)^2F(x^-,x^+)=\underset{i}{\sum }\left|\frac{L}{\pi }\left\langle 0|T^{++}(K)|i\right\rangle \right|_{M_i=0}^2\frac{6}{K^3\pi ^2r^4}.$$
(7)
It is important that this $`1/r^4`$ behavior at large $`r`$ not be confused with the $`1/r^4`$ behavior that we seek at large $`r`$. First of all, there is not supposed to be any massless physical bound state in this theory, and, secondly, it has the wrong $`N_c`$ dependence.
Relative to the $`1/r^4`$ behavior at small $`r`$, the $`1/r^4`$ behavior at large $`r`$ that we expect is down by a factor of $`1/N_c`$. Since we are doing a large-$`N_c`$ calculation, this behavior is suppressed. We can only hope to see the transition from the $`1/r^4`$ behavior at small $`r`$ to the region where the correlator behaves like $`1/r^5`$.
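Once a spectrum and the matrix elements are in hand, the evaluation of Eq. (3) and of the log-log derivative used in Sec. 5 is mechanical; a sketch (the spectrum and matrix elements below are random placeholders standing in for actual SDLCQ output):

```python
import numpy as np
from scipy.special import kv          # modified Bessel function K_nu

def correlator(r, masses, matels, K):
    """(x^-/x^+)^2 F(r) of Eq. (3), with r = sqrt(2 x^+ x^-); matels are
    the quantities (L/pi) <0|T^{++}(K)|i>."""
    F = np.zeros_like(r)
    for M, t in zip(masses, matels):
        F += abs(t)**2 * M**4 / (8*np.pi**2 * K**3) * kv(4, M*r)
    return F

rng = np.random.default_rng(0)
K, n_states = 4, 50
masses = np.sort(rng.uniform(1.0, 20.0, n_states))  # placeholder spectrum
matels = rng.normal(size=n_states) / masses          # placeholder elements
r = np.logspace(-2, 1, 200)
F = correlator(r, masses, matels, K)
# log-log derivative of r^4 F, the quantity plotted in Fig. 1(b)
slope = np.gradient(np.log(r**4 * F), np.log(r))
print(f"slope at small r: {slope[0]:+.2f}, at large r: {slope[-1]:+.2f}")
```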
## 4 Discrete Symmetries of the Problem.
In order to calculate the correlation function we use the expression (2). This means that after diagonalizing the Hamiltonian $`P^{-}`$ one should evaluate the projection of each eigenfunction onto the specific state $`T^{++}(K)|0\rangle `$. The fact that we are only interested in states having a nonzero value of this projection leads to significant simplifications.
One can diagonalize any of the eight supercharges $`Q_\alpha ^{-}`$. In the continuum limit, the result does not depend on the value of $`\alpha `$ that one chooses, but in DLCQ the situation is a little more subtle. As was shown in , while the spectrum of $`(Q_\alpha ^{-})^2`$ is the same for all $`\alpha `$, the wave functions depend on the choice of supercharge. This dependence is an artifact of DLCQ and should disappear in the continuum limit; we refer to for a discussion of this issue. Here we will just pick one supercharge (for example, $`Q_1^{-}`$). Since the state $`T^{++}(K)|0\rangle `$ is a singlet under the R–symmetry acting on the “flavor” index of $`Q_\alpha ^{-}`$, the correlator (2) does not depend on the choice of $`\alpha `$ even at finite resolution.
A significant simplification occurs at this stage. Suppose there exists an operator $`S`$ commuting with both $`P^{-}`$ and $`T^{++}(K)`$ and such that $`S|0\rangle =s_0|0\rangle `$. Then the Hamiltonian and $`S`$ can be diagonalized simultaneously. From now on we assume that the set of states $`|i\rangle `$ is the result of such a diagonalization. In this case, only states satisfying the condition $`S|i\rangle =s_0|i\rangle `$ contribute to the sum in (2), and we only need to diagonalize $`P^{-}`$ in this sector. So if one finds a large enough set of appropriate operators $`S`$, the size of the problem can be reduced significantly. By looking at the structure of the state $`T^{++}(K)|0\rangle `$ one can conclude that, given arbitrary permutations $`P`$ and $`Q`$ of the $`8`$ flavor indices, any transformation of the form
$`a_{ij}^I(k)\to f(I)\,a_{ij}^{P[I]}(k),\qquad f(I)=\pm 1`$
$`b_{ij}^\alpha (k)\to g(\alpha )\,b_{ij}^{Q[\alpha ]}(k),\qquad g(\alpha )=\pm 1`$ (8)
commutes with $`T^{++}(K)`$, and that the vacuum is an eigenstate of this transformation with eigenvalue $`1`$. The requirement that $`P^{-}=(Q_1^{-})^2`$ be invariant under $`S`$ imposes some restrictions on the permutations. In fact, we will require that $`Q_1^{-}`$ itself be invariant under $`S`$, in order to guarantee that $`P^{-}`$ is invariant.
The form of the supercharge from is
$$Q_\alpha ^{-}=\int _0^{\infty }[\mathrm{\cdots }]\,b_\alpha ^{\dagger }(k_3)a_I(k_1)a_I(k_2)+\mathrm{\cdots }+(\beta _I\beta _J^T-\beta _J\beta _I^T)_{\alpha \beta }[\mathrm{\cdots }]\,b_\beta ^{\dagger }(k_3)a_I(k_1)a_J(k_2)+\mathrm{\cdots }$$
(9)
Here the $`\beta _I`$ are $`8\times 8`$ real matrices satisfying $`\{\beta _I,\beta _J^T\}=2\delta _{IJ}`$. We use a special representation for these matrices given in .
Let us consider the expression for $`Q_1^{-}`$. The first part of the supercharge (the one which does not include $`\beta `$ matrices) is invariant under (8) as long as $`g(1)=1`$ and $`Q[1]=1`$; we will consider only such transformations. In order to analyze the symmetries of the $`\beta `$ terms, let us make the following observation. In the representation of the $`\beta `$ matrices we have chosen, the expression $`B_{IJ}^\alpha =\left(\beta _I\beta _J^T-\beta _J\beta _I^T\right)_{1\alpha }`$ may take only the values $`\pm 2`$ or zero. Moreover, for any pair $`(I,J)`$ there is at most one value of $`\alpha `$ corresponding to a nonzero $`B`$. This fact allows us to represent $`B`$ in a compact form. To do so, we introduce a new object $`\mu `$ defined by
$$\mu _{IJ}=\{\begin{array}{cc}\hfill \alpha ,& B_{IJ}^\alpha =2\hfill \\ \hfill -\alpha ,& B_{IJ}^\alpha =-2\hfill \\ \hfill 0,& B_{IJ}^\alpha =0\text{ for all }\alpha .\hfill \end{array}$$
(10)
Our choice of $`\beta `$ matrices then leads to the following expression for $`\mu `$:
$$\mu =\left(\begin{array}{cccccccc}\hfill 0& \hfill 5& \hfill 7& \hfill 2& \hfill 6& \hfill 3& \hfill 4& \hfill 8\\ \hfill 5& \hfill 0& \hfill 3& \hfill 6& \hfill 2& \hfill 7& \hfill 8& \hfill 4\\ \hfill 7& \hfill 3& \hfill 0& \hfill 8& \hfill 4& \hfill 5& \hfill 6& \hfill 2\\ \hfill 2& \hfill 6& \hfill 8& \hfill 0& \hfill 5& \hfill 4& \hfill 3& \hfill 7\\ \hfill 6& \hfill 2& \hfill 4& \hfill 5& \hfill 0& \hfill 8& \hfill 7& \hfill 3\\ \hfill 3& \hfill 7& \hfill 5& \hfill 4& \hfill 8& \hfill 0& \hfill 2& \hfill 6\\ \hfill 4& \hfill 8& \hfill 6& \hfill 3& \hfill 7& \hfill 2& \hfill 0& \hfill 5\\ \hfill 8& \hfill 4& \hfill 2& \hfill 7& \hfill 3& \hfill 6& \hfill 5& \hfill 0\end{array}\right).$$
(11)
We are looking for a subset of transformations (8) that satisfy the conditions $`g(1)=1`$ and $`Q[1]=1`$ and leave the matrix $`\mu `$ invariant. The latter property means that
$$Q[\mu _{P[I]P[J]}]=g(\mu _{IJ})f(I)f(J)\mu _{IJ}.$$
(12)
Since the subset of transformations that we seek forms a subgroup $`R`$ of the permutation group $`S_8\times S_8`$, it is natural to look for the elements of $`R`$ that square to one. In the case of $`S_8\times S_8`$, it is known that products of such elements generate the whole group, and, as we will show later, the same is true for $`R`$. One can construct all $`Z_2`$ symmetries satisfying (12), but not all of them are independent; in particular, if $`a`$ and $`b`$ are two such symmetries, then $`aba`$ is also a $`Z_2`$ symmetry. By studying the different possibilities we have found that there are $`7`$ independent $`Z_2`$ symmetries in the group $`R`$, and we have chosen them to be
| | $`a_1`$ | $`a_2`$ | $`a_3`$ | $`a_4`$ | $`a_5`$ | $`a_6`$ | $`a_7`$ | $`a_8`$ | $`b_2`$ | $`b_3`$ | $`b_4`$ | $`b_5`$ | $`b_6`$ | $`b_7`$ | $`b_8`$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | $`a_7`$ | $`a_3`$ | $`a_2`$ | $`a_6`$ | $`a_8`$ | $`a_4`$ | $`a_1`$ | $`a_5`$ | $`b_2`$ | $`b_3`$ | $`b_4`$ | $`b_6`$ | $`b_5`$ | $`b_8`$ | $`b_7`$ |
| 2 | $`a_3`$ | $`a_6`$ | $`a_1`$ | $`a_5`$ | $`a_4`$ | $`a_2`$ | $`a_8`$ | $`a_7`$ | $`b_4`$ | $`b_3`$ | $`b_2`$ | $`b_5`$ | $`b_8`$ | $`b_7`$ | $`b_6`$ |
| 3 | $`a_8`$ | $`a_7`$ | $`a_6`$ | $`a_5`$ | $`a_4`$ | $`a_3`$ | $`a_2`$ | $`a_1`$ | $`b_3`$ | $`b_2`$ | $`b_4`$ | $`b_5`$ | $`b_7`$ | $`b_6`$ | $`b_8`$ |
| 4 | $`a_5`$ | $`a_4`$ | $`a_8`$ | $`a_2`$ | $`a_1`$ | $`a_7`$ | $`a_6`$ | $`a_3`$ | $`b_2`$ | $`b_7`$ | $`b_8`$ | $`b_5`$ | $`b_6`$ | $`b_3`$ | $`b_4`$ |
| 5 | $`a_8`$ | $`a_3`$ | $`a_2`$ | $`a_7`$ | $`a_6`$ | $`a_5`$ | $`a_4`$ | $`a_1`$ | $`b_5`$ | $`b_3`$ | $`b_7`$ | $`b_2`$ | $`b_6`$ | $`b_4`$ | $`b_8`$ |
| 6 | $`a_5`$ | $`a_8`$ | $`a_7`$ | $`a_6`$ | $`a_1`$ | $`a_4`$ | $`a_3`$ | $`a_2`$ | $`b_8`$ | $`b_5`$ | $`b_4`$ | $`b_3`$ | $`b_6`$ | $`b_7`$ | $`b_2`$ |
| 7 | $`a_4`$ | $`a_6`$ | $`a_8`$ | $`a_1`$ | $`a_7`$ | $`a_2`$ | $`a_5`$ | $`a_3`$ | $`b_2`$ | $`b_6`$ | $`b_5`$ | $`b_4`$ | $`b_3`$ | $`b_7`$ | $`b_8`$ |
Using Mathematica we explicitly constructed all the symmetries of the type (8) satisfying (12). We found that the group of such transformations has $`168`$ elements, and we have shown that all of them can be generated from the seven $`Z_2`$ symmetries mentioned above.
In our numerical procedure we use the $`Z_2`$ symmetries in the following way. Since all states relevant for the correlator are singlets under the symmetry group $`R`$, we group our states into classes and treat each whole class as a new state. For instance, the simplest nontrivial singlet looks like
$$|1\rangle =\frac{1}{8}\underset{I=1}{\overset{8}{\sum }}\text{tr}\left(a^{\dagger }(1,I)a^{\dagger }(K-1,I)\right)|0\rangle .$$
(13)
This means that if, during the construction of the basis, we encounter the state $`a^{\dagger }(1,1)a^{\dagger }(K-1,1)|0\rangle `$, it will be replaced by the class representative (in this case, by the state $`|1\rangle `$). This procedure significantly decreases the size of the basis, while keeping all the information necessary for calculating the correlator.
## 5 Numerical Results
Our numerical results are presented in Figs. 1(a) and 1(b). Figure 1(a) is a log-log plot of $`r^4`$ times the correlator versus $`r`$, so that a $`1/r^4`$ behavior appears as a flat line and a $`1/r^5`$ behavior gives rise to a line with slope $`-1`$. In Fig. 1(b) we plot the log-log derivative, which is computed from explicit differentiation inside the sum and amounts to a replacement of $`K_4(M_ir)`$ by $`-M_iK_3(M_ir)`$.
Computing this correlator beyond the small-$`r`$ asymptotics represents a formidable technical challenge. In we were able to construct the mass matrix explicitly and compute the spectrum for $`K=2`$, $`K=3`$, and $`K=4`$. Even for these modest values of the harmonic resolution, the Hilbert space contained thousands of states. Previously, in , we used this spectrum and the associated wave functions to calculate the correlator beyond the small-$`r`$ region. In the calculation we present here we have made three improvements, which have allowed us to expand the space by a factor of approximately 1000. The first and most straightforward improvement was to rewrite the code in C++, which simply runs faster than the Mathematica code and can be exported to faster machines. The second was to use the discrete flavor symmetry to reduce the size of the problem at a given resolution. The third improvement is a numerical algorithm that replaces the explicit diagonalization with an efficient but accurate approximation.
This numerical algorithm follows from the observation that the contributions to the eigenstate sum are weighted by the square of the projection $`\langle i|T^{++}(K)|0\rangle `$. The Lanczos diagonalization algorithm will naturally generate the states with nonzero projection if $`T^{++}(K)|0\rangle `$ is used as the starting vector. Let $`|u_1\rangle `$ be the normalized vector proportional to $`T^{++}(K)|0\rangle `$, set $`b_1=0`$, and construct a sequence of normalized vectors $`|u_n\rangle `$ according to the Lanczos iteration $`b_{n+1}|u_{n+1}\rangle =P^{-}|u_n\rangle -a_n|u_n\rangle -b_n|u_{n-1}\rangle `$, with $`a_n=\langle u_n|P^{-}|u_n\rangle `$. The $`|u_n\rangle `$ form an orthonormal basis with respect to which $`P^{-}`$ is tridiagonal and easily exponentiated. Because all of these vectors are generated by applying powers of $`P^{-}`$ to $`|u_1\rangle `$, only those eigenvectors with nonzero projections on $`|u_1\rangle `$ can appear. Although generating a complete basis by iteration can yield the exact answer,<sup>1</sup><sup>1</sup>1Both this statement about the complete basis and the previous statement about nonzero projections will hold only in exact arithmetic. Round-off errors will eventually destroy these relationships as the Lanczos iteration proceeds. doing many fewer iterations, even 20, can be sufficient to capture the important contributions. Such an approach to the computation of a matrix element is related to work by Haydock and others on matrix elements of resolvents.
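In schematic form, the iteration just described looks as follows (a random symmetric matrix stands in for $`P^{-}`$ and a random vector for the starting state; only the small tridiagonal matrix is diagonalized):

```python
import numpy as np

def lanczos_weights(H, v0, n_iter=25):
    """Tridiagonalize H in the Krylov space of v0; return the eigenvalues
    and the weights |<i|v0>|^2 that enter the spectral sum (2)."""
    u = v0 / np.linalg.norm(v0)
    u_prev = np.zeros_like(u)
    a, b = [], [0.0]
    for _ in range(n_iter):
        w = H @ u
        a.append(u @ w)
        w = w - a[-1]*u - b[-1]*u_prev
        b.append(np.linalg.norm(w))
        if b[-1] < 1e-12:
            break
        u_prev, u = u, w / b[-1]
    T = np.diag(a) + np.diag(b[1:len(a)], 1) + np.diag(b[1:len(a)], -1)
    evals, evecs = np.linalg.eigh(T)
    weights = np.abs(evecs[0, :])**2 * (v0 @ v0)   # projections onto v0
    return evals, weights

rng = np.random.default_rng(1)
N = 500
A = rng.normal(size=(N, N))
H = (A + A.T) / 2                    # stand-in for the Hamiltonian P^-
v0 = rng.normal(size=N)              # stand-in for T^{++}(K)|0>
evals, w = lanczos_weights(H, v0)
top = sorted(zip(w, evals), reverse=True)[:3]
print("dominant (weight, eigenvalue):",
      [(round(wi, 2), round(ei, 2)) for wi, ei in top])
```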
Before discussing our results we need to address the question of massless states. Our SDLCQ calculation of the spectrum of the (8,8) theory found massless states, and we argued that they are not normalizable bound states. The argument in that paper was not completely correct, but the conclusion remains true. We find that in these massless states the number of partons in all the contributions is either all even or all odd, depending on whether the resolution is even or odd.
We have not, however, removed these unphysical states from the data sets, but rather used them to obtain an estimate of the region in $`r`$ where the calculation breaks down. This is the region where the unphysical massless states dominate the correlator sum. Unfortunately, this is also the region where we would expect the true large-$`r`$ behavior to dominate the correlator, if only the extra states were absent. The correlator is only sensitive to the two-particle content of the wave function, and we see in Fig. 1(b) the characteristic behavior of the massless states at large $`r`$ only at even resolutions. In Fig. 1(a), for even resolution, the region where the correlator starts to behave like $`1/r^4`$ is clearly visible. In Fig. 1(b) we see that for even resolution the effect of the massless states on the derivative is felt at smaller values of $`r`$, where the even-resolution curves start to turn up. We use these smaller values to estimate the value of $`r`$ where the large-$`N_c`$ approximation breaks down; this value increases as we increase the resolution, as expected. Another estimate of where this approximation breaks down, which gives consistent values, is the set of points where the even- and odd-resolution derivative curves cross. We do not expect these curves to cross on general grounds, based on work in , where we considered a number of other theories.
A proof of the Maldacena conjecture would show up in Fig. 1(b) as a set of derivative curves that approach and then touch the line at $`-1`$ as we increase the resolution. Convergence in the resolution, $`K`$, would appear as a flattening of the derivative curves at $`-1`$ for the highest values of $`K`$.
We see that the derivative curves are approaching $`-1`$ as we increase the resolution and appear to come within $`10`$–$`15\%`$ before the approximation breaks down. There is, however, no indication of convergence yet; therefore, we cannot claim a numerical proof of the Maldacena conjecture.
## 6 Conclusion
In this article, we used the SDLCQ prescription to compute the correlation function of the stress-energy tensor $`T^{++}`$, which may be readily compared with the predictions provided by a supergravity analysis following the conjecture of Maldacena. Such a comparison requires non-perturbative methods on the field theory side, and the SDLCQ approach is the only numerical method suited to this task. At the present time the calculation gives results that are within $`10`$–$`15\%`$ of the predicted value; however, higher-resolution calculations are needed to prove convergence. The results we present here increase the number of states by a factor of 1000 relative to . There are currently available methods that we believe could give us another factor of 100-1000; moreover, we have noted in the analysis of our numerical results that most of the contributions to the matrix element come from a very small number of eigenfunctions. An analytical understanding of this phenomenon could greatly accelerate the calculation.
Finally, we note that, in principle, we could study the proper $`1/r^4`$ behavior at large $`r`$ by computing the $`1/N_c`$ corrections. In the past we have computed such corrections in some theories. However, in the present case such a computation seems to be a very large project indeed.
## Acknowledgments
The authors would like to acknowledge A. Hashimoto for several very useful conversations. The calculations of matrices were done with a C++ code written in part by F. Antonuccio. This work was supported in part by the US Department of Energy.
## 1 Introduction
In this paper we present a comprehensive approach to the stationary flows of the Korteweg–de Vries (KdV) equation. This is a quite classical subject, lying at the very heart of the modern theory of integrable systems in finite and infinite dimensions. Its formulation can be traced back at least to . Together with its generalizations, it has since been the subject of intensive study; see, for example, and references therein. It was realized that such systems are among the prototypes of the class of Algebraically Completely Integrable Hamiltonian Systems , and, in particular, it has been proven that the classical Jacobi formulas for projective embeddings of hyperelliptic Jacobian varieties find a very natural realization in such problems.
Hamiltonian aspects of those flows were studied in a number of papers, to cite a few . Two main approaches emerged. In the first one, a Hamiltonian structure was given to the stationary KdV flows by looking at their variational properties : such flows were regarded as classical Euler–Lagrange equations associated with suitable Lagrangian and Hamiltonian densities. In the second one (see, e.g., ), a set of suitable canonical coordinates was introduced on the stationary manifolds, somewhat dictated by the algebro-geometrical structure of the problem, and the fact that the flows were indeed Hamiltonian was verified at a later stage, via direct computation. In the same way, a set of action–angle variables was found.
After the discovery of the bi-Hamiltonian structure of the $`1+1`$-dimensional KdV hierarchy, the natural problem of finding a corresponding bi-Hamiltonian structure for its stationary flows was studied. A solution was found through an ingenious analysis based mainly on the recursion relations found by Alber , the introduction of special sets of coordinates, and the use of the Miura transformation.
The aim of this paper is to give a rather new and systematic perspective on this circle of ideas, focusing our attention on the bi-Hamiltonian aspects of the problem. The next section is devoted to a quick description, in a simple example, of the stationary reductions of KdV and to an illustration of the main properties of these systems. We then present the plan and the important points of the paper.
## 2 A Preliminary View
In this section we recall some known facts about the KdV hierarchy and its stationary reductions, and we present the problems to be tackled in the next sections.
The KdV equation is the most famous example in the class of the so-called integrable nonlinear PDEs. It possesses a number of remarkable properties, in particular:
1. It has an infinite sequence of integrals of motion;
2. It admits a Lax representation;
3. It is a bi-Hamiltonian system;
4. The integrals of motion are the coefficients of a Casimir of the Poisson pencil, so that the KdV equation can be seen as a Gel’fand–Zakharevich system (see below).
There are of course relations between these properties. For example, the conserved densities are the residues of the fractional powers of the Lax operator. Also, they can be extracted from the bi-Hamiltonian structure and shown to commute with respect to both Poisson brackets. The associated vector fields form the KdV hierarchy, whose first members are
$$\begin{array}{c}\frac{\partial u}{\partial t_1}=u_x,\qquad \frac{\partial u}{\partial t_3}=\frac{1}{4}(u_{xxx}-6uu_x)\quad \text{(KdV equation)}\\ \frac{\partial u}{\partial t_5}=\frac{1}{16}(u_{xxxxx}-10uu_{xxx}-20u_xu_{xx}+30u^2u_x).\end{array}$$
(2.1)
The KdV hierarchy can be used to find finite-dimensional reductions for the KdV equation, giving rise to explicit solutions. Indeed, the set of singular points of a (fixed) vector field of the hierarchy is a finite-dimensional manifold which is invariant under the flows of the other vector fields, due to the commutativity property. The (finite-dimensional) systems obtained by restricting the KdV hierarchy to such invariant manifolds are called the stationary reductions of KdV.
Let us consider explicitly the reduction $`\text{KdV}_5`$ corresponding to the third vector field of the hierarchy. The set of zeroes is given by
$$u_{xxxxx}-10uu_{xxx}-20u_xu_{xx}+30u^2u_x=0,$$
(2.2)
and its dimension is 5, since we can use the value of $`u`$, $`u_x`$, $`u_{xx}`$, $`u_{xxx}`$, and $`u_{xxxx}`$ at a fixed point $`x_0`$ (i.e., the Cauchy data) as global coordinates. For the sake of simplicity we put
$$u_0=u(x_0),u_1=u_x(x_0),u_2=u_{xx}(x_0),u_3=u_{xxx}(x_0),u_4=u_{xxxx}(x_0).$$
(2.3)
In order to compute the reduced equations of the first flow of (2.1), we differentiate it with respect to $`x`$ and use the constraint (2.2) and its differential consequences to eliminate all the derivatives of order higher than 4. We obtain the equations
$$\begin{array}{c}\frac{\partial u_0}{\partial t_1}=u_1,\qquad \frac{\partial u_1}{\partial t_1}=u_2,\qquad \frac{\partial u_2}{\partial t_1}=u_3,\qquad \frac{\partial u_3}{\partial t_1}=u_4,\\ \frac{\partial u_4}{\partial t_1}=10u_0u_3+20u_1u_2-30u_0^2u_1.\end{array}$$
(2.4)
In the same way, for the KdV equation we get
$$\begin{array}{c}\frac{\partial u_0}{\partial t_3}=\frac{1}{4}(u_3-6u_0u_1)\hfill \\ \frac{\partial u_1}{\partial t_3}=\frac{1}{4}(u_4-6u_0u_2-6u_1^2)\hfill \\ \frac{\partial u_2}{\partial t_3}=\frac{1}{4}(4u_0u_3+2u_1u_2-30u_0^2u_1)\hfill \\ \frac{\partial u_3}{\partial t_3}=\frac{1}{4}(4u_0u_4+6u_1u_3+2u_2^2-30u_0^2u_2-60u_0u_1^2)\hfill \\ \frac{\partial u_4}{\partial t_3}=\frac{1}{4}(10u_1u_4+10u_0^2u_3+10u_2u_3-100u_0u_1u_2-60u_1^3-120u_0^3u_1)\hfill \end{array}$$
(2.5)
As far as the restrictions of the other flows are concerned, it can be shown that they are linear combinations of (2.4) and (2.5).
It is not surprising that the above mentioned properties of the KdV equation hold also for the $`\text{KdV}_5`$ system. However, to the best of our knowledge, the way in which these properties pass from the KdV hierarchy to its stationary reductions, especially as far as the bi-Hamiltonian structure is concerned, has not been made completely clear. In any case, one can check that the functions
$$\begin{array}{c}H_0=\frac{1}{16}(-2u_2u_4+6u_0^2u_4+u_3^2-12u_0u_1u_3+16u_0u_2^2+12u_1^2u_2-60u_0^3u_2+36u_0^5)\hfill \\ H_1=\frac{1}{4}(2u_0u_4-2u_1u_3+u_2^2-20u_0^2u_2+15u_0^4)\hfill \\ H_2=u_4-10u_0u_2-5u_1^2+10u_0^3\hfill \end{array}$$
(2.6)
are integrals of motion for (2.4) and (2.5). These systems have also a Lax formulation, i.e., they can be written as
$$\frac{L}{t_i}=[A_i,L],i=1,3,$$
(2.7)
where the Lax matrix $`L`$ depends on a parameter $`\lambda `$, and is given by
$$L=\frac{1}{16}\left(\begin{array}{cc}4u_1\lambda +u_36u_0u_1& 16\lambda ^28u_0\lambda +6u_{0}^{}{}_{}{}^{2}2u_2\\ \begin{array}{c}16\lambda ^3+8u_0\lambda ^2+2\lambda (u_2u_{0}^{}{}_{}{}^{2})+\\ u_48u_0u_26u_{1}^{}{}_{}{}^{2}+6u_{0}^{}{}_{}{}^{3}\end{array}& 4u_1\lambda u_3+6u_0u_1\end{array}\right).$$
(2.8)
The matrices $`A_i`$ can be easily constructed from $`L`$ (see Section 4).
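This check is mechanical, and can be automated. The following sketch (ours, not part of the original text; it assumes Python with sympy and writes `u0`,…,`u4` for the coordinates (2.3)) rebuilds the flows (2.4) and (2.5) from the constraint (2.2) and verifies that the three functions (2.6) are conserved along both:

```python
import sympy as sp

u0, u1, u2, u3, u4 = sp.symbols('u0:5')
U = (u0, u1, u2, u3, u4)

# differential consequence of (2.2): the fifth derivative in terms of lower ones
u5 = 10*u0*u3 + 20*u1*u2 - 30*u0**2*u1

def Dx(f):
    """Total x-derivative on the constraint manifold (u5 eliminated)."""
    shift = (u1, u2, u3, u4, u5)
    return sp.expand(sum(sp.diff(f, U[k])*shift[k] for k in range(5)))

X1 = [Dx(v) for v in U]                    # the flow (2.4)
X3 = []                                    # the flow (2.5)
f = sp.Rational(1, 4)*(u3 - 6*u0*u1)       # right-hand side of the KdV equation
for _ in range(5):
    X3.append(f)
    f = Dx(f)

H2 = u4 - 10*u0*u2 - 5*u1**2 + 10*u0**3
H1 = sp.Rational(1, 4)*(2*u0*u4 - 2*u1*u3 + u2**2 - 20*u0**2*u2 + 15*u0**4)
H0 = sp.Rational(1, 16)*(-2*u2*u4 + 6*u0**2*u4 + u3**2 - 12*u0*u1*u3
                         + 16*u0*u2**2 + 12*u1**2*u2 - 60*u0**3*u2 + 36*u0**5)

for H in (H0, H1, H2):
    for X in (X1, X3):
        assert sp.expand(sum(sp.diff(H, U[k])*X[k] for k in range(5))) == 0
```

The operator `Dx` encodes the elimination procedure described above, so repeated application of it to the right-hand side of the KdV equation reproduces (2.5) line by line.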
Finally, there are two compatible Poisson structures giving a (bi)-Hamiltonian formulation of the $`\text{KdV}_5`$ systems. The corresponding Poisson tensors are
$$P_0=\left[\begin{array}{ccccc}0& 0& 0& 2& 0\\ 0& 0& 2& 0& 20u_0\\ 0& 2& 0& 20u_0& 20u_1\\ 2& 0& 20u_0& 0& 140u_{0}^{}{}_{}{}^{2}20u_2\\ 0& 20u_0& 20u_1& 140u_{0}^{}{}_{}{}^{2}+20u_2& 0\end{array}\right]$$
and
$$P_1=\left[\begin{array}{ccccc}0& \frac{1}{2}& 0& 3u_0& 6u_1\\ \frac{1}{2}& 0& 3u_0& 3u_1& 4u_215u_{0}^{}{}_{}{}^{2}\\ 0& 3u_0& 0& u_2+15u_{0}^{}{}_{}{}^{2}& u_3+30u_0u_1\\ 3u_0& 3u_1& u_215u_{0}^{}{}_{}{}^{2}& 0& \begin{array}{cc}u_440u_0u_2+& \\ 30u_{1}^{}{}_{}{}^{2}60u_{0}^{}{}_{}{}^{3}& \end{array}\\ 6u_1& 4u_2+15u_{0}^{}{}_{}{}^{2}& u_330u_0u_1& \begin{array}{cc}u_4+40u_0u_2& \\ 30u_{1}^{}{}_{}{}^{2}+60u_{0}^{}{}_{}{}^{3}& \end{array}& 0\end{array}\right].$$
If we call $`X_1`$ and $`X_3`$ the vector fields of $`\text{KdV}_5`$, then the following relations hold:
$$\begin{array}{c}P_0dH_2=0\\ X_1=P_0dH_1=P_1dH_2\\ X_3=P_0dH_0=P_1dH_1\\ P_1dH_0=0.\end{array}$$
(2.9)
They can be collected in the statement that the function $`H(\lambda ):=H_2\lambda ^2+H_1\lambda +H_0`$ is a Casimir of the Poisson pencil $`P_\lambda :=P_1-\lambda P_0`$, that is,
$$P_\lambda dH(\lambda )=0.$$
(2.10)
Therefore, $`X_1`$ and $`X_3`$ are the bi-Hamiltonian vector fields associated with a polynomial Casimir of a Poisson pencil of maximal rank. In a word, they are Gel’fand–Zakharevich (GZ) systems .
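Expanding the pencil condition in powers of $`\lambda `$ makes the equivalence between (2.9) and (2.10) completely explicit:

$$P_\lambda dH(\lambda )=(P_1-\lambda P_0)(dH_0+\lambda dH_1+\lambda ^2dH_2)=P_1dH_0+\lambda (P_1dH_1-P_0dH_0)+\lambda ^2(P_1dH_2-P_0dH_1)-\lambda ^3P_0dH_2,$$

so that (2.10) holds if and only if each coefficient vanishes, which is precisely the chain (2.9).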
The importance of the stationary reductions of the KdV hierarchy (and, more generally, of the stationary reductions of the Gel’fand–Dickey hierarchies) lies in the fact that the reduced equations can be solved by means of the classical method of separation of variables. This was noticed in the early works on the subject. It is also known how to construct the variables of separation starting from the Lax matrix. We will show that the separability of these systems is a particular instance of a general result, which is valid for quite a wide class of bi-Hamiltonian manifolds.
In Section 3 we give a rather unconventional presentation of the stationary reductions of KdV. Our priviledged starting point is a picture of the KP hierarchy as a system of ordinary differential equations, called the Central System (CS) in . Starting from there, by means of a double reduction process we can describe quite explicitly the stationary reductions of KdV, and, in Section 4, we are able to give a Lax representation of these systems, with a Lax matrix depending polynomially on a parameter. This representation is used to show that the flows are bi-Hamiltonian. This is done in two steps. First, in Section 5 we recall the bi-Hamiltonian structure on matrix polynomials and show that the Hamiltonian vector fields (with respect to the Poisson pencil) admit a Lax formulation. This property is conserved after a suitable bi-Hamiltonian reduction process. Then, in Section 6, we identify the phase space of the stationary reductions of KdV with a reduced bi-Hamiltonian manifold, and we show that they are GZ systems. In Section 7 we state (referring to for a more detailed discussion) a theorem ensuring that, under some additional assumptions, the GZ systems are separable in coordinates that are naturally associated with the bi-Hamiltonian structure. Finally, in Section 8 we show that this theorem can be applied to the stationary reductions of KdV, and that the variables of separation can be constructed algebraically.
Summing up, in this paper we present a somewhat self–contained approach to the study of stationary flows of KdV, and we use them as a laboratory to test ideas of the bi-Hamiltonian geometry, from the GZ theory to the separation of variables. In our opinion, such a set up provides a comprehensive formulation of results which, although for the most part already available in the literature, would perhaps acquire a deeper meaning under this perspective.
## 3 KdV Stationary Reductions
In this section we give a self-contained presentation of the stationary reductions of the KdV hierarchy, using the formalism developed in for the KP theory. Our starting point is the Central System (CS), a family of dynamical systems with a doubly infinite set of degrees of freedom (the coordinates $`H_l^k`$ below). A first (stationary) reduction gives rise to the $`\text{CS}_2`$ hierarchy, with a single infinite sequence of degrees of freedom. Then a further restriction leads to finite-dimensional systems that coincide with the stationary reductions of KdV.
We consider the space $``$ of sequences $`\{H^{(k)}\}_{k\geq 1}`$ of Laurent series having the form $`H^{(k)}=z^k+\sum _{l\geq 1}H_l^kz^{-l}`$, where $`H_l^k`$ are (complex) scalars that play the role of coordinates on $``$. On such phase space $``$ we define a family of vector fields as follows. We associate with a point $`\{H^{(k)}\}_{k\geq 1}`$ in $``$ the linear span $`H_+=\langle H^{(0)},H^{(1)},H^{(2)},\mathrm{}\rangle `$, where $`H^{(0)}=1`$. The defining equation for the $`j`$–th vector field $`X_j`$ of the family, to be referred to as the Central System (CS), is the invariance relation
$$\left(\frac{\partial }{\partial t_j}+H^{(j)}\right)H_+\subset H_+.$$
(3.1)
This relation is equivalent to the (explicit) equations
$$\frac{\partial H^{(k)}}{\partial t_j}=-H^{(j)}H^{(k)}+H^{(j+k)}+\sum _{l=1}^{k}H_l^jH^{(k-l)}+\sum _{l=1}^{j}H_l^kH^{(j-l)},\qquad k\geq 1.$$
(3.2)
###### Remark 3.1
From (3.2) it is evident that the exactness property
$$\frac{}{t_k}H^{(j)}=\frac{}{t_j}H^{(k)}$$
(3.3)
holds. Moreover, it can be shown that the flows of the CS commute.
###### Remark 3.2
There is a very tight relation between the CS and the linear flows on the Sato Grassmannian . This relation is discussed in , where the classical result of Sato on the linearization of the KP hierarchy is recovered from the point of view of the bi-Hamiltonian geometry.
Since the CS is a family of commuting vector fields, we can reduce it in many different ways. By means of a suitable combination of such reduction processes, the so-called fractional KdV hierarchies were obtained in . Now we will show how the stationary reductions of KdV can be derived from the CS. The commutativity of the flows implies that the set $`𝒵_2`$ of zeroes of the vector field $`X_2`$, defined by the quadratic equations
$$H^{(k+2)}-H^{(k)}H^{(2)}+\sum _{l=1}^{k}H_l^2H^{(k-l)}+H_1^kH^{(1)}+H_2^k=0,$$
(3.4)
is an invariant submanifold for CS. Moreover, on $`𝒵_2`$ we have
$$\frac{H^{(2)}}{t_j}=\frac{H^{(j)}}{t_2}=0,$$
(3.5)
due to the exactness property (3.3). Therefore, the manifold $`𝒵_2`$ is foliated by invariant submanifolds defined by the equation $`H^{(2)}=\text{constant}`$. Among all these leaves, the submanifold $`𝒮_2`$ defined by the simple constraint
$$H^{(2)}=z^2$$
(3.6)
is particularly relevant. At the points of $`𝒮_2`$ equation (3.4) takes the form
$$H^{(k+2)}=z^2H^{(k)}-H_1^kH^{(1)}-H_2^k,$$
(3.7)
and allows us to recursively compute the Laurent coefficients of $`H^{(k)}`$, for $`k>2`$, in terms of the coefficients of $`h:=H^{(1)}`$. Hence, $`𝒮_2`$ is parametrized by the coefficients $`\{h_l\}_{l\geq 1}`$ of $`h`$. Equation (3.7) also shows that $`z^2(H_+)\subset H_+`$, so that on $`𝒮_2`$ the elements $`\{z^{2j},z^{2j}h\}_{j\geq 0}`$ form a basis in $`H_+`$. Thus, we have that
$$H^{(k)}=p_k(z^2)+q_k(z^2)h\text{on }𝒮_2,$$
(3.8)
where $`p_k`$ and $`q_k`$ are polynomials. This can also be seen directly from equation (3.7). Moreover, there is only one Laurent series of the previous form satisfying the asymptotic condition $`H^{(k)}=z^k+O(z^{-1})`$ as $`z\to \mathrm{\infty }`$.
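The recursion (3.7) is straightforward to implement on truncated Laurent series. A minimal sympy sketch (ours; the truncation order `N` is an arbitrary choice) generates $`H^{(3)}`$ and $`H^{(5)}`$ on $`𝒮_2`$ and exhibits the decomposition (3.8) for $`H^{(3)}`$:

```python
import sympy as sp

z = sp.symbols('z')
N = 8                                  # truncation order of the series h
hs = sp.symbols(f'h1:{N+1}')
h = z + sum(hs[l-1]*z**-l for l in range(1, N+1))

def coeff(F, l):
    """Laurent coefficient H_l, i.e. the coefficient of z**(-l)."""
    return sp.expand(F).coeff(z, -l)

H = {1: h}
for k in (1, 3):                       # recursion (3.7): H^(k+2) from H^(k)
    H[k+2] = sp.expand(z**2*H[k] - coeff(H[k], 1)*h - coeff(H[k], 2))

# decomposition (3.8) for k = 3, with p_3 = -h_2 and q_3 = z^2 - h_1
assert sp.expand(H[3] - ((z**2 - hs[0])*h - hs[1])) == 0
```

Only the Laurent coefficients up to the truncation order are meaningful, but that is enough to produce the quantities $`H_l^k`$ needed below.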
###### Definition 3.3
(see ). The restriction of CS to the invariant submanifold $`𝒮_2`$ is called the $`\text{CS}_2`$ hierarchy.
The restricted vector fields are given by
$$\frac{\partial h}{\partial t_j}=-hH^{(j)}+H^{(j+1)}+\sum _{l=1}^{j}h_lH^{(j-l)}+H_1^j,\qquad j\geq 1,$$
(3.9)
where the $`H^{(j)}`$ must be written in terms of $`h`$ according to (3.8). If we denote by $`H_{-}`$ the span of the negative powers of $`z`$, and by $`\pi _{-}`$ the projection onto $`H_{-}`$ according to the decomposition $`H_+\oplus H_{-}`$, then the equations (3.9) can be written in the more compact form
$$\frac{\partial h}{\partial t_j}=\pi _{-}\left(q_jh^2\right).$$
(3.10)
Notice that $`H^{(2k)}=z^{2k}`$, so that (3.3) implies that the even flows of $`\text{CS}_2`$ are trivial.
The finite–dimensional systems that are the main subject of this paper are those obtained by restricting the $`\text{CS}_2`$ flows to the manifold of zeroes of the $`(2g+1)`$-st vector field of $`\text{CS}_2`$. We will call such systems the $`\text{KdV}_{2g+1}`$ systems, since they are (equivalent to) the stationary reductions of the KdV hierarchy, as we are going to show at the end of this section. The constraint which defines the phase space $`_{2g+1}`$ of the $`\text{KdV}_{2g+1}`$ system is
$$\frac{\partial h}{\partial t_{2g+1}}=\pi _{-}\left(q_{2g+1}h^2\right)=0.$$
(3.11)
A direct inspection shows that this constraint gives all the coefficients of $`h`$ in terms of the first $`(2g+1)`$, i.e., $`h_1,\mathrm{},h_{2g+1}`$. In other words, the dimension of the phase space of the $`\text{KdV}_{2g+1}`$ system equals $`2g+1`$. The equations are given by the first $`2g+1`$ components of (3.10), after substituting the constraints (3.11). In the case of $`\text{KdV}_5`$ there are two independent vector fields:
$$\begin{array}{cc}\frac{\partial h_1}{\partial t_1}=-2h_2\hfill & \frac{\partial h_1}{\partial t_3}=-2h_4+2h_1h_2\hfill \\ \frac{\partial h_2}{\partial t_1}=-2h_3-h_1^2\hfill & \frac{\partial h_2}{\partial t_3}=-2h_5+h_2^2+h_1^3\hfill \\ \frac{\partial h_3}{\partial t_1}=-2h_1h_2-2h_4\hfill & \frac{\partial h_3}{\partial t_3}=-2h_1h_4+4h_1^2h_2-2h_3h_2\hfill \\ \frac{\partial h_4}{\partial t_1}=-2h_5-h_2^2-2h_1h_3\hfill & \frac{\partial h_4}{\partial t_3}=-2h_3^2-2h_2h_4+2h_1h_2^2+h_1^4+h_1^2h_3\hfill \\ \frac{\partial h_5}{\partial t_1}=-4h_3h_2+2h_1^2h_2-4h_1h_4\hfill & \frac{\partial h_5}{\partial t_3}=2h_1^2h_4-4h_3h_4+2h_1^3h_2\hfill \end{array}$$
(3.12)
These are, up to the coordinate change (3.14), the equations (2.4) and (2.5).
We remark that along the flows of $`\text{KdV}_{2g+1}`$ the relations (3.3) take the form
$$\frac{H^{(2g+1)}}{t_j}=0,$$
(3.13)
showing that all the coefficients of $`H^{(2g+1)}`$ are integrals of motion. Therefore our presentation of the KdV stationary reductions carries directly the conserved quantities of the flows. Moreover, in the next section we will show that the Lax representation also arises in a natural way. We end this section with the following:
###### Remark 3.4
The usual KdV hierarchy in $`1+1`$ dimensions is described in as a projection of $`\text{CS}_2`$ along the integral curves of the first vector field of the hierarchy,
$$\frac{\partial h}{\partial t_1}=-h^2+z^2+2h_1.$$
Indeed, if we put $`x=t_1`$ and $`u=2h_1`$, then the previous equation takes the form $`h_x+h^2=u+z^2`$ and allows us to write the $`h_j`$ as polynomials in $`u`$ and its $`x`$-derivatives:
$$\begin{array}{c}h_1=\frac{1}{2}u\hfill \\ h_2=-\frac{1}{4}u_x\hfill \\ h_3=\frac{1}{8}(u_{xx}-u^2)\hfill \\ h_4=-\frac{1}{16}(u_{xxx}-4uu_x)\hfill \\ h_5=\frac{1}{32}(u_{xxxx}-6uu_{xx}-5u_x^2+2u^3)\hfill \\ \mathrm{}\hfill \end{array}$$
(3.14)
Thus equations (3.9) become partial differential equations for the variable $`u`$, and are the KdV hierarchy. But we can also use the system (3.14) to recover $`\text{CS}_2`$ from the KdV hierarchy, so that we can pass back and forth from one hierarchy to the other. This shows that the $`\text{KdV}_{2g+1}`$ systems that we have introduced coincide with the usual stationary reductions of KdV. The first $`(2g+1)`$ equations of the system (3.14) represent the change between our coordinates $`(h_1,\mathrm{},h_{2g+1})`$ and the ones usually considered in the literature, namely $`(u,u_x,\mathrm{},u^{(2g)})`$.
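The list (3.14) can also be generated mechanically: matching the coefficient of $`z^{-l}`$ in $`h_x+h^2=u+z^2`$ gives $`h_l^{\prime }+\sum _{i=1}^{l-1}h_ih_{l-i}+2h_{l+1}=0`$, with $`h_1=u/2`$. A short sympy sketch of ours implementing this recursion:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)

h = {1: u/2}                 # z^0 coefficient of h_x + h^2 = u + z^2
for l in range(1, 5):        # z^(-l): h_l' + sum h_i h_{l-i} + 2 h_{l+1} = 0
    quad = sum(h[i]*h[l-i] for i in range(1, l))
    h[l+1] = sp.expand(-(sp.diff(h[l], x) + quad)/2)

for l in sorted(h):
    print(f'h{l} =', h[l])   # reproduces the list (3.14)
```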
## 4 The Lax Representation
In this section we show that there is a quite natural (Zakharov-Shabat) zero-curvature representation for the $`\text{CS}_2`$ system, entailing a Lax representation for the $`\text{KdV}_{2g+1}`$ hierarchy.
We know from the previous section that in the $`\text{CS}_2`$ theory every element in $`H_+`$ can be written as a linear combination of 1 and $`h`$ with coefficients that are polynomials in $`\lambda :=z^2`$. Then to each point of the manifold $`𝒮_2`$ (that is, to each series $`h=z+_{l1}h_lz^l`$) we can associate a family of $`2\times 2`$ matrices $`𝖵^{(j)}(\lambda )`$ depending polynomially on $`\lambda `$, defined by the relation
$$\left(\frac{}{t_j}+H^{(j)}\right)\left[\begin{array}{c}1\\ h\end{array}\right]=𝖵^{(j)}\left[\begin{array}{c}1\\ h\end{array}\right].$$
(4.1)
Since the even flows are trivial, we will be interested only in the matrices of odd index. The first three of them are given by
$`𝖵^{(1)}=\left[\begin{array}{cc}0& 1\\ \lambda +2h_1& 0\end{array}\right]\qquad 𝖵^{(3)}=\left[\begin{array}{cc}-h_2& \lambda -h_1\\ \lambda ^2+h_1\lambda +2h_3-h_1^2& h_2\end{array}\right]`$
$`𝖵^{(5)}=\left[\begin{array}{cc}-h_2\lambda -h_4+h_1h_2& \lambda ^2-h_1\lambda -h_3+h_1^2\\ \lambda ^3+h_1\lambda ^2+h_3\lambda +2h_5-2h_1h_3-h_2^2+h_1^3& h_2\lambda -h_1h_2+h_4\end{array}\right].`$
The commutativity of the flows and the “abelian” zero-curvature relation (3.3) imply that
$$\left(\frac{\partial }{\partial t_j}𝖵^{(k)}-\frac{\partial }{\partial t_k}𝖵^{(j)}+[𝖵^{(k)},𝖵^{(j)}]\right)\left[\begin{array}{c}1\\ h\end{array}\right]=0.$$
(4.5)
Since the entries of the matrix appearing in the previous equation are polynomials in $`\lambda `$, and the elements $`\{\lambda ^j,\lambda ^jh\}_{j\geq 0}`$ are linearly independent in $`H_+`$, it follows that the zero–curvature relations
$$\frac{\partial }{\partial t_j}𝖵^{(k)}-\frac{\partial }{\partial t_k}𝖵^{(j)}+[𝖵^{(k)},𝖵^{(j)}]=0$$
(4.6)
hold.
If we restrict to the set $`_{2g+1}`$ of the stationary points of the $`(2g+1)`$–st vector field of $`\text{CS}_2`$, the zero–curvature representation naturally gives rise to Lax equations for the matrix $`𝖵^{(2g+1)}`$,
$$\frac{}{t_k}𝖵^{(2g+1)}=[𝖵^{(k)},𝖵^{(2g+1)}].$$
(4.7)
The following proposition will be useful in Section 6, and shows that these Lax equations faithfully represent the $`\text{KdV}_{2g+1}`$ system.
###### Proposition 4.1
The matrices $`𝖵^{(2k+1)}`$ of the $`\text{CS}_2`$ hierarchy have the following properties:
1. The matrix $`𝖵^{(2k+1)}`$ depends only on $`(h_1,\mathrm{},h_{2k+1})`$, and the map $`(h_1,\mathrm{},h_{2k+1})\mapsto 𝖵^{(2k+1)}`$ is injective;
2. The trace of $`𝖵^{(2k+1)}`$ is zero;
3. For $`i\leq k`$ one has
$$𝖵^{(2i+1)}=(\lambda ^{i-k}𝖵^{(2k+1)})_+-\alpha _{ik}\left[\begin{array}{cc}0& 0\\ 1& 0\end{array}\right],$$
(4.8)
where $`()_+`$ denotes the projection on the nonnegative powers of $`\lambda `$, and $`\alpha _{ik}`$ is the entry $`(1,2)`$ of the coefficient of $`\lambda ^{k-i-1}`$ in $`𝖵^{(2k+1)}`$.
Proof. First of all we observe that, almost by definition,
$$𝖵^{(2k+1)}=\left[\begin{array}{cc}p_{2k+1}& q_{2k+1}\\ \lambda ^{k+1}+\sum _{l=0}^kh_{2l+1}\lambda ^{k-l}+\sum _{l=1}^kh_{2l}p_{2k-2l+1}+H_1^{2k+1}& \sum _{l=1}^kh_{2l}q_{2k-2l+1}\end{array}\right].$$
(4.9)
Then, we notice that equation (3.7) implies the recursion formulas
$$p_{2k+1}=\lambda p_{2k-1}-H_2^{2k-1},\qquad q_{2k+1}=\lambda q_{2k-1}-H_1^{2k-1}.$$
(4.10)
Thus, by induction, we obtain
$$p_{2k+1}=-\sum _{l=1}^{k}H_2^{2l-1}\lambda ^{k-l},\qquad q_{2k+1}=\lambda ^k-\sum _{l=1}^{k}H_1^{2l-1}\lambda ^{k-l}.$$
(4.11)
In order to express the coefficients $`H_2^{2l-1}`$ and $`H_1^{2l-1}`$ in terms of the $`h_l`$, we use the identity
$$H_l^{2k-1}=H_{2k+l-2}^1-\sum _{i=1}^{k-1}H_{l+2i-2}^1H_1^{2k-2i-1},$$
which can be proved by induction on $`k`$ using again (3.7). In particular, we have
$`H_1^{2k-1}`$ $`=`$ $`H_{2k-1}^1-\sum _{i=1}^{k-1}H_{2i-1}^1H_1^{2k-2i-1}`$ (4.12)
$`H_2^{2k-1}`$ $`=`$ $`H_{2k}^1-\sum _{i=1}^{k-1}H_{2i}^1H_1^{2k-2i-1}.`$ (4.13)
This allows us to control the appearance of the $`h_i=H_i^1`$ in $`p_{2l+1}`$ and $`q_{2l+1}`$, and leads to the proof of the first assertion.
The second statement amounts to $`p_{2k+1}+\sum _{l=1}^kh_{2l}q_{2k-2l+1}=0`$, and is easily proved by inserting (4.11) and using (4.13).
As far as the last assertion is concerned, we use the following consequences of (4.11):
$$(\lambda ^{i-k}p_{2k+1})_+=p_{2i+1},\qquad (\lambda ^{i-k}q_{2k+1})_+=q_{2i+1}.$$
(4.14)
This gives, using (4.9),
$$(\lambda ^{i-k}𝖵^{(2k+1)})_+=𝖵^{(2i+1)}-H_1^{2i+1}\left[\begin{array}{cc}0& 0\\ 1& 0\end{array}\right].$$
(4.15)
Since $`-H_1^{2i+1}`$ is the coefficient of $`\lambda ^{k-i-1}`$ in $`q_{2k+1}`$, equation (4.9) shows that we are done.
$`\mathrm{}`$
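For small $`g`$ the proposition can also be tested by direct computation. The sketch below (ours, building on the series recursion of Section 3; it assumes sympy) assembles $`𝖵^{(3)}`$ and $`𝖵^{(5)}`$ from (4.9)-(4.11) and checks the first two assertions:

```python
import sympy as sp

z, lam = sp.symbols('z lambda')
N = 10
hs = sp.symbols(f'h1:{N+1}')
h = z + sum(hs[l-1]*z**-l for l in range(1, N+1))
c = lambda F, l: sp.expand(F).coeff(z, -l)   # Laurent coefficient H_l

H = {1: h}; p = {1: sp.Integer(0)}; q = {1: sp.Integer(1)}
for k in (1, 3):
    p[k+2] = sp.expand(lam*p[k] - c(H[k], 2))                  # recursion (4.10)
    q[k+2] = sp.expand(lam*q[k] - c(H[k], 1))
    H[k+2] = sp.expand(z**2*H[k] - c(H[k], 1)*h - c(H[k], 2))  # recursion (3.7)

def V(g):
    """The Lax matrix V^(2g+1) of (4.9); valid here for g = 1, 2."""
    ll = (lam**(g+1) + sum(hs[2*l]*lam**(g-l) for l in range(0, g+1))
          + sum(hs[2*l-1]*p[2*(g-l)+1] for l in range(1, g+1))
          + c(H[2*g+1], 1))
    lr = sum(hs[2*l-1]*q[2*(g-l)+1] for l in range(1, g+1))
    return sp.Matrix([[p[2*g+1], q[2*g+1]], [sp.expand(ll), sp.expand(lr)]])

V3, V5 = V(1), V(2)
assert V3.trace().expand() == 0 and V5.trace().expand() == 0   # tracelessness
assert V3 == sp.Matrix([[-hs[1], lam - hs[0]],                 # matrix of Section 4
                        [lam**2 + hs[0]*lam + 2*hs[2] - hs[0]**2, hs[1]]])
```

Testing item 3 amounts to truncating $`\lambda ^{i-k}𝖵^{(2k+1)}`$ to its nonnegative powers in $`\lambda `$ and comparing with $`𝖵^{(2i+1)}`$, which can be done with the same dictionaries.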
So we have seen that the double reduction process of the Central System outlined in Section 3 provides us with a natural Lax representation of the (commuting) vector fields of the $`\text{KdV}_{2g+1}`$ system. Actually, as was explained in , the Central System can be seen as an outgrowth of the bi-Hamiltonian properties of the KdV hierarchy. It is thus natural to look for a bi-Hamiltonian structure of the $`\text{KdV}_{2g+1}`$ system. Unfortunately, we are not in a position to derive such a property from the Central System itself, but rather we have to rely on the Lax representation discussed so far. Namely, in the next two sections we will establish the bi-Hamiltonian nature of $`\text{KdV}_{2g+1}`$, showing that it comes from the general theory of bi-Hamiltonian systems defined on matrices depending polynomially on a parameter.
## 5 Lax Equations and bi-Hamiltonian Systems
In the previous section we have associated with every point of the phase space $`_{2g+1}`$ of $`\text{KdV}_{2g+1}`$ a Lax matrix $`𝖵^{(2g+1)}`$, and we have seen that this matrix gives a Lax representation of the flows. To give these flows a bi-Hamiltonian formulation, we will address in this section a general problem, concerning the relation between Lax matrices and bi-Hamiltonian structures. We will describe a class of bi-Hamiltonian manifolds whose (bi-)Hamiltonian flows have a Lax formulation, and show that this formulation survives a reduction process of Marsden-Ratiu type. Since $`𝖵^{(2g+1)}`$ depends polynomially on $`\lambda `$, it is quite natural to consider the multi-Hamiltonian structures defined on $`𝔤`$–valued polynomials (see ), where $`𝔤`$ is a Lie algebra of matrices such that the trace of the product is nondegenerate. More precisely, for a fixed matrix $`A𝔤`$, let us consider the space
$$_A:=\{X(\lambda )=\lambda ^{n+1}A+\sum _{i=0}^{n}\lambda ^iX_i\mid X_i\in 𝔤\},$$
(5.1)
which is clearly in a 1-1 correspondence with the space $`\oplus _{i=0}^n𝔤`$ of $`(n+1)`$–tuples of matrices in $`𝔤`$. The tangent and the cotangent space at a point of $`_A`$ can also be identified with $`\oplus _{i=0}^n𝔤`$, using the pairing
$$(V_0,\mathrm{},V_n),(W_0,\mathrm{},W_n)=\underset{i=0}{\overset{n}{}}\mathrm{Tr}(V_iW_i).$$
(5.2)
If $`F`$ is a function on $`_A`$, we will denote its differential by
$$dF=\left(\frac{\partial F}{\partial X_0},\mathrm{},\frac{\partial F}{\partial X_n}\right).$$
It is known that on $`_A`$ there is an $`(n+2)`$–dimensional web of (compatible) Poisson brackets, and that this web is associated with a family of classical $`R`$–matrices. Nevertheless, it turns out that in our case the relevant Poisson pair is given by the first two brackets of the above mentioned family. The first Poisson tensor, as a map from the cotangent to the tangent space, is given by
$$P_0:\left(\begin{array}{c}W_0\\ W_1\\ \mathrm{}\\ \mathrm{}\\ W_{n1}\end{array}\right)\left(\begin{array}{c}\dot{X}_0\\ \dot{X}_1\\ \mathrm{}\\ \mathrm{}\\ \dot{X}_{n1}\end{array}\right)=\left(\begin{array}{ccccc}[X_1,]& [X_2,]& \mathrm{}& \mathrm{}& [A,]\\ [X_2,]& \mathrm{}& \mathrm{}& [A,]& 0\\ \mathrm{}& \mathrm{}& & & \\ \mathrm{}& & & & \\ [A,]& 0& \mathrm{}& & 0\end{array}\right)\left(\begin{array}{c}W_0\\ W_1\\ \mathrm{}\\ \mathrm{}\\ W_{n1}\end{array}\right)\text{ ,}$$
(5.3)
while the second one is
$$P_1:\left(\begin{array}{c}W_0\\ W_1\\ \mathrm{}\\ \mathrm{}\\ W_{n1}\end{array}\right)\left(\begin{array}{c}\dot{X}_0\\ \dot{X}_1\\ \mathrm{}\\ \mathrm{}\\ \dot{X}_{n1}\end{array}\right)=\left(\begin{array}{ccccc}[X_0,]& 0& \mathrm{}& \mathrm{}& 0\\ 0& [X_2,]& [X_3,]& \mathrm{}& [A,]\\ 0& [X_3,]& \mathrm{}& & \\ \mathrm{}& \mathrm{}& & & \\ 0& [A,]& \mathrm{}& \mathrm{}& 0\end{array}\right)\left(\begin{array}{c}W_0\\ W_1\\ \mathrm{}\\ \mathrm{}\\ W_{n1}\end{array}\right)\text{ .}$$
(5.4)
The associated Poisson brackets are given by $`\{F,G\}_i=\langle dF,P_idG\rangle `$, where $`i=0,1`$, and $`F`$, $`G`$ are functions on $`_A`$. The Poisson tensors (5.3) and (5.4) satisfy the remarkable property that every linear combination of them is still a Poisson tensor. For this reason one says that they are compatible, and that $`_A`$ is a bi-Hamiltonian manifold.
Let us consider now the Poisson pencil $`P_\lambda :=P_1-\lambda P_0`$. It is important to notice that its Hamiltonian vector fields admit a Lax representation, as shown in the following:
###### Proposition 5.1
Let $`F`$ be a function on $`_A`$, and let $`{\displaystyle \frac{\partial X(\lambda )}{\partial t_\lambda }}=P_\lambda dF`$ be the Hamiltonian vector field associated by $`P_\lambda `$ to $`F`$. Then,
$$\frac{\partial X(\lambda )}{\partial t_\lambda }=[\frac{\partial F}{\partial X_0},X(\lambda )].$$
(5.5)
Proof. Use the expressions (5.3) and (5.4) to compute the vector field $`P_\lambda dF`$, then identify the parameter $`\lambda `$ appearing in the Poisson pencil with the $`\lambda `$ in $`X(\lambda )`$.
$`\mathrm{}`$
At this point we want to enlarge the class of bi-Hamiltonian manifolds giving rise to systems admitting a Lax representation, having also in mind the case of the stationary reductions of KdV. To this aim, it is important to recall a reduction theorem allowing us to “move” the bi-Hamiltonian structure from a given (“big”) manifold to a smaller one. This result is a particular case of a theorem by Marsden and Ratiu for Poisson manifolds , and can be applied to a general bi-Hamiltonian manifold. Here, for the sake of simplicity, we will describe it only in the case at hand. The central point is that a Lax representation can be found also for the vector fields that are Hamiltonian with respect to the reduced Poisson pencil.
The first step of the reduction process is to fix a symplectic leaf $`𝒮`$ of $`P_0`$. Then we introduce the distribution $`D=P_1(\text{Ker}P_0)`$, which is integrable thanks to the compatibility between $`P_0`$ and $`P_1`$. From the explicit form (5.3)–(5.4) of the Poisson tensors, it is easy to see that the vector fields in $`D`$ have the Lax form $`\dot{X}(\lambda )=[W_0,X(\lambda )]`$, for a suitable $`W_0`$. Let us denote by $`E`$ the intersection of $`D`$ with $`T𝒮`$. The statement of the bi-Hamiltonian reduction theorem is that the quotient manifold $`𝒩=𝒮/E`$ inherits from $`_A`$ a bi-Hamiltonian structure. In order to compute the reduced Poisson bracket $`\{f,g\}_\lambda `$ between two functions $`f`$, $`g`$ on $`𝒩`$, we consider them as functions on $`𝒮`$, invariant along the leaves of $`E`$. Then, we choose functions $`F`$ and $`G`$ on $`_A`$ which extend $`f`$ and $`g`$, and annihilate the distribution $`D`$. Their Poisson bracket $`\{F,G\}_\lambda `$ is still invariant along $`D`$, and therefore defines a function on $`𝒩`$, which is independent of the choice of the prolongations $`F`$ and $`G`$.
Let us consider now a given Hamiltonian vector field $`X_f`$ (with respect to the Poisson pencil) on $`𝒩`$, with Hamiltonian $`f`$. If $`F`$ is a prolongation of $`f`$, the vector field $`X_F:=P_\lambda dF`$ is easily seen to be tangent to $`𝒮`$ and to project onto $`X_f`$. We are going to show that $`X_f`$ inherits a Lax representation from the one of $`X_F`$. To this aim, we suppose that there exists a submanifold $`𝒬`$ of $`𝒮`$, which is transversal to the distribution $`E`$. In other words, $`𝒬`$ is the image of a section of the bundle $`\pi :𝒮𝒩`$. Then $`𝒬`$ is diffeomorphic to $`𝒩`$, and inherits a bi-Hamiltonian structure. The representative of $`X_f`$ on $`𝒬`$ is simply found by decomposing the restriction of $`X_F`$ to $`𝒮`$ according to the splitting $`T𝒮=T𝒬E`$. Since we have seen that $`E`$ is spanned by vector fields having a Lax form, we have proved the following:
###### Proposition 5.2
Let $`𝒬𝒮`$ be transversal to $`E`$. Then, the vector fields on $`𝒩`$ which are Hamiltonian with respect to the Poisson pencil admit a Lax representation on $`𝒬`$.
Therefore the bi-Hamiltonian reduction implies, in this case, a reduction of the Lax formulation.
## 6 The bi-Hamiltonian Structure of the KdV Stationary Reductions
The aim of this section is to show that the $`\text{KdV}_{2g+1}`$ systems introduced in Section 3 admit a bi-Hamiltonian formulation. To do this, we are going to exploit the Lax representation found in Section 4 and the results of the preceding section.
The form of the Lax matrix $`𝖵^{(2g+1)}`$ found in Section 3 suggests to choose $`𝔤=𝔰𝔩(2)`$, $`n=g`$, and
$$A=\left[\begin{array}{cc}0& 0\\ 1& 0\end{array}\right].$$
Therefore, the dimension of $`_A`$ is $`3(g+1)`$. The Lax matrix $`𝖵^{(2g+1)}`$ defines an embedding of the $`\text{KdV}_{2g+1}`$ phase space into $`_A`$. At this point two natural questions arise:
1. Does this submanifold inherit from $`_A`$ the bi-Hamiltonian structure?
2. If so, are the vector fields of $`\text{KdV}_{2g+1}`$ bi-Hamiltonian with respect to this structure?
We will see that the answer to both questions is yes.
In order to answer the first question we need a careful description of the symplectic leaves of $`P_0`$, as given by
###### Lemma 6.1
The symplectic leaves of $`P_0`$ have dimension $`2(g+1)`$. Moreover, let $`H(\lambda )`$ be the function on $`_A`$ defined as $`H(X(\lambda )):=\frac{1}{2}\mathrm{Tr}X(\lambda )^2`$, and let $`H_i`$ be the coefficient of $`\lambda ^i`$ in $`H(\lambda )`$. Then, the functions $`H_{2g+1},\mathrm{},H_{g+1}`$ are functionally independent Casimirs of $`P_0`$. Consequently, the symplectic leaves of $`P_0`$ are the level surfaces of these Casimirs.
Proof. From (5.3) the kernel of $`P_0`$ is easily seen to be given by the covectors $`[W_0,\mathrm{},W_g]^T`$ such that $`W_i=\alpha _iA+\sum _{l=1}^i\alpha _{i-l}X_{g+1-l}`$, where the $`\alpha _i`$, $`i=0,\mathrm{},g`$, are arbitrary. This shows that $`\text{dim}(\text{Ker}P_0)=g+1`$, so that the dimension of the symplectic leaves is $`2(g+1)`$. In order to check that $`H_{2g+1},\mathrm{},H_{g+1}`$ are Casimirs, it is sufficient to verify that the differential of $`H_i`$ is the 1-form $`[X_i,X_{i-1},\mathrm{},X_{i-g}]^T`$, where $`X_k:=0`$ if $`k<0`$.
$`\mathrm{}`$
One can easily show that the symplectic leaf defined by $`H_i=c_i`$, with $`g+1\leq i\leq 2g+1`$, can be parametrized as
$$X(\lambda )=\lambda ^{g+1}A+\sum _{j=0}^{g}\lambda ^j\left[\begin{array}{cc}p_j& r_j\\ q_j& -p_j\end{array}\right],$$
(6.1)
where $`p_j`$ and $`q_j`$ are free parameters, and $`r_j`$ is a function of $`(p_{j+1},q_{j+1},\mathrm{},p_g,q_g)`$ and the values $`(c_{g+j+1},\mathrm{},c_{2g+1})`$ of the Casimirs.
As far as the distribution $`D=P_1(\text{Ker}P_0)`$ is concerned, in this case it is tangent to the symplectic leaves of $`P_0`$. Indeed, from the explicit form (5.3)–(5.4) of the Poisson tensors it is easy to see that $`D`$ is the 1-dimensional distribution spanned by the vector field
$$\dot{X}(\lambda )=[A,X(\lambda )].$$
(6.2)
This also shows that the integral leaves of $`D`$ are simply the orbits of the action of the isotropy subgroup of $`A`$ by simultaneous conjugation, but we will never use this fact.
Now we are ready to endow the phase space $`_{2g+1}`$ of $`\text{KdV}_{2g+1}`$ with the structure of a bi-Hamiltonian manifold. This follows from the fact that the map assigning to each point of $`_{2g+1}`$ the corresponding Lax matrix $`𝖵^{(2g+1)}`$ defines a submanifold of a suitable symplectic leaf of $`P_0`$, which is transversal to the distribution $`E`$. This is shown in the following:
###### Proposition 6.2
Let us take the symplectic leaf $`\stackrel{~}{𝒮}`$ defined by $`H_{2g+1}=1`$ and $`H_i=0`$ for $`g+1\leq i\leq 2g`$. Then, $`𝖵^{(2g+1)}(h_1,\mathrm{},h_{2g+1})\in \stackrel{~}{𝒮}`$ for all $`(h_1,\mathrm{},h_{2g+1})\in _{2g+1}`$. Moreover, the image $`\stackrel{~}{𝒬}`$ of the previous map is transversal to $`E`$.
Proof. By the definition of $`\stackrel{~}{𝒮}`$, we have to show that $`\frac{1}{2}\mathrm{Tr}\left(𝖵^{(2g+1)}\right)^2=\lambda ^{2g+1}+\sum _{i=0}^gH_i\lambda ^i`$. To this aim, we observe that equation (4.1) and the stationarity in $`t_{2g+1}`$ imply that $`H^{(2g+1)}(z)`$ is an eigenvalue of $`𝖵^{(2g+1)}`$. The other eigenvalue is given by $`H^{(2g+1)}(-z)`$, because $`𝖵^{(2g+1)}`$ depends only on $`\lambda =z^2`$. Therefore, $`\mathrm{Tr}\left(𝖵^{(2g+1)}\right)^2=(H^{(2g+1)}(z))^2+(H^{(2g+1)}(-z))^2`$ has the desired form. Finally, referring to the parametrization (6.1), one can easily prove that the submanifold $`p_g=0`$ is transversal to the distribution $`E`$.
$`\mathrm{}`$
Hence, the $`\text{KdV}_{2g+1}`$ phase space inherits from $`\stackrel{~}{𝒬}`$ (and from the quotient space $`\stackrel{~}{𝒩}=\stackrel{~}{𝒮}/E`$) a bi-Hamiltonian structure. To compute this structure, it is convenient to use the formalism discussed in , whose aim is to avoid dealing with the explicit form of the projection $`\pi :\stackrel{~}{𝒮}\stackrel{~}{𝒩}`$.
Now we want to show that the $`\text{KdV}_{2g+1}`$ flows are indeed bi-Hamiltonian with respect to the above Poisson pencil. The Lax representation (4.7) and the form (4.8) of the Lax pair suggest to consider the vector fields on $`_A`$ given by
$$\frac{\partial X(\lambda )}{\partial t_i}=[\left(\lambda ^{i-g}X(\lambda )\right)_+,X(\lambda )],\qquad i=-1,0,\mathrm{},g-1.$$
(6.3)
They are Hamiltonian with respect to the Poisson pencil on $`_A`$, with Hamiltonian function given by $`X(\lambda )\mapsto \left(\lambda ^{i-g}H(X(\lambda ))\right)_+`$, where $`H(X(\lambda ))=\frac{1}{2}\mathrm{Tr}X(\lambda )^2`$. Furthermore, we can state
###### Proposition 6.3
On the bi-Hamiltonian manifold $`_A`$ the function $`H(\lambda )=\sum _{i=0}^{2g+1}H_i\lambda ^i`$ is a Casimir of the Poisson pencil. The bi-Hamiltonian vector field $`Y_{g-i}:=P_0dH_{i-1}=P_1dH_i`$ has the Lax representation (6.3).
Proof. One can easily see that $`dH(\lambda )=[X(\lambda ),\lambda X(\lambda ),\mathrm{},\lambda ^gX(\lambda )]^T`$. Thus from (5.3) and (5.4) it follows that $`P_\lambda dH(\lambda )=0`$. The vector fields $`Y_i`$ are Hamiltonian with respect to $`P_\lambda `$, since $`Y_{g-i}=P_1dH_i=P_\lambda d(\lambda ^{-i}H(\lambda ))_+`$. Thus the Lax representation (6.3) is a consequence of Proposition 5.1.
$`\mathrm{}`$
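Note also that the Lax form (6.3) makes the conservation of the $`H_i`$ transparent: each flow is isospectral, since

$$\frac{\partial }{\partial t_i}\frac{1}{2}\mathrm{Tr}X(\lambda )^2=\mathrm{Tr}\left(X(\lambda )[(\lambda ^{i-g}X(\lambda ))_+,X(\lambda )]\right)=0$$

by the cyclic property of the trace.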
###### Remark 6.4
The previous proposition is a particular case of a general result , stating that Ad-invariant polynomial functions on a Lie algebra $`𝔤`$ give rise to Casimirs of the Poisson pencil on $`_A`$.
In order to obtain the $`\text{KdV}_{2g+1}`$ system from the bi-Hamiltonian vector fields (6.3), we remark that, from general results of the bi-Hamiltonian theory:
1. The functions $`H_i`$ are invariant along the distribution $`E`$, and therefore can be projected on the quotient $`\stackrel{~}{𝒩}`$.
2. The vector fields (6.3) are tangent to $`\stackrel{~}{𝒮}`$ and project onto $`\stackrel{~}{𝒩}`$.
3. Their projections are the bi-Hamiltonian vector fields associated with the projected functions.
We observe from (6.3) that the vector field $`Y_{-1}=P_0dH_g`$ is tangent to the distribution $`E`$. This means that the function $`H_g`$, on the quotient $`\stackrel{~}{𝒩}`$, is a Casimir of the reduction of $`P_0`$. Hence, the polynomial $`H_0+H_1\lambda +\mathrm{}+H_g\lambda ^g`$ is a Casimir of the reduced Poisson pencil. The other vector fields in (6.3) project on the stationary reductions of KdV, as shown in
###### Proposition 6.5
The projections on $`\stackrel{~}{𝒩}`$ of the vector fields (6.3), for $`i=0,\mathrm{},g-1`$, coincide with the $`\text{KdV}_{2g+1}`$ systems.
Proof. The right place to compare the two hierarchies of vector fields is the transversal submanifold $`\stackrel{~}{𝒬}`$ defined as the image of the map $`𝖵^{(2g+1)}:_{2g+1}\stackrel{~}{𝒮}`$. Hence, we must project the Lax equations (6.3) on $`T\stackrel{~}{𝒬}`$ along $`E`$. This leads to
$$[\left(\lambda ^{i-g}X(\lambda )\right)_+,X(\lambda )]-\alpha [A,X(\lambda )],$$
(6.4)
where $`\alpha `$ is fixed by the condition that this vector be tangent to $`\stackrel{~}{𝒬}`$. This means that the entry $`(1,1)`$ of the coefficient of $`\lambda ^g`$ in (6.4) must be zero. If we write $`[\left(\lambda ^{i-g}X(\lambda )\right)_+,X(\lambda )]=[X(\lambda ),\left(\lambda ^{i-g}X(\lambda )\right)_{-}]`$, then we obtain that $`\alpha `$ is the entry $`(1,2)`$ of the $`\lambda ^{g-i-1}`$–coefficient of $`X(\lambda )`$, so that equation (4.8) concludes the proof.
$`\mathrm{}`$
Therefore, we have shown that the $`\text{KdV}_{2g+1}`$ flows are bi-Hamiltonian. Moreover, they are associated with a Casimir of the Poisson pencil, having a polynomial dependence on $`\lambda `$. Since this is a particular instance of a general theory developed by Gel’fand and Zakharevich , we will say that the vector fields of $`\text{KdV}_{2g+1}`$ are GZ systems. The next section is devoted to the separability of such systems.
## 7 Separability of bi-Hamiltonian Systems
We have just seen that the stationary reductions of KdV are examples of GZ systems. In this section we show that the bi-Hamiltonian structure of such systems allows one to solve them by separation of variables. Under special circumstances, separability of GZ systems was proven in . We refer to for complete proofs and a more detailed discussion.
Let $``$ be a $`(2n+1)`$-dimensional manifold endowed with a pencil $`P_\lambda =P_1-\lambda P_0`$ of Poisson tensors. We suppose that the rank of $`P_\lambda `$ is generically $`2n`$, so that (locally) there exists a polynomial Casimir function $`H(\lambda )=\sum _{i=0}^nH_i\lambda ^i`$ of $`P_\lambda `$ (see ). The associated GZ systems $`P_0dH_i=P_1dH_{i+1}`$ are obviously tangent to the symplectic leaves of $`P_0`$, and give rise to Liouville integrable systems. Since $`H_n`$ is a Casimir of $`P_0`$, such leaves are the level surfaces of $`H_n`$. Let us denote by $`\omega `$ the symplectic form given by the restriction of $`P_0`$ to a (fixed) symplectic leaf; if $`X_f:=P_0df`$, where $`f`$ is any function on $``$, then
$$\omega (X_f,X_g)=\{f,g\}_0.$$
In order to exploit the existence of the other Poisson bracket, we make an additional assumption. We suppose that there exists a vector field $`Z`$ on $``$ such that
1. It is transversal to the symplectic leaves of $`P_0`$;
2. The functions invariant along $`Z`$ form a Poisson subalgebra with respect to the bracket $`\{,\}_\lambda `$ associated with the Poisson pencil.
The second condition means that the bi-Hamiltonian structure can be projected on the quotient space of the integral leaves of $`Z`$. The first one tells us that such quotient can be identified with a symplectic leaf $`𝒮_c`$ of $`P_0`$, which is therefore a bi-Hamiltonian manifold. Moreover, we can define on $`𝒮_c`$ a Nijenhuis tensor $`N`$ as
$$\omega (X_f,NX_g)=\{f,g\}_1,$$
where $`f`$ and $`g`$ are functions on $``$ invariant along $`Z`$. Thus $`𝒮_c`$ is said to be a Poisson–Nijenhuis (PN) manifold (see and references cited therein).
The vector field $`Z`$ allows us to use the Poisson pencil to construct variables of separation for the GZ systems on $`𝒮_c`$. Hence in this case the Poisson pencil not only provides us with a commuting family of vector fields, but also gives coordinates for which the corresponding equations of motion can be solved by separation of variables. Indeed, one can show that the Nijenhuis tensor $`N`$ has $`n`$ functionally independent eigenvalues $`(\lambda _1,\mathrm{},\lambda _n)`$. Then (see, e.g., ), there exist $`n`$ complementary coordinates $`(\mu _1,\mathrm{},\mu _n)`$ on $`𝒮_c`$ such that $`\omega `$ takes the canonical form $`\omega =\sum _{i=1}^nd\lambda _i\wedge d\mu _i`$ and the adjoint $`N^{}`$ of $`N`$ takes the diagonal form
$$N^{}d\lambda _j=\lambda _jd\lambda _j,N^{}d\mu _j=\lambda _jd\mu _j.$$
Such coordinates are called Darboux–Nijenhuis (DN) coordinates. They are the separating coordinates for the GZ systems. Indeed, let us normalize $`Z`$ in such a way that $`Z(H_n)=1`$. Then the differentials of the restrictions $`\widehat{H}_i`$ of the Hamiltonians $`H_i`$ to $`𝒮_c`$ generate a subspace which is invariant with respect to $`N^{}`$. More precisely, we have that
$$\left[\begin{array}{c}N^{}d\widehat{H}_0\\ \mathrm{}\\ \mathrm{}\\ \mathrm{}\\ N^{}d\widehat{H}_{n1}\end{array}\right]=\left[\begin{array}{ccccc}0& 0& \mathrm{}& 0& c_0\\ 1& 0& \mathrm{}& & c_1\\ 0& 1& & & c_2\\ \mathrm{}& & & & \mathrm{}\\ 0& 0& \mathrm{}& 1& c_{n1}\end{array}\right]\left[\begin{array}{c}d\widehat{H}_0\\ \mathrm{}\\ \mathrm{}\\ \mathrm{}\\ d\widehat{H}_{n1}\end{array}\right],$$
(7.1)
where $`c_i=Z(H_i)`$. This implies also that
$$\text{minimal polynomial of }N=\lambda ^n-\sum _{i=0}^{n-1}c_i\lambda ^i=Z(H(\lambda )).$$
(7.2)
Moreover, one can check that the Frobenius matrix $`𝖥`$ defined by (7.1) satisfies the condition
$$N^{}d𝖥=𝖥d𝖥,$$
(7.3)
where $`d𝖥`$ is the matrix whose entries are the differentials of the entries of $`𝖥`$, and, on the left–hand side, $`N^{}`$ acts separately on each entry. Conditions (7.3) and (7.1) imply that the Hamilton-Jacobi equations for the $`\widehat{H}_i`$ are (collectively) separable in the DN coordinates. In fact, the (transpose of the) Vandermonde matrix constructed with the $`\lambda _j`$ diagonalizes $`𝖥`$, and applied to $`[\widehat{H}_0,\mathrm{},\widehat{H}_{n-1}]^T`$ gives a Stäckel vector, in the sense that its $`j`$–th component depends only on $`(\lambda _j,\mu _j)`$.
As far as the explicit construction of the DN coordinates is concerned, we have seen that the $`\lambda _j`$ are the roots of the minimal polynomial $`Z(H(\lambda ))`$ of $`N`$. On the contrary, the coordinates $`\mu _j`$ must be computed (in general) by a method involving quadratures. However, in the case at hand there is a recipe that is particularly useful in the applications. Let us consider the Hamiltonian vector field $`Y`$ on $`𝒮_c`$, associated with $`\frac{1}{2}\mathrm{Tr}N=\sum _{i=1}^n\lambda _i`$ by the symplectic form $`\omega `$. If the (restriction of the) Casimir $`\widehat{H}(\lambda )`$ satisfies the condition $`Y^r(\widehat{H}(\lambda ))=0`$ for some $`r`$, then the coordinates
$$\mu _j=\frac{Y^{r-2}(\widehat{H}(\lambda _j))}{Y^{r-1}(\widehat{H}(\lambda _j))}$$
form with the $`\lambda _j`$ a set of DN coordinates. Hence in this case the bi-Hamiltonian structure provides us with a method to algebraically construct the separation variables.
## 8 Separability of the Stationary Reductions
In this section we will show that the $`\text{KdV}_{2g+1}`$ system belongs to the class of separable GZ systems discussed above. It is convenient to show that the conditions on the vector fields $`Z`$ and $`Y`$ are fulfilled on the “big” bi-Hamiltonian manifold $`_A`$ and then to reduce everything.
Regarding the transversal vector field $`Z`$, we introduce on $`_A`$ the vector field $`Z^_A`$ defined as
$$\dot{X}_0=A,\dot{X}_i=0\text{ for all }i=1,\mathrm{},g\text{.}$$
(8.1)
It is tangent to the symplectic leaves of the Poisson tensor $`P_0`$ given by (5.3) (since it is easily seen to belong to its image) and it can be projected on the quotient space $`\stackrel{~}{𝒩}=\stackrel{~}{𝒮}/E`$ (since it commutes with the generator (6.2) of the distribution $`E`$). Using again the form (5.3) and (5.4) of the Poisson tensors on $`_A`$, one can check that the functions invariant along $`Z^_A`$ form a Poisson subalgebra with respect to $`P_\lambda `$. This property is trivially conserved after the reduction on $`\stackrel{~}{𝒩}`$. Finally, we have to show that the reduced vector field is transversal to the symplectic leaves of (the reduction of) $`P_0`$. This follows from the fact that, at the points where $`\mathrm{Tr}(X_gA)=1`$,
$$L_{Z^_A}(\frac{1}{2}\mathrm{Tr}X(\lambda )^2)=\mathrm{Tr}\left(X(\lambda )A\right)=\lambda ^g+\mathrm{},$$
so that $`L_{Z^_A}H_g=1`$. This also shows that the reduction of $`Z^_A`$ has the right normalization.
Thus, we have shown that on $`\stackrel{~}{𝒩}`$, which is diffeomorphic to the phase space of $`\text{KdV}_{2g+1}`$, there exists a vector field that satisfies the hypotheses of the previous section. Therefore, the stationary reductions of KdV can be solved by separation of variables in the DN coordinates. We are left with the problem of finding explicitly these coordinates.
To this aim, we introduce the vector field $`Y^_A`$ defined on $`_A`$ as
$$\dot{X}_0=[A,X_g],\dot{X}_i=0\text{ for all }i=1,\mathrm{},g\text{.}$$
(8.2)
It is also tangent to the symplectic leaves of the Poisson tensor $`P_0`$ and can be projected on the quotient space $`\stackrel{~}{𝒩}`$. Moreover,
1. $`Y^_A`$ is (up to a sign) the Hamiltonian vector field associated by means of $`P_0`$ with the Lie derivative along $`Z^_A`$ of the coefficient $`H_{g1}`$ of $`\frac{1}{2}\mathrm{Tr}X(\lambda )^2`$;
2. We have that $`L_{Y^_A}^2(H(\lambda ))=2\left(\mathrm{Tr}(AX_g)\right)^2`$.
The first assertion can be checked after noticing that $`Z^_A(H_{g-1})=\mathrm{Tr}(X_{g-1}A)`$, and that the differential of this function is $`[0,\mathrm{},0,A,0]^T`$. The second assertion simply follows from the fact that $`H(\lambda )=\frac{1}{2}\mathrm{Tr}X(\lambda )^2`$.
The same properties hold also on $`\stackrel{~}{𝒩}`$: If $`Z`$ and $`Y`$ are the reductions of $`Z^_A`$ and $`Y^_A`$, we have that $`Y`$ is the Hamiltonian vector field associated with $`Z(H_{g-1})`$ by the reduction of $`P_0`$. Furthermore, we have that $`L_Y^2(H(\lambda ))=2`$, since $`\mathrm{Tr}(AX_g)=1`$ at the points of $`\stackrel{~}{𝒮}`$. Let us now restrict ourselves to a symplectic leaf of the reduction of $`P_0`$. Since $`Z(H_{g-1})=c_{g-1}=\frac{1}{2}\mathrm{Tr}N`$, the vector field $`Y`$ can be used to construct the $`\mu _j`$ coordinates according to the recipe given at the end of the previous section. The conclusion is:
1. The $`\lambda _j`$ are the roots of the polynomial $`Z(H(\lambda ))=\mathrm{Tr}(AX(\lambda ))=\mathrm{Tr}(A𝖵^{(2g+1)})`$, that is, the entry $`(1,2)`$ of the matrix $`𝖵^{(2g+1)}`$;
2. The $`\mu _j`$ are given by $`\mu _j=f(\lambda _j)`$, where
$$f(\lambda )=\frac{Y(H(\lambda ))}{Y^2(H(\lambda ))}=\frac{1}{2}\mathrm{Tr}\left(X(\lambda )[A,X_g]\right)=\text{entry }(2,2)\text{ of }𝖵^{(2g+1)}.$$
We remark that our general theory of separability gives, in this particular case, the same construction of the variables of separation that holds for systems admitting a Lax formulation with a parameter (see, e.g., ). In fact, writing the Lax matrix (4.9) as
$$𝖵^{(2g+1)}=\left[\begin{array}{cc}V_g(\lambda )& U_g(\lambda )\\ W_g(\lambda )& -V_g(\lambda )\end{array}\right],$$
the equation of the associated spectral curve $`C`$ is
$$\mu ^2=U_g(\lambda )W_g(\lambda )+V_g(\lambda )^2.$$
Since $`U_g(\lambda _j)=0`$ and $`\mu _j=-V_g(\lambda _j)`$, we see that $`g`$ points $`(\lambda _j,\mu _j)`$ lie on $`C`$.
We close this section with a description, from our point of view, of the example of $`\text{KdV}_5`$ we started with in Section 2. We consider the bi-Hamiltonian manifold
$$_A=\{A\lambda ^3+X_2\lambda ^2+X_1\lambda +X_0\mid X_i=\left[\begin{array}{cc}p_i& r_i\\ q_i& -p_i\end{array}\right]\},$$
whose Poisson tensors are given by (5.3) and (5.4). The reduction process described in Section 5 allows us to pass to the transversal submanifold
$$A\lambda ^3+\left[\begin{array}{cc}0& 1\\ q_2& 0\end{array}\right]\lambda ^2+\left[\begin{array}{cc}p_1& -q_2\\ q_1& -p_1\end{array}\right]\lambda +\left[\begin{array}{cc}p_0& q_2^2-q_1\\ q_0& -p_0\end{array}\right],$$
which is diffeomorphic to the phase space $`_5`$ of $`\text{KdV}_5`$. The correspondence is given through the Lax matrix $`𝖵^{(5)}`$ displayed in Section 4. The resulting change of variables is explicitly given by
$$\begin{array}{c}h_1=q_2,\qquad h_2=-p_1,\qquad h_3=q_1,\qquad h_4=-p_0-p_1q_2,\\ h_5=q_1q_2+\frac{1}{2}p_1^2-\frac{1}{2}q_2^3+\frac{1}{2}q_0.\end{array}$$
The Poisson pencil on $`_5`$ turns out to be
$$P_\lambda =\left[\begin{array}{ccccc}0& 1& 0& h_1+\lambda & h_2\\ 1& 0& 2h_1\lambda & h_2& h_3+\frac{1}{2}h_{1}^{}{}_{}{}^{2}2h_1\lambda \\ 0& 2h_1+\lambda & 0& h_3h_{1}^{}{}_{}{}^{2}+2h_1\lambda & h_4h_1h_2\\ h_1\lambda & h_2& & 0& \begin{array}{c}h_5+3h_1h_3\frac{1}{2}h_{2}^{}{}_{}{}^{2}h_{1}^{}{}_{}{}^{3}\\ (2h_3+h_{1}^{}{}_{}{}^{2})\lambda \end{array}\\ & & & & 0\end{array}\right]$$
(8.3)
Its Casimir $`H(\lambda )=H_0+H_1\lambda +H_2\lambda ^2`$ can be computed with the trace of the square of the Lax matrix:
$$\begin{array}{c}H_0=h_2^2h_3-2h_3h_5+h_1^5+2h_1h_3^2-2h_1h_2h_4-3h_1^3h_3+2h_1^2h_5+h_4^2\hfill \\ H_1=2h_2h_4-2h_1h_5+3h_1^2h_3-h_1h_2^2-h_3^2-h_1^4\hfill \\ H_2=2h_1^3-4h_1h_3+2h_5\hfill \end{array}$$
The two vector fields of the $`\text{KdV}_5`$ hierarchy are given by (3.12). The symplectic leaves of $`P_0`$ are the level surfaces of $`H_2`$. The vector field $`Z^_A`$ is $`\partial /\partial q_0`$, while its projection $`Z`$ on $`_5`$ is $`(1/2)\partial /\partial h_5`$. On the symplectic leaf $`𝒮_c`$ defined by $`H_2=c`$ we can use $`(h_1,h_2,h_3,h_4)`$ as global coordinates, and the corresponding Poisson pencil is simply obtained by deleting the last row and the last column in (8.3). The minimal polynomial of the Nijenhuis tensor on $`𝒮_c`$ is
$$Z(H(\lambda ))=\lambda ^2-h_1\lambda +h_1^2-h_3,$$
and $`\lambda _1`$, $`\lambda _2`$ are its roots. To find $`\mu _1`$ and $`\mu _2`$ we have to use
$$Y^_A=-r_2\frac{\partial }{\partial p_0}+2p_2\frac{\partial }{\partial q_0},$$
whose reduction on $`_5`$ is $`Y=\partial /\partial h_4`$. Since $`Y(H(\lambda ))=2h_2\lambda +2h_4-2h_1h_2`$ and $`Y^2(H(\lambda ))=2`$, the coordinates $`\mu _1`$ and $`\mu _2`$ are the values of the polynomial
$$h_2\lambda +h_4-h_1h_2$$
for $`\lambda =\lambda _1,\lambda _2`$. In order to check that the DN coordinates are separation variables for the restrictions $`\widehat{H}_0`$ and $`\widehat{H}_1`$ of the Hamiltonians to $`𝒮_c`$, we simply have to compute
$$\left[\begin{array}{cc}1& \lambda _1\\ 1& \lambda _2\end{array}\right]\left[\begin{array}{c}\widehat{H}_0\\ \widehat{H}_1\end{array}\right]=\left[\begin{array}{c}-c\lambda _1^2+\mu _1^2-\lambda _1^5\\ -c\lambda _2^2+\mu _2^2-\lambda _2^5\end{array}\right],$$
from which the form of the spectral curve can also be seen.
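All the ingredients of this example are easy to test numerically. The following sketch (ours; it assumes numpy, and the sample values of the $`h_i`$ are arbitrary) builds the entries of $`𝖵^{(5)}`$, extracts the separation variables, and checks that each pair $`(\lambda _j,\mu _j)`$ lies on the spectral curve $`\mu ^2=\lambda ^5+c\lambda ^2+\widehat{H}_1\lambda +\widehat{H}_0`$:

```python
import numpy as np

rng = np.random.default_rng(0)
h1, h2, h3, h4, h5 = rng.normal(size=5)   # a random point of the phase space

# entries of the Lax matrix V^(5) of Section 4
P = lambda lam: -h2*lam - h4 + h1*h2                     # (1,1) entry
Q = lambda lam: lam**2 - h1*lam - h3 + h1**2             # (1,2) entry
W = lambda lam: (lam**3 + h1*lam**2 + h3*lam
                 + 2*h5 - 2*h1*h3 - h2**2 + h1**3)       # (2,1) entry

# Casimir coefficients of Section 8
H0 = (h3*h2**2 - 2*h3*h5 + h1**5 + 2*h1*h3**2 - 2*h1*h2*h4
      - 3*h1**3*h3 + 2*h1**2*h5 + h4**2)
H1 = 2*h2*h4 - 2*h1*h5 + 3*h1**2*h3 - h1*h2**2 - h3**2 - h1**4
c  = 2*h1**3 - 4*h1*h3 + 2*h5                            # value of H2

lams = np.roots([1.0, -h1, h1**2 - h3])   # zeros of the (1,2) entry
mus = np.array([-P(l) for l in lams])     # values of the (2,2) entry there

for l, m in zip(lams, mus):
    assert abs(m**2 - (l**5 + c*l**2 + H1*l + H0)) < 1e-9
```

Since $`𝖵^{(5)}`$ is traceless, $`\mu ^2=P^2+QW`$ on the spectral curve, and at the roots of $`Q`$ this reduces to $`\mu _j=\pm P(\lambda _j)`$, which is the content of the check above.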
## 9 Final Remarks
1. The results outlined in Section 7 are proved in for a class of bi-Hamiltonian manifolds whose rank is not maximal. This means that our approach to the stationary reductions of KdV can be directly generalized to the stationary reductions of the Gel’fand–Dickey hierarchies. A step in this direction has already been taken in , whose results should be compared with those of . We will treat this problem in a future publication.
2. The separation variables provided by the bi-Hamiltonian method coincide, in the KdV case, with the ones obtained by algebro-geometric constructions. It would be interesting to compare in more general cases these two methods. A first result has been obtained in , where the “spectral Darboux coordinates” of are shown to be DN coordinates for a suitable pair of compatible Poisson brackets.
3. Another Marsden-Ratiu reduction of the manifold $`_A`$ of Section 5 has been performed in , for an arbitrary simple Lie algebra $`𝔤`$. That reduction leads to a bigger quotient space, and allows one to reduce all the multi–Hamiltonian structure of $`_A`$ and to obtain, in the case $`𝔤=𝔰𝔩(2)`$, the Mumford systems . A further restriction to the level surface of some Casimirs gives the same reduced phase space obtained in Section 6, where only two Poisson brackets survive.
### Acknowledgments
J.P.Z. and M.P. were partially supported by FAPERJ through grant E-26/170.501/99-APV. J.P.Z. is grateful to SISSA for its hospitality. We thank G. Tondo for useful discussions at the early stages of this work. G.F. wishes to thank B. Dubrovin for useful discussions and remarks. M.P. is grateful to IMPA and SISSA for their hospitality.
# Enhanced Pinning of Vortices in Thin Film Superconductors by Magnetic Dot Arrays
## Abstract
We study the pinning of vortices in thin film superconductors by magnetic dots in the London approximation. A single dot is in general able to pin multiple field-induced vortices, up to a saturation number $`n_s`$, which can be much larger than one. However, the magnetic field of the dot also creates intrinsic vortices and anti-vortices, which must be accounted for. In a ferromagnetic dot array, the intrinsic anti-vortices are pinned only interstitially. Much stronger pinning effect is expected of an antiferromagnetic dot array. Possible realizations of various magnetic configurations are discussed.
Artificially patterned sub-micron magnetic structures hold great promise not only for magnetic device and storage technology, but also as a tool of fundamental research when used as a means of controlling other physical systems, such as the two-dimensional electron gas and vortices in superconductors . It is an experimental fact that when a type-II superconducting film is deposited over a regular array of magnetic dots, the resistivity of the film exhibits a series of sharp minima as a function of the applied magnetic field, with the positions of the minima being integer multiples of the geometrical “matching field” of the magnetic lattice . In contrast, no periodic pinning was found in a similar system with a regular array of non-magnetic defects . This suggests the importance of the magnetization of the dots in producing low resistivity, although the exact flux pinning mechanism has not been fully understood. A recent study by Lyuksyutov and Pokrovsky explored various statistical mechanics issues that might arise from the complexity of the magnet-superconductor interaction. In this work, we focus on the pinning mechanism itself, the understanding of which will lead us to desirable magnetic dot structures that enhance vortex pinning for a range of applied magnetic fields.
We study the low temperature properties of a superconducting thin film such as Nb deposited on top of a regular array of magnetic dots, separated by a thin insulating layer to suppress the proximity effect (Fig. 1). The superconductor has a magnetic penetration length $`\lambda `$ much larger than the coherence length $`\xi `$, and the film is taken to be a homogeneous thin plate of thickness $`d\lambda `$. For the magnetic dots, we assume the magnetization on each dot to be quenched in, and pointing normal to the layer. In what follows, we will investigate how the interaction between vortices in the superconductor is affected by the magnetic dots, and explore which quenched configurations of magnetization are favorable for vortex pinning at low temperatures. At the end, we will discuss how the desired configuration(s) of magnetization might be achieved in practice.
We first describe the case of a single dot with magnetization $`𝐌(𝐫)`$ to be specified shortly. Modeling the thin-film superconductor as an ideal sheet current $`𝐊_s(\stackrel{}{\rho })`$ at the $`z=0`$ plane (with $`\stackrel{}{\rho }\equiv [x,y]`$ denoting the position vector), Maxwell's equation of magnetostatics becomes
$$\nabla ^2𝐀(𝐫)=-\frac{4\pi }{c}𝐊_s(\stackrel{}{\rho })\delta (z)-4\pi \nabla \times 𝐌(𝐫),$$
(1)
in the gauge $`\nabla \cdot 𝐀=0`$. We will describe the superconductor in the London approximation , which has $`𝐊_s(\stackrel{}{\rho })=(c/4\pi \mathrm{\Lambda })[\stackrel{}{\mathrm{\Phi }}(\stackrel{}{\rho })-𝐀(\stackrel{}{\rho })]`$, where $`\mathrm{\Lambda }\equiv \lambda ^2/d`$ is the relevant magnetic length scale for the superconducting film, and $`\stackrel{}{\mathrm{\Phi }}(\stackrel{}{\rho })`$ is the London vector which, for a collection of vortices located at points $`\stackrel{}{\rho }_j`$ with respective quantization (charges) $`ϵ_j`$, is
$$\stackrel{}{\mathrm{\Phi }}(\stackrel{}{\rho })=\frac{\varphi _0}{2\pi }\sum _jϵ_j\frac{\widehat{z}\times (\stackrel{}{\rho }-\stackrel{}{\rho }_j)}{(\stackrel{}{\rho }-\stackrel{}{\rho }_j)^2}.$$
(2)
The above form of $`𝐊_s`$ holds everywhere except at the vortex cores, which are normal regions of radius $`\xi `$ around each singularity, and for as long as $`\widehat{z}\cdot 𝐀(\stackrel{}{\rho })=0`$, which will be satisfied in our case. Using the superposition principle, we write $`𝐀(𝐫)=𝐀_s(𝐫)+𝐀_m(𝐫)`$, where $`𝐀_s`$ is the magnetic vector potential due to the supercurrent, and $`𝐀_m`$ is the vector potential due to the magnetic dot. In what follows, we will model a magnetic dot by a perfect dipole of magnetic moment $`𝐦\equiv m\widehat{z}`$, placed at a distance $`\mathrm{}`$ below the superconducting plane and the origin. Thus, $`𝐀_m(𝐫)=m(\widehat{z}\times 𝐫)/|𝐫+\mathrm{}\widehat{z}|^3`$. The dipole model is exact, of course, only for homogeneous spherical magnetic dots. Nevertheless, we will use this dipolar approximation throughout, in the hope that the results will give the correct order of magnitude for a variety of magnets whose magnetizations are normal to the plane.
The free energy of the entire system may be written as
$$F=\frac{1}{2c}\int d^2\stackrel{}{\rho }\left[\frac{4\pi \mathrm{\Lambda }}{c}|𝐊_s(\stackrel{}{\rho })|^2+𝐀_s(\stackrel{}{\rho })\cdot 𝐊_s(\stackrel{}{\rho })\right]-𝐦\cdot \nabla \times 𝐀_s\big|_{𝐫=-\ell \widehat{z}}.$$ (4)
The term in brackets is the sum of the kinetic and magnetic field energies of the supercurrent; the last term is the potential energy of the magnetic dipole in the magnetic field of the supercurrent. Using Eq. (1), we can integrate out the vector potential and express the free energy completely in terms of the vortices in the superconductor. We have $`F=F_{vv}+F_{vm}`$, where
$$F_{vv}=\frac{\varphi _0^2}{16\pi ^2\mathrm{\Lambda }}\sum _{i,j}ϵ_iϵ_jU(|\stackrel{}{\rho }_j-\stackrel{}{\rho }_i|),$$
(5)
is the vortex-vortex interaction energy (including the vortex self-energy), with $`U(\rho )`$ being the Pearl potential whose asymptotic behavior is
$$U(\rho )\simeq \begin{cases}\mathrm{ln}(\mathrm{\Lambda }/\xi ),&\rho \lesssim \xi ,\\ (1/2)\mathrm{ln}(\mathrm{\Lambda }/\rho ),&\xi \ll \rho \ll \mathrm{\Lambda },\\ \mathrm{\Lambda }/(2\rho ),&\rho \gg \mathrm{\Lambda },\end{cases}$$
(6)
and
$$F_{vm}=-\frac{\varphi _0m}{\pi \mathrm{\Lambda }^2}\sum _jϵ_jV(|\stackrel{}{\rho }_j|),$$
(7)
is the vortex-magnet interaction, with
$$V(\rho )\equiv \int _0^{\mathrm{\infty }}d\kappa \frac{\kappa e^{-\kappa \ell /\mathrm{\Lambda }}}{2\kappa +1}J_0(\kappa \rho /\mathrm{\Lambda })\simeq \begin{cases}\mathrm{\Lambda }/2\ell ,&\rho \ll \ell \ll \mathrm{\Lambda },\\ \mathrm{\Lambda }/2\rho ,&\ell \ll \rho \ll \mathrm{\Lambda },\\ 2(\mathrm{\Lambda }/\rho )^3,&\rho \gg \mathrm{\Lambda }.\end{cases}$$ (8)
We will assume that $`\rho \ll \mathrm{\Lambda }`$ and $`\ell \sim \xi `$, which correspond to the more interesting and experimentally realistic cases. A single magnetic dipole with $`m>0`$ is in general able to bind more than one vortex. The attractive force on a vortex due to the magnet is the strongest at a distance $`\ell `$ from the magnet. Since the vortex-vortex repulsion decays only logarithmically with distance, we conclude that all bound vortices are concentrated in an area of radius $`\ell `$ around the dot. When their number is large (and it can be), they can be considered a single multiply quantized vortex, since $`\ell \sim \xi `$ by assumption.
The maximum charge $`n_s`$ of the bound vortex can be found as follows: Suppose the net vorticity of the sample (as enforced by the application of an external magnetic field) is $`N>n_s`$, and out of these $`N`$ vortices, there is a vortex of charge $`n`$ at the origin above the dot, with the remaining $`N-n`$ vortices singly quantized and far removed from each other and from the origin. The total free energy of such a system is simply
$$F(n)=\frac{\varphi _0^2}{16\pi ^2\mathrm{\Lambda }}(n^2+N-n)\mathrm{ln}\frac{\mathrm{\Lambda }}{\xi }-\frac{\varphi _0m}{\pi \mathrm{\Lambda }^2}nV(0).$$
(13)
$`n_s`$ can then be identified as the value of $`n`$ which minimizes $`F(n)`$, with the result
$$n_s=\text{int}\left[\frac{1}{2}+\frac{4\pi m}{\varphi _0\ell \mathrm{ln}(\mathrm{\Lambda }/\xi )}\right],$$
(14)
where $`\text{int}[x]`$ denotes the nearest integer to $`x`$. The numerical value of $`n_s`$, which we shall hereafter call the saturation number, may be much larger than 1. As an example, let us consider a “typical” magnetic dot made of a Co/Pt multilayer, with magnetization $`\approx 500`$ emu cm<sup>-3</sup> , dot size $`(0.25\mu \mathrm{m})^2`$ and height $`h\approx 40`$ nm. Taking $`\ell `$ to be $`h/2`$, we get $`m/\ell \approx 3\varphi _0`$. The factor $`\mathrm{ln}(\mathrm{\Lambda }/\xi )`$ is given by the superconducting film itself; for Nb thin films somewhat below $`T_c`$, with $`\xi \approx 20`$ nm, $`\lambda /\xi \approx 15`$ and thickness $`d\sim \xi `$, one has $`\mathrm{ln}(\mathrm{\Lambda }/\xi )\approx 5`$. In this case $`n_s\approx 8`$. This example illustrates that magnetic pinning may be many times more effective than pinning by material defects (e.g., holes) at the same density, since the holes can directly bind only one vortex per site. The ability of a single magnet to bind so many vortices is perhaps the most desirable property of the magnetic dot system, at least for the purpose of vortex pinning; this property has, however, not been recognized previously .
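As a quick check of this estimate, the following sketch evaluates Eq. (14) for the Co/Pt example (Python, Gaussian units; all inputs are the illustrative values quoted above):

```python
import math

# Check of the saturation number n_s, Eq. (14), for the Co/Pt example
# (Gaussian units; all inputs are the illustrative values from the text).
phi0 = 2.07e-7               # flux quantum [G cm^2]
M    = 500.0                 # dot magnetization [emu cm^-3]
area = (0.25e-4)**2          # dot area, (0.25 um)^2 [cm^2]
h    = 40e-7                 # dot height, 40 nm [cm]
ell  = h / 2                 # effective dipole depth, l = h/2 [cm]
lnLx = 5.0                   # ln(Lambda/xi) for the Nb film

m = M * area * h             # dipole moment of the dot [emu]
print(f"m/(phi0*l) = {m / (phi0 * ell):.2f}")          # ~3, as quoted
n_s = round(0.5 + 4 * math.pi * m / (phi0 * ell * lnLx))
print(f"n_s = {n_s}")                                   # ~8
```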
Extrapolating the above-described properties of a single dot to systems with many dots, one might naively conclude that an array of dots in the ferromagnetic configuration would provide the strongest pinning. Indeed, if the applied magnetic field is $`n_s`$ times the matching field $`B_\varphi `$, then each dot will bind $`n_s`$ field-induced vortices, a task not achievable by an array of non-magnetic pins if $`n_s`$ is large. However, the ferromagnetic dot array suffers from a different problem: It has difficulty pinning at small applied fields, i.e., for $`B\ll n_sB_\varphi `$, because of the appearance of intrinsic vortices and anti-vortices created by the magnetic field of the magnets themselves.
To understand the effect associated with intrinsic vortices, let us again consider the single-dot system, but now in the absence of any external magnetic field. For weak magnets, no vortex is created in the superconducting film. Vortices eventually appear in the film for sufficiently strong magnets. This first happens when there is a doubly quantized vortex bound directly above the magnet, with two single anti-vortices straddling it, each at a distance $`\rho _0`$ away . The anti-vortices must be present in a large (effectively infinite) film, since the net magnetic flux through the $`x`$-$`y`$ plane due to the magnet is zero. (This important fact was left out of the model in Ref. .) Annihilation of the anti-vortices with the nucleus is prevented by the short-range repulsion between the anti-vortices and the magnet. To find the onset of intrinsic vortices, we compute the free energy $`F(\rho )`$ obtained by applying the forms of the vortex-vortex and vortex-magnet interactions \[Eqs. (5) and (7)\] to the vortex-antivortex configuration described above. We take $`\rho _0`$ to be the value of $`\rho `$ which minimizes $`F(\rho )`$, with the result $`\rho _0=16\pi m/(3\varphi _0)`$. Intrinsic vortices appear when $`F(\rho _0)<0`$, since the free energy of the system without vortices is set at $`F=0`$ by definition. In the experimentally relevant limit $`\rho _0\gg \xi `$, this occurs when the magnetization reaches a critical value $`m_c=\frac{3}{8\pi }\varphi _0\ell \mathrm{ln}(\mathrm{\Lambda }/\xi ).`$ Using this expression for $`m_c`$, we can rewrite Eq. (14) as $`n_s=\mathrm{int}[\frac{1}{2}+\frac{3m}{2m_c}]`$. Note that $`n_s=2`$ when $`m=m_c`$. Thus, intrinsic vortices and anti-vortices will appear once the magnet becomes a better pinning site than a simple void, i.e., for $`n_s>1`$.
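For the Co/Pt example this threshold is comfortably exceeded; a minimal numerical sketch (Gaussian units, with the dot moment taken from the estimate above):

```python
import math

# Onset magnetization m_c and anti-vortex orbit radius rho_0 for the
# Co/Pt example (Gaussian units; inputs as in the text).
phi0 = 2.07e-7                      # flux quantum [G cm^2]
ell  = 20e-7                        # dipole depth, 20 nm [cm]
lnLx = 5.0                          # ln(Lambda/xi)
m    = 3.0 * phi0 * ell             # dot moment, m/l ~ 3*phi0 from above

m_c  = 3.0 / (8 * math.pi) * phi0 * ell * lnLx
rho0 = 16 * math.pi * m / (3 * phi0)
print(f"m/m_c = {m / m_c:.1f}")                  # ~5: well above onset
print(f"rho0  = {rho0 * 1e4:.2f} um")            # ~1 um
print(f"n_s   = {round(0.5 + 1.5 * m / m_c)}")   # int[1/2 + 3m/2m_c] ~ 8
```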
Next, let us consider the behavior of a ferromagnetic (FM) square array of dots with $`m>m_c`$ and lattice constant $`a\lesssim \rho _0`$, again in the absence of any external magnetic field. In the Co/Pt example mentioned above, $`a\sim \rho _0\sim 1\mu \mathrm{m}`$. In this case, the intrinsic anti-vortices are very loosely associated with the magnetic dots and form a classical plasma, which interacts with the FM array only interstitially. For $`n_s\gg 1`$, the interstitial pinning is of high order and therefore very weak. Consequently, the anti-vortices can be set into motion by a small applied current or thermal excitation, leading to dissipation even in the absence of any applied field! This is certainly not a desirable feature for the purpose of vortex pinning.
The pinning ability of the FM array increases for increasing external fields, whose effect is mainly to annihilate a fraction of the intrinsic anti-vortices at interstitial sites. The annihilation is complete when the applied field reaches the order of $`n_sB_\varphi `$, as already described above; the maximum critical current is therefore obtained at this field. Through this consideration, we see that the main effect produced by the ferromagnetic array is to shift the zero of the magnetic field to some large value, i.e., $`n_sB_\varphi `$. This can be utilized in special applications which require a high critical current at a given field, but is not good in general, where a high critical current is demanded over a range of external fields.
The pinning properties of the sample in low applied fields can be significantly improved if the magnets form a quenched square antiferromagnetic (AFM) array, where the intrinsic anti-vortices produced by those magnets in the $`+\widehat{z}`$ (or “up”) direction tend to be attracted to the magnets in the $`\widehat{z}`$ (or “down”) direction. For strong magnets whose binding distance $`\rho _0`$ (between the magnet and its satellite anti-vortices) is of the order $`a`$, the anti-vortices will be pinned strongly to magnets pointed in the opposite direction, leading to a much larger critical current needed to dislodge the intrinsic vortices and anti-vortices at low fields.
In the absence of any external field, the free energy per unit cell of an AFM square array is obtained from Eqs. (5) and (7), assuming that $`n`$ intrinsic vortices are localized on each upward magnet and $`n`$ intrinsic anti-vortices on each downward magnet. To sum the resulting alternating series of potential energies, a good approximation is to keep the two largest terms of the series; the resulting free energy is minimized at $`n=n_s`$.
In the presence of an external magnetic field, the field-induced vortices are subject to a periodic potential landscape created by the magnets and the bound intrinsic vortices and anti-vortices. To characterize this potential, we place a “test” vortex of charge $`+1`$ at some position $`\stackrel{}{\rho }`$, and compute the free energy $`W(\stackrel{}{\rho })`$ experienced by this test vortex, due to interaction with all the intrinsic vortices and anti-vortices, $`n_{i,j}=(-1)^{i+j}n_s`$, and with all the magnets, $`m_{i,j}=(-1)^{i+j}m`$, on the respective lattice sites $`\stackrel{}{R}_{i,j}=a(i\widehat{x}+j\widehat{y})`$. The resulting expression
$$W(\stackrel{}{\rho })=\frac{\varphi _0^2}{8\pi ^2\mathrm{\Lambda }}\sum _{i,j}n_{i,j}U(|\stackrel{}{\rho }-\stackrel{}{R}_{i,j}|)-\frac{\varphi _0}{\pi \mathrm{\Lambda }^2}\sum _{i,j}m_{i,j}V(|\stackrel{}{\rho }-\stackrel{}{R}_{i,j}|),$$ (16)
is plotted in Fig. 2 for the example in the text with $`n_s=8`$ and $`a=\rho _0/2`$.
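A rough numerical rendering of this landscape can be obtained by truncating the lattice sums and interpolating the asymptotic forms of $`U`$ and $`V`$; the sketch below is qualitative only, with ad hoc cut-offs and illustrative parameter values:

```python
import numpy as np

# Qualitative sketch of W(rho), Eq. (16), along the x axis of an AFM
# array. Energies are in units of phi0^2/(16 pi^2 Lambda); U is the
# intermediate Pearl asymptotic with an ad hoc core cut-off, and the
# magnet term uses V ~ Lambda/(2 sqrt(r^2+l^2)) with the moment m
# expressed through n_s via Eq. (14). All numbers are illustrative.
Lam, ell, a = 5.0, 0.02, 0.5      # Pearl length, dipole depth, lattice const [um]
n_s, lnLx = 8, 5.0                # saturation number, ln(Lambda/xi)

x = np.linspace(-1.0, 1.0, 401)
W = np.zeros_like(x)
for i in range(-8, 9):
    for j in range(-8, 9):
        sgn = (-1) ** (i + j)
        r = np.hypot(x - i * a, j * a)
        U = 0.5 * np.log(Lam / np.maximum(r, ell))       # vortex-vortex
        mag = 2 * (n_s - 0.5) * lnLx * ell / np.sqrt(r**2 + ell**2)
        W += sgn * (2 * n_s * U - mag)
print(W.min(), W.max())  # minima on the "up" dots, maxima on the "down" dots
```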
The form of $`W(\stackrel{}{\rho })`$ clearly indicates that the test vortex will be attracted towards the nearest “up” magnet. Note that even though a magnet is already saturated with intrinsic vortices, it can still bind more field-induced vortices because there is no core energy cost associated with the latter. The maximum number of field-induced vortices $`n_c`$ that an upward magnet can bind is estimated as follows: Every field-induced vortex (per primitive cell) that is attracted to the upward magnet changes the potential landscape for the test vortex. These bound vortices surround the nucleus, effectively increasing its charge. A test vortex will no longer be attracted to the magnet when the repulsion from the effective charge of the nucleus becomes equal to the maximum attractive force due to the magnet. The latter force is strongest at a finite distance $`D`$; in the point dipole approximation, Eq. (7), $`D\sim \ell `$. More realistically, $`D`$ will be of the order of the radius of the magnetic dot. The balance of forces gives
$$n_c\simeq n_s\left(\frac{2\ell }{D}\mathrm{ln}\frac{\mathrm{\Lambda }}{\xi }-1\right).$$
(17)
In our example system, where $`D\approx 0.1`$ $`\mu `$m, $`n_c\sim n_s`$.
The analysis described in this study suggests very different behaviors for a superconducting thin film on a (quenched) FM or AFM dot array. For the FM array, we expect the system to have a low critical current $`J_c`$ at low fields, with successively increasing $`J_c`$ when the applied field is an integer multiple of the matching field, and with the maximal $`J_c`$ obtained at $`B=n_sB_\varphi `$. For the AFM array, we expect the pinning to be strongest at low fields, with successively lower $`J_c`$ as the applied field reaches higher and higher orders of the matching field, and with the main effect diminishing beyond a field value of $`n_cB_\varphi /2`$. Thus, for different applications, one might want to use one or the other type of array, or some combination.
It remains to address how the quenched magnetic dot configurations can be achieved and maintained in an applied field. The FM array is most easily prepared by aligning the magnetic moments in a strong magnetic field in the $`z`$ direction prior to measurement. It is less straightforward to ensure the AFM arrangement. Fortunately, we are aided in this case by the fact that the AFM state is the ground state of a square lattice of magnetic dipoles as long as the out-of-plane direction (i.e., the $`\widehat{z}`$ axis) is the “easy” magnetization axis. The latter can be arranged by the construction of the individual magnetic dot, e.g., by using the multi-layer Co/Pt structure. The appearance of intrinsic vortices further stabilizes the AFM structure. Thus, the AFM array will in fact form spontaneously upon cooling from high temperature in zero applied field, based on purely energetic considerations. There are, of course, kinetic constraints, such as coercive effects and the mobility of the domain wall separating the two degenerate states of the antiferromagnet, that may prevent a perfect antiferromagnet from forming in a reasonable time. These kinetic constraints are, on the other hand, necessary to prevent the magnets from undergoing a spin-flop-type transition to the FM phase upon increasing the external magnetic field. One can also envision building an array of microscopic superconducting current rings below the film and electronically controlling the sense of the current in each ring. In this way, arbitrary quenched magnet configurations can be specified.
A practical way to achieve effects similar to that of the AFM array is to bury under the superconductor an array of thin magnetic bars (Fig. 3). The shape anisotropy of the bar will force the magnetization to be along the length of the magnet. The field outside the magnet is the same as that of two oppositely charged magnetic monopoles located at its endpoints, creating an effect similar to that of an antiferromagnetic pair of dipoles whose moments are normal to the sample . A crude approximation to this geometry is the use of thin magnetic disks whose diameter is a significant fraction of the lattice constant. The anisotropic shape of the disk induces a magnetization parallel to the superconducting plane, similar to that of the magnetic bar. This was in fact the geometry used by Martín et al. .
We gratefully acknowledge useful conversations with A. Hoffmann and I. K. Schuller. This research is supported by the NSF through grant no. DMR9801921, and by the UC-CLC program.
# A Magnetic Resonance Force Microscopy Quantum Computer with Tellurium Donors in Silicon
## I Introduction
Recently, Kane proposed a silicon-based nuclear spin quantum computer. This proposal linked the theoretical field of quantum computation with the well-developed technology of the silicon industry. It was there proposed to use the nuclear spins, $`I=1/2`$, of impurity phosphorus atoms (<sup>31</sup>P) in silicon (<sup>28</sup>Si) as qubits for quantum computation. Selective one-qubit rotation of nuclear spins can be implemented by combining the action of electrostatic gates and resonant radio frequency (rf) pulses. The electrostatic gate increases the size of the electron cloud of the selected phosphorus atom, changing the hyperfine interaction between the electron spin ($`S=1/2`$) and the nuclear spin of the phosphorus atom. A two-qubit quantum CONTROL-NOT (CN) gate can be implemented by combining resonant rf pulses with two electrostatic gates acting on neighboring phosphorus atoms. Under the action of the electrostatic gates, the electron clouds of the neighboring atoms increase in size and overlap. This causes an exchange interaction between the electron spins which, in turn, generates an indirect coupling between the associated nuclear spins.
To measure the state of the nuclear spin, it was proposed to first transfer the state of the nuclear spin to the electron spin. Then, using an electrostatic gate, one induces a transfer of the electron from the measured phosphorus atom to an auxiliary phosphorus atom. Because of the Pauli principle, this transfer is possible only if the electron spins of the measured atom and the auxiliary atom have opposite directions. The change of the charge of the auxiliary atom can be measured by a single-electron transistor. This attractive proposal may, however, face difficulties associated with: (a) the precise manipulation of the electron clouds using electrostatic gates, (b) the complicated scheme for transferring the nuclear spin state to the electron spin state, and (c) the application of the single-electron transistor. Vrijen et al. proposed a way to overcome these problems, but their proposal implements qubits in the electron spins of the phosphorus atoms. It is clear that, unlike nuclear spins, electron spins cannot be isolated from their surroundings, so the price of simplifying the original proposal seems to be very high.
In our previous paper we proposed the MRFM quantum computer. This proposal relies on rapidly developing MRFM methods which promise single-spin detection by combining magnetic resonance techniques, atomic force microscopy and novel optical methods for the detection of mechanical vibrations . It would be very attractive to apply the idea of the MRFM quantum computer to paramagnetic impurities in silicon. However, the phosphorus atom does not fit our proposal because of the large size of its electron cloud and its relatively weak hyperfine interaction. In this paper, we propose a MRFM quantum computer based on silicon with tellurium impurities. Unlike phosphorus, a tellurium atom in silicon is a “deep donor” with a small electron cloud and an extremely large hyperfine interaction. The application of tellurium impurities in silicon could combine the advantages of MRFM with the well-developed techniques of silicon technology. In section II, we discuss the design of this quantum computer. In section III, we describe quantum computation using this nuclear spin quantum computer. We discuss a one-qubit rotation, a two-qubit quantum CN gate, the measurement of the state of a nuclear spin, and the initialization of the nuclear spins in their ground states.
## II MRFM Si:Te Nuclear Spin Quantum Computer
A principal diagram of the proposed quantum computer is shown in Fig. 1. Tellurium-125 donors are placed near the surface of the silicon-28. We assume that the silicon contains only <sup>28</sup>Si non-magnetic nuclei; <sup>29</sup>Si magnetic nuclei, whose natural abundance is 4.7%, must be eliminated. The tellurium contains <sup>125</sup>Te nuclei, whose natural abundance is only 7%. If a host atom in silicon is replaced by a tellurium donor, two extra electrons are available. The properties of tellurium donors in silicon have been investigated elsewhere. It was found that most of the implanted tellurium atoms occupy substitutional sites. The ground states of tellurium donors, as well as those of other atoms with two extra electrons, are referred to as “deep impurity levels”, in contrast to “shallow” impurities like phosphorus with one extra electron, whose ground state energies are of the order of 50 meV. Because of the two extra electrons, tellurium donors form singly-ionized A-centers, Te<sup>+</sup>, and neutral B-centers, Te<sup>0</sup>. The temperature-independent ground state energies were found to be 410.8 meV for A-centers, and 198.8 meV for B-centers.
Electron spin resonance (ESR) for A-centers can be described by the simple spin Hamiltonian,
$$\mathcal{H}=g_e\mu _B\stackrel{}{B}\cdot \stackrel{}{S}+g_n\mu _n\stackrel{}{B}\cdot \stackrel{}{I}-A\stackrel{}{S}\cdot \stackrel{}{I},$$
$`(1)`$
where $`\stackrel{}{S}`$ is the electron spin ($`S=1/2`$) of <sup>125</sup>Te<sup>+</sup>; $`\stackrel{}{I}`$ is the nuclear spin ($`I=1/2`$) of the <sup>125</sup>Te nucleus; $`\mu _B`$ and $`\mu _n`$ are the Bohr and nuclear magnetons; $`g_e`$ and $`g_n`$ are the electron and nuclear $`g`$-factors: $`g_e\approx 2`$, $`g_n\approx 0.882`$; $`A`$ is the constant of the isotropic hyperfine (hf) interaction, $`A/2\pi \hbar \approx 3.5`$ GHz. The first two terms in the Hamiltonian (1) have the same signs because the nuclear magnetic moment of <sup>125</sup>Te is negative, as is the electron magnetic moment. For the same reason, we put in (1) a negative sign for the hyperfine interaction.
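For readers who want to check the level scheme, the 4×4 Hamiltonian (1) is easily diagonalized numerically. The sketch below works in frequency units, with the electron and nuclear Zeeman frequencies at $`B_0=10`$ T taken from the values quoted in this paper (the numbers, and the choice of units, are illustrative):

```python
import numpy as np

# Eigenfrequencies of the spin Hamiltonian (1) in GHz.
fe, fn, Af = 280.0, 0.1345, 3.5   # electron Zeeman, nuclear Zeeman, A/(2 pi hbar)

sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)
e2 = np.eye(2)

H = (fe * np.kron(sz, e2) + fn * np.kron(e2, sz)
     - Af * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)))
E = np.sort(np.linalg.eigvalsh(H))
print(E)
# The two allowed electron-spin-flip (ESR) lines sit near fe -/+ Af/2,
# i.e. the hyperfine shift of the ESR frequency is ~1.75 GHz.
```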
We propose to use as qubits the nuclear spins of the <sup>125</sup>Te donors (A-centers). We assume that future advances in silicon technology will allow one to place a regular chain of <sup>125</sup>Te donors near the surface of silicon with the distance between donors being approximately 5 nm. (See Fig. 1.) To initialize the ground states of the nuclear spins and to measure their final states, we propose using MRFM. For this purpose, the ferromagnetic particle, $`P`$, in Fig. 1, attached to the end of the cantilever, can move along the impurity chain selecting an appropriate tellurium ion. To implement quantum computation, we propose using the same (but non-vibrating) ferromagnetic particle which can move along the impurity chain. Next, we shall describe the operation of the proposed quantum computer.
## III Quantum Computer Operation
Following our proposal , we assume that the electron spins are polarized in the positive $`z`$-direction. (As an example, for $`B_0=10`$ T and at a temperature of 1 K, the probability for an electron to change its direction is approximately $`1.4\times 10^{-6}`$.) On the other hand, approximately 44% of the nuclear spins are in their excited states. To detect these nuclear spins one moves the ferromagnetic particle placed on the cantilever to a selected tellurium atom. Assuming that the distance between the ferromagnetic particle and the selected ion is 10 nm, the radius of the ferromagnetic particle is 5 nm, and the magnetic induction of the ferromagnetic particle is $`\mu _0M\approx 2.2`$ T, one finds that the shift of the ESR frequency for the selected ion is $`\mathrm{\Delta }f_e\approx 1.5`$ GHz. (The corresponding magnetic field produced by the ferromagnetic particle is approximately $`5.4\times 10^{-2}`$ T .) The “natural” ESR frequency for $`B_0=10`$ T is $`f_e\approx 280`$ GHz. The hyperfine shift of the ESR frequency is $`f_{hf}=A/4\pi \hbar \approx 1.75`$ GHz . For an ion, the magnetic dipole field produced by its two neighbor electron spins was estimated as $`1.5\times 10^{-5}`$ T. The magnetic dipole field produced by all other electron spins does not exceed $`3\times 10^{-6}`$ T . The corresponding shifts of the ESR frequency are $`f_{ed}\approx 0.42`$ MHz and $`f_{ed}^{}<0.08`$ MHz. We assume that the amplitude of the rf pulse, $`B_1`$, in frequency units (the Rabi frequency) is greater than $`f_{ed}`$, so the dipole contribution to the ESR frequency can be ignored. Thus, applying rf pulses with the frequency
$$f\approx f_e+f_{hf}+\mathrm{\Delta }f_e,$$
$`(2)`$
one induces oscillations of the electron spin of the selected tellurium ion only if the nuclear spin of the ion is in its ground state. The oscillating electron spin, in turn, induces resonant vibrations of the cantilever which can be detected by MRFM methods. A discussion of modified MRFM techniques for the detection of a single electron spin, and related estimates for the MRFM quantum computer, can be found in our previous papers .
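The frequency shifts quoted above follow from simple dipole-field estimates. A minimal sketch (SI units; the geometry is the one stated in the text, and the gyromagnetic ratios 28 GHz/T and 13.45 MHz/T are the standard electron value and the <sup>125</sup>Te value implied by the quoted 134.5 MHz at 10 T):

```python
import math

mu0 = 4e-7 * math.pi
# Field of the ferromagnetic particle (uniformly magnetized sphere) at
# the selected ion: 5 nm radius, 10 nm gap -> 15 nm center-to-ion.
M  = 2.2 / mu0                               # from mu0*M = 2.2 T  [A/m]
Bp = (2 * mu0 * M / 3) * (5e-9 / 15e-9) ** 3
print(f"B_particle = {Bp:.3f} T")            # ~5.4e-2 T
print(f"dfe = {28.0 * Bp:.2f} GHz")          # ESR shift, ~1.5 GHz
print(f"dfn = {13.45 * Bp:.2f} MHz")         # NMR shift, ~0.73 MHz

# Dipole field of one neighbor electron spin at the 5 nm donor spacing.
muB = 9.274e-24
Bd  = mu0 * 2 * muB / (4 * math.pi * (5e-9) ** 3)
print(f"B_dipole = {Bd:.2e} T")              # ~1.5e-5 T
print(f"fed = {28e3 * Bd:.2f} MHz")          # ~0.42 MHz
print(f"fnd = {13.45e6 * Bd:.0f} Hz")        # ~200 Hz
```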
Thus, tellurium atoms detected by MRFM have their nuclear spins in the ground state. To drive other nuclear spins to their ground states, one moves the non-vibrating ferromagnetic particle to a selected tellurium atom whose nuclear spin is in the excited state. The “natural” NMR frequency for the <sup>125</sup>Te nuclear spin in an external magnetic field of 10 T is 134.5 MHz. The hyperfine “shift”, $`f_{hf}\approx 1.75`$ GHz, is larger than the “natural” frequency. The additional shift caused by the magnetic field of the ferromagnetic particle is $`\mathrm{\Delta }f_n\approx 0.73`$ MHz. Applying an rf $`\pi `$-pulse with frequency
$$f=f_n+f_{hf}+\mathrm{\Delta }f_n-f_{nd}-f_{nd}^{},$$
$`(3)`$
one drives the nuclear spin into its ground state.
In Eq. (3), the frequency $`f_{nd}`$ is the NMR shift caused by the dipole field of the electron spins of the neighbor ions, and $`f_{nd}^{}`$ is caused by the electron spins of all other ions. For <sup>125</sup>Te, the frequency $`f_{nd}\approx 200`$ Hz, and $`f_{nd}^{}<40`$ Hz. Applying an rf pulse with a nuclear Rabi frequency larger than $`f_{nd}`$, one can neglect the dipole contribution. The same method can be used to implement a one-qubit rotation. To implement a two-qubit gate, we propose using the magnetic dipole interaction between the electron spins of tellurium ions. For this purpose, one moves the non-vibrating ferromagnetic particle to a tellurium ion containing a control nuclear spin (a control qubit). Then, one applies an rf pulse with frequency $`f=f_e+f_{hf}+\mathrm{\Delta }f_e`$. This pulse drives the electron spin into its excited state if the control nuclear spin is in the ground state. Next, one moves the non-vibrating ferromagnetic particle to the tellurium ion containing the target nuclear spin (a target qubit). Now, it is important to use a selective rf pulse whose frequency is
$$f=f_n+f_{hf}+\mathrm{\Delta }f_n-f_{nd}^{},$$
$`(4)`$
and whose Rabi frequency is less than 200 Hz. This pulse changes the state of the target nuclear spin only if the dipole contribution from the neighbor electron spins cancels out, $`f_{nd}=0`$, which happens only if the control nuclear spin was in the ground state. Finally, one moves the non-vibrating ferromagnetic particle back to the ion containing the control nuclear spin and applies a $`\pi `$-pulse with the frequency (2) to return the electron spin to the ground state (if it was in the excited state). Thus, three rf pulses together implement an “inverse” quantum CN gate: the target qubit changes its state if the control qubit is in the ground state. To implement the “standard” quantum CN gate one can apply an rf $`\pi `$-pulse with the frequency (3). In this case, the target nuclear spin changes its direction if the electron spin of the neighbor ion did not change its state, i.e., if the control nuclear spin was in the excited state. The final measurement of the nuclear state can be implemented using MRFM in the same way as the measurement of the initial nuclear states.
## IV Conclusion
We have described an MRFM nuclear spin quantum computer using singly ionized tellurium-125 donors placed near the surface of silicon-28. Our proposal relies on the expected advances in MRFM, which promise the detection of a single electron spin, and on further developments in silicon technology.
Acknowledgments
We thank P.C. Hammel for valuable discussions. This work was supported by the Department of Energy under contract W-7405-ENG-36 and by the National Security Agency.
Figure captions
Fig. 1: A diagram of the proposed quantum computer. The circles indicate <sup>125</sup>Te<sup>+</sup> ions implanted near the surface of the silicon substrate; $`I`$ is the nuclear spin; $`S`$ is the electron spin (electron spins are shown in their ground states); $`P`$ is the ferromagnetic particle (vibrating or non-vibrating); $`d`$ = 15 nm; $`a`$ = 5 nm.
# Steepening of Afterglow Decay for Jets Interacting with Stratified Media
## 1 Introduction
In a recent paper, Chevalier and Li (1999) pointed out that some of the GRB afterglow light-curves are best modeled when the density of the circum-burst medium is taken to fall off as $`r^{-2}`$ (this is referred to as the wind model). These afterglows show no evidence for a jet, i.e. their light-curves follow a power-law decline without any break. This is puzzling since collimated outflows are expected in the collapsar model for GRBs (MacFadyen, Woosley & Heger 2000). We offer a possible explanation for this puzzle by showing that the light-curve resulting from the interaction of a jet with a pre-ejected wind falls off as a power-law whose index changes very slowly with time.
We carry out a detailed modeling of the multi-wavelength afterglow flux data for GRB 990510, which provides the best evidence for a jet propagation in a uniform density medium (Harrison et al. 1999, Stanek et al. 1999), to show that effects associated with a finite jet opening-angle are insufficient to explain the observed rapid steepening of the light-curve.
In §2 we calculate the propagation of a jet in a stratified medium and in §3 we describe the calculation of the synchrotron emission and afterglow light-curve.
## 2 Dynamics of Expanding Jets
The dynamical evolution of jets and their synchrotron emission have been previously investigated by a number of people, e.g. Rhoads (1999), Panaitescu & Mészáros (1999), Sari, Piran & Halpern (1999), Moderski, Sikora, & Bulik (2000), Huang et al. (2000). The evolution of the Lorentz factor ($`\mathrm{\Gamma }`$) can be calculated from the following set of equations
$$\frac{dM_1}{dr}=2\pi Ar^{2-s}(1-\mathrm{cos}\theta ),$$
(1)
$$\frac{d\theta }{dr}=\frac{1}{fr(\mathrm{\Gamma }^2-1)^{1/2}}+\frac{\mathrm{\Theta }-\theta }{r},$$
(2)
$$M_0\mathrm{\Gamma }+M_1(\mathrm{\Gamma }^2-1)=M_0\mathrm{\Gamma }_0,$$
(3)
where $`\theta `$ is the half-opening angle of the jet, $`M_0`$ and $`\mathrm{\Gamma }_0`$ are the initial mass and Lorentz factor of the ejecta, $`M_1`$ is the swept-up mass, $`\rho (r)=Ar^{-s}`$ is the density of the circum-stellar medium, and $`f=c/c_s`$ is the ratio of the speed of light to that of the jet sideways expansion; $`f`$ is a parameter of order unity whose effect can be absorbed in $`\mathrm{\Gamma }_0`$, and which has little effect on the light-curve. $`\mathrm{\Theta }`$ is the angle between the velocity vector at the jet edge and the jet axis (in the lab frame) and is determined by the modification of the particle trajectory due to the sideways expansion. The last equation above expresses the conservation of energy and it applies to an adiabatic shock when the heating of the original baryonic material of rest mass $`M_0`$ by the reverse shock is ignored.
The above equations can be combined and rewritten in the following non-dimensional form which is applicable for relativistic as well as non-relativistic jet dynamics
$$\frac{dy_1}{dx}=-\frac{x^{2-s}(y_1^2-\mathrm{\Gamma }_0^{-2})^2y_2^2}{2y_1-y_1^2-\mathrm{\Gamma }_0^{-2}},$$
(4)
$$\frac{dy_2}{dx}=\frac{1}{f\theta _0\mathrm{\Gamma }_0x(y_1^2-\mathrm{\Gamma }_0^{-2})^{1/2}}+\frac{\mathrm{\Theta }-\theta }{x\theta _0},$$
(5)
where $`x=r/R_{da}`$, $`y_1=\mathrm{\Gamma }/\mathrm{\Gamma }_0`$, $`y_2=\theta /\theta _0`$, and
$$R_{da}=\left(\frac{E}{\pi Ac^2\theta _0^2\mathrm{\Gamma }_0^2}\right)^{1/(3-s)},$$
(6)
is the deceleration radius. $`E=M_0\mathrm{\Gamma }_0c^2`$ is the energy in the explosion and $`\theta _0`$ is the initial half-opening angle of the jet. The above equations show that for the wind and the uniform ISM models $`\mathrm{\Gamma }\propto t_{obs}^{-1/4}`$ & $`t_{obs}^{-3/8}`$, respectively, as long as $`\mathrm{\Gamma }\gg \theta _0^{-1}`$, where $`t_{obs}=\int dt(1-v)`$ is the observer time, $`t`$ being the lab frame time and $`v`$ the jet velocity in units of $`c`$.
Equations (4) and (5) are solved, subject to the boundary conditions $`y_1=y_2=1`$ for $`x\ll 1`$, to determine $`\mathrm{\Gamma }`$ and $`\theta `$ as functions of $`r`$. For a relativistic jet with $`\mathrm{\Theta }=\theta `$, i.e. fluid velocity in the radial direction, equation (5) reduces to
$$\frac{dy_1}{dx}=-\frac{1}{2}x^{2-s}y_1^3y_2^2,\qquad \frac{dy_2}{dx}=\frac{1}{f(\theta _0\mathrm{\Gamma }_0)xy_1}.$$
(7)
The solution of the equations (4) and (5) is a two-parameter family of functions, however in the relativistic case the solution depends only on the product $`\theta _0\mathrm{\Gamma }_0`$.
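This family is straightforward to generate numerically. The sketch below integrates Eqs. (4) and (5) (with $`\mathrm{\Theta }=\theta `$) for one illustrative choice of $`\mathrm{\Gamma }_0`$, $`\theta _0`$ and $`s`$, and recovers the slow drift of $`\alpha _1`$ shown in Fig. 1; all parameter values here are assumptions made only for the example:

```python
import numpy as np
from scipy.integrate import solve_ivp

G0, th0, f, s = 300.0, 0.1, 1.0, 0.0     # illustrative jet parameters

def rhs(x, y):
    y1, y2 = y
    dy1 = -x**(2 - s) * (y1**2 - G0**-2)**2 * y2**2 / (2*y1 - y1**2 - G0**-2)
    dy2 = 1.0 / (f * th0 * x * np.sqrt((G0 * y1)**2 - 1.0))
    return [dy1, dy2]

x = np.logspace(-2, 1.1, 1500)           # stop while the jet is still relativistic
sol = solve_ivp(rhs, (x[0], x[-1]), [1.0, 1.0], t_eval=x, rtol=1e-8)
G = G0 * sol.y[0]
v = np.sqrt(1.0 - 1.0 / G**2)
t_obs = np.cumsum((1.0 / v - 1.0) * np.gradient(x))   # t_obs = int dt (1 - v)
a1 = -np.gradient(np.log(G - 1.0), np.log(t_obs))
print(a1[::150])   # ~0 while coasting, then ~(3-s)/(8-2s), drifting upward
```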
One can solve equation (7) approximately, ignoring the very early time behavior, to determine the time when the sideways expansion significantly alters the jet dynamics. The two relations in equation (7) can be combined to yield a first order differential equation for $`y\equiv y_1y_2`$ which is given by
$$\frac{dy}{d\xi }=-\frac{y^3}{2}+\frac{1}{\eta \xi },$$
(8)
with $`\eta =f(3-s)(\theta _0\mathrm{\Gamma }_0)`$, a constant, and $`\xi =x^{3-s}/(3-s)`$. An approximate solution to this equation is
$$y\approx \frac{1}{2\xi ^{1/2}}+\left(\frac{2}{\eta \xi }\right)^{1/3}.$$
(9)
Thus, $`y\propto \mathrm{\Gamma }\theta `$ decreases monotonically with radius or time. The transition to jet sideways expansion starts when the two terms in the above equation become equal, i.e. $`\xi \approx (\eta /16)^2`$, and lasts for an interval in $`\xi `$ over which $`y`$ decreases by a factor of $`3`$, or $`x`$ increases by a factor of $`3^{3/(3-s)}`$. The Lorentz factor continues to fall during the transition by a factor of a few. Therefore, the transition time divided by the time at the start of the transition (in the observer frame), during which $`\alpha _1\equiv -d\mathrm{ln}(\mathrm{\Gamma }-1)/d\mathrm{ln}t_{obs}`$ increases from $`(3-s)/(8-2s)`$ to approximately $`1/2`$, is approximately $`9\times 3^{3/(3-s)}`$. The solution for $`y_1`$ and $`y_2`$ can be obtained by inserting the expression for $`y`$ into equation (7). However, $`y_1`$ and $`y_2`$ determined this way have a much larger error than $`y`$ and should not be used for any serious calculation.
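The quality of this approximation is easy to verify by integrating Eq. (8) directly. In the sketch below, $`\eta =90`$ (corresponding, e.g., to $`f=1`$, $`s=0`$, $`\theta _0\mathrm{\Gamma }_0=30`$, an illustrative choice) and the integration starts at $`\xi _0=1/(3-s)`$, i.e. $`x=1`$, with $`y=1`$:

```python
import numpy as np
from scipy.integrate import solve_ivp

eta, s = 90.0, 0
xi0 = 1.0 / (3 - s)
sol = solve_ivp(lambda xi, y: [-y[0]**3 / 2 + 1.0 / (eta * xi)],
                (xi0, 1e4), [1.0], dense_output=True, rtol=1e-9)

for xi in (1.0, (eta / 16)**2, 10 * (eta / 16)**2, 100 * (eta / 16)**2):
    approx = 1 / (2 * np.sqrt(xi)) + (2 / (eta * xi))**(1 / 3)
    print(f"xi={xi:9.1f}: numerical y={sol.sol(xi)[0]:.4f}, Eq.(9) y={approx:.4f}")
```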
We solve equations (4) and (5) numerically and show the results for $`x(t_{obs})`$ and $`\alpha _1(t_{obs})`$ in Figure 1. Note that the change of $`\alpha _1`$ from one asymptotic value, corresponding to spherical shell expansion, to another, when sideways expansion is well underway, takes a long time; the ratio of the final to the initial time for a change in $`\alpha _1`$ of 0.1 is $`\sim 10^2`$ for a uniform ISM, whereas for $`s=2`$ the ratio is $`\sim 10^3`$. For the parameters chosen here $`\alpha _1=0.5`$ when $`\mathrm{\Gamma }`$ is of order a few. In the non-relativistic phase of the jet expansion $`\alpha _1=1.2`$, as for a Sedov-Taylor spherical shock wave.
## 3 Synchrotron Emission from Relativistic Jets
The synchrotron spectrum in the co-moving frame is taken to be a sequence of power-laws with breaks at the self-absorption, synchrotron peak, and cooling frequencies, as presented in Sari, Piran & Narayan (1998); these frequencies can be found in e.g. Panaitescu & Kumar (2000). All of our numerical results, unless otherwise stated, are obtained by integrating the emission over the equal arrival time surface. Ignoring the radial structure of the jet, the flux received by an observer located on the jet axis is given by
$$f_\nu (t_{obs})=\frac{1}{8\pi d^2}\int _{r_{min}}^{r_{max}}\frac{P_{\nu ^{}}^{}(r)}{\gamma ^3[1-v\mathrm{cos}\psi (r,t_{obs})]^2}\frac{dr}{r},$$
(10)
where $`P_{\nu ^{}}^{}`$ is the co-moving power per unit frequency at $`\nu ^{}=\gamma (1-v\mathrm{cos}\psi )\nu `$, $`r\mathrm{cos}\psi =ct-ct_{obs}`$, and $`r_{min}`$ and $`r_{max}`$ are solutions of
$$ct(r_{max})-r_{max}=ct(r_{min})-r_{min}\mathrm{cos}\theta (r_{min})=ct_{obs}.$$
(11)
We ignore the angular integration when discussing the analytical calculation of the observed flux and its power-law decline with time. The observed flux at a frequency that is greater than both the cooling frequency, $`\nu _c`$, and the synchrotron peak, $`\nu _m`$, is proportional to
$$f_\nu \propto t_{obs}^{\frac{1}{2}(4-s)-\frac{1}{4}sp}\mathrm{\Gamma }^{\frac{1}{2}(p+2)(4-s)}\mathrm{min}\{(\theta _0\mathrm{\Gamma }_0)^{-2},y^2\}.$$
(12)
At early times, when $`\mathrm{\Gamma }\gg \theta ^{-1}`$ and $`\mathrm{\Gamma }\propto t_{obs}^{-(3-s)/(8-2s)}`$, the flux decays as $`t_{obs}^{-(3p-2)/4}`$. At late times, when $`\mathrm{\Gamma }\theta \lesssim 1`$, the power-law index for the flux is $`\beta \equiv -d\mathrm{ln}f_\nu /d\mathrm{ln}t_{obs}=(4-s)[\alpha _1(p+2)-1]/2+sp/4+\alpha _2`$, where $`\alpha _2\equiv -2d\mathrm{ln}y/d\mathrm{ln}t_{obs}`$.
There are two effects that determine the evolution of $`\beta `$. One of them, the edge effect, is purely geometrical and results from the angular opening $`\mathrm{\Gamma }^{-1}`$ of the relativistic beaming cone becoming larger than the jet opening angle $`\theta `$, i.e. the observer “sees” the edge of the jet. The increase of $`\beta `$ resulting from it is $`\alpha _2\lesssim (3-s)/(4-s)`$; $`\alpha _2`$ decreases with time and therefore the jump in $`\beta `$ is smaller for larger $`\theta _0`$. The dimensionless time for $`\beta `$ to increase by $`\alpha _2`$ depends on the angular position of the observer w.r.t. the jet axis and is approximately the ratio of the time when the observer sees the far edge of the jet to the time when the near side of the jet becomes visible. This time is given by
$$R_{t_e}=\left[\frac{\theta _0+\varphi _0}{\theta _0-\varphi _0}\right]^{(8-2s)/(3-s)}=\left[\frac{1+P_{\varphi _0}^{1/2}}{1-P_{\varphi _0}^{1/2}}\right]^{(8-2s)/(3-s)},$$
(13)
where $`P_{\varphi _0}`$ is the probability that the observer lies within an angle $`\varphi _0`$ of the jet axis. For $`P_{\varphi _0}=0.25`$, $`R_{t_e}`$ is 18.7 (81) for $`s=0`$ (2), and during this time $`\beta `$ increases by approximately 0.7 (0.4). The dependence of $`R_{t_e}`$ on $`\varphi _0`$ becomes much weaker when the emission is integrated over the equal arrival time surface (Figure 2). This is because the effect of angular integration is to smear the jet-edge by an angle $`1/\mathrm{\Gamma }\sim \theta _0/2`$, which sets the minimum value of $`R_{t_e}`$ to be about 10 (10<sup>2</sup>) for uniform (wind) models.
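These numbers follow directly from Eq. (13); a two-line check:

```python
# R_te from Eq. (13) for P_phi0 = 0.25, i.e. phi0 = theta0/2.
for s in (0, 2):
    Rte = 3.0 ** ((8 - 2 * s) / (3 - s))   # (1 + sqrt(P))/(1 - sqrt(P)) = 3
    print(f"s={s}: R_te = {Rte:.1f}")       # 18.7 and 81
```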
The other effect which leads to a steepening of the afterglow decay is dynamical and is caused by the lateral spreading of the jet. During the relativistic phase the increase of $`\beta `$ from the sideways expansion is $`\delta \beta =(p+2)(4-s)\delta \alpha _1/2+\delta \alpha _2`$; $`\delta \alpha _1`$ and $`\delta \alpha _2`$ can be read from Figure 1. Since $`\alpha _1`$ does not asymptote to 0.5, $`\beta `$ does not reach $`p`$ during the relativistic sideways expansion of the jet. (It should be noted that the asymptotic behavior $`\beta \approx p`$ for $`s=0`$ (Rhoads 1999) is achieved only for extremely narrow jets, $`\theta _0\lesssim 1^o`$, so that the jet remains relativistic for a sufficiently long time after it starts expanding sideways. It nevertheless serves as a useful, quick way of estimating $`p`$ approximately from the late time light-curve, when $`\beta `$ is no longer increasing.) The value of $`\beta `$ does, however, approach $`p`$, because $`\delta \alpha _1\approx 1/(8-2s)`$ sometime before the jet becomes non-relativistic and $`\alpha _2\approx 0`$ at this time, thereby giving $`\beta \approx p`$ (see the expression for $`\beta `$ above and Figures 1 & 2); $`\beta `$ can exceed $`p`$, as can be seen in Figure 2, however the decrease in $`\alpha _2`$ during the mildly relativistic phase prevents $`\beta `$ from getting much larger than $`p`$. This result can be extended to any observing frequency $`\nu >\nu _m`$ after an appropriate modification of equation (12). For instance, to consider the case of $`\nu _c>\nu >\nu _m`$ the right side of the equation should be multiplied by a factor of $`(t_{obs}\mathrm{\Gamma }^2)^{(1-3s/4)}`$, which has little effect on the evolution of $`\beta `$. The time scale for the increase in $`\beta `$ due to sideways expansion is of order $`10^2`$ ($`10^3`$) for $`s=0`$ (2) (see Figure 2). Therefore this effect is smaller than that resulting from seeing the jet edge, and it extends over a much longer time.
To conclude, we wish to emphasize that for most jets propagating in a uniform ISM we are likely to see an increase of $`\beta `$ of only 0.6–0.9; the remainder of the increase takes place on a long time scale, and thus is hard to detect. For jets in a windy medium, $`s=2`$, $`\beta `$ changes by less than about $`0.5`$ and the transition time is $`R_{t_e}\sim 10^3`$. Such a gradual increase of the afterglow light-curve power-law index is extremely difficult to detect (see Figure 2). For instance, if the edge of the jet becomes visible at $`t_{obs}\sim 1`$ day, the difference in the optical flux at the end of 10 days with and without a jet is $`\approx 0.25`$ mag, which can be easily missed. Thus, the GRBs studied by Chevalier and Li (1999), which show evidence for the wind model, could in fact have had a collimated ejection of material.
### 3.1 The Afterglow of GRB 990510
The optical emission of the afterglow of GRB 990510 was measured in the V, R and I bands between 0.15 and 7 days after the burst and showed the power-law index of the light-curve, $`\beta `$, to have increased from $`0.82\pm 0.02`$ to $`2.18\pm 0.05`$ (Harrison et al. 1999) or from $`0.76\pm 0.01`$ to $`2.40\pm 0.02`$ (Stanek et al. 1999) during a dimensionless time $`R_{t_e}\approx 30`$ which, as described previously, is not possible to obtain through the effects of the jet sideways expansion alone. Therefore there must be some contribution to the light-curve steepening from the passage of one (or both) of the spectral breaks: the synchrotron peak $`\nu _m`$ and the cooling frequency $`\nu _c`$.
In Figure 3 we show a comparison between the light-curves of GRB 990510 in the $`V`$, $`R`$, $`I`$ bands and the 8.7 GHz radio data, with a model where the cooling frequency $`\nu _c`$ crosses the optical band at $`t_{obs}\approx 1`$ day. The steepening of the light-curve has little dependence on the observing band because the ratio of the largest to the smallest optical wavelength is $`\approx 1.5`$. Moreover, the integration over angle spreads the steepening of $`\beta `$ in time, making it nearly achromatic. An increase of $`\beta `$ by $`\approx 0.8`$ is caused by the jet edge and the sideways expansion, and an increase of 0.25 results from the passage of $`\nu _c`$ through the observing band. A further increase of $`\beta `$ of $`\approx 0.15`$ is caused by the passage of $`\nu _m`$ through the observing band at $`t_{obs}\approx 0.03`$ day (see lower panel of Figure 3); the transition time for $`\beta `$ to increase by $`(3p-1)/4`$ due to the $`\nu _m`$ crossing is about a decade in the observer frame as a result of the integration over the equal arrival time surface, hence one should be careful in deducing $`p`$ from $`\beta `$ at early times. All these together give rise to a light-curve that is consistent with the data. The model is also consistent with the HST $`V`$-band observation carried out about a month after the burst (Fruchter et al. 1999). The parameters for the fit are given in the caption of Fig. 3; the fit yields an energy in the burst of $`2\times 10^{49}`$ erg. Correcting for the radiative losses, the energy in the burst increases by a factor of a few to $`\lesssim 10^{50}`$ erg. We estimate the uncertainty in the model parameters by varying them in such a way that the numerically calculated light-curve lies within 3-$`\sigma `$ of the observed data points. We find the uncertainty in the jet opening angle and the burst energy to be a factor of two, and $`ϵ_e`$, $`n`$ and $`ϵ_B`$ are found to be uncertain by factors of about 4, 40 and 7, respectively; we note that the radio observations are very important in constraining the model parameters. The electron index $`p`$ is constrained by the observed $`\beta `$ before and after the $`\approx 1`$ day break; the error in $`p`$ is $`\approx 5\%`$.
The optical emission of the afterglow of GRB 990510 can also be explained by a model where the synchrotron peak frequency crosses the observed band at $`\approx 0.1`$ day. Its effect on $`\beta `$ persists for up to $`\approx 1`$ day and yields an increase of $`\beta `$ of $`\approx 0.5`$ during the early observations. The parameters for the second model differ somewhat from the one described above (see Figure 3). In particular, $`ϵ_e`$ is larger by a factor of two, the energy per solid angle is smaller by a factor of two, and $`\theta _0`$ is $`\approx 20\%`$ larger.
## 4 Conclusions
One of the main results of this work is to show that afterglows from well collimated Gamma-Ray Burst remnants going off in a medium with density decreasing as $`r^{-2}`$ show little evidence for light-curve steepening due to the jet edge and sideways expansion. This could explain the lack of breaks in the afterglows of GRB 980326 and GRB 980519, which Chevalier & Li (1999) found to offer support for the wind model. Jets can perhaps be detected by the measurement of time dependent polarization.
In a collimated outflow the sharpest break in the light-curve is produced in a uniform density circum-stellar medium, and is associated with the edge of the jet coming within the relativistic beaming cone (the edge effect). The magnitude of this break is $`\approx 0.7`$ (0.4) for a uniform ISM (wind model) and occurs over about 1 decade (2 decades) in time. Further steepening of the light-curve, associated with the sideways expansion of the jet, occurs on a much longer time scale of $`R_{t_e}\sim `$ 10<sup>2</sup> (10<sup>4</sup>), i.e. weeks to months.
The power-law index for the light-curve of GRB 990510 increased between days 0.8 and 3 by about 1.35. This is too large and too fast to result from jet edge & sideways expansion effects. However, the observations can be explained if either the cooling or the synchrotron peak frequency passed through the observing band at about 1 or 0.1 day, respectively. Models that are consistent with both the optical and radio data of this afterglow have an opening angle of $`\approx 5^o`$ and an energy in the explosion $`\lesssim 10^{50}`$ erg (see Figure 3).
For the afterglow of GRB 990123 the power-law index of the light-curve increased by 0.55 between days 1.5 and 3, which can be explained by the edge effect alone (Mészáros & Rees 1999).
We thank Peter Mészáros and Vahe Petrosian for useful discussions.
REFERENCES
Beuermann, K., Reinsch, K., & Hessman, F. 1999, GCN #331
Chevalier, R.A., & Li, Z.-Y. 1999, ApJ, 520, L29
Fruchter, A. et al. 1999, GCN #386
Harrison, F., et al. 1999, ApJ, 523, L121
Huang, Y., Dai, Z., & Lu, T. 2000, A&A, submitted (astro-ph/0002433)
Kulkarni, S. et al. 1999, Nature, 398, 389
Marconi, G. et al. 1999, GCN #329, #332
MacFadyen, A., Woosley, S.E., & Heger, A. 2000, ApJ, submitted (astro-ph/9910034)
Mészáros, P., & Rees, M.J. 1999, MNRAS, 306, L39
Moderski, R., Sikora, M., & Bulik, T. 2000, ApJ, 529, 151
Panaitescu, A. & Mészáros, P. 1999, ApJ, 526, 707
Panaitescu, A. & Kumar, P. 2000, astro-ph/0003246
Pietrzynski, G. & Udalski, A. 1999, GCN #319, #328
Rhoads, J. 1999, ApJ, 525, 737
Sari, R., Piran, T., & Halpern, J. 1999, ApJ, 519, L17
Sari, R., Piran, T., & Narayan, R. 1998, ApJ, 497, L17
Stanek, K., Garnavich, P., Kaluzny, J., Pych, W., & Thompson, I. 1999, ApJ, 522, L39
# Hot nuclear matter in the modified quark-meson coupling model with quark-quark correlations
## I Introduction
The MQMC model has been recently used to study cold and hot nuclear matter. In this model, nucleons are assumed to be nonoverlapping MIT bags interacting through scalar $`\sigma `$ and vector $`\omega `$ mean fields coupled to the quarks themselves. In analogy with the nontopological soliton model, the bag parameter is assumed to decrease when the scalar mean field $`\sigma `$ increases, which makes the bag parameter medium- or density-dependent. However, as a result of introducing this medium-dependent bag parameter, it is found that as the baryonic density $`\rho _B`$ increases, the nucleon bag radius increases. At some value of $`\rho _B`$, the bags start to overlap. Since the MQMC model assumes that the bags do not overlap, the use of this model has been limited to small and moderate baryonic densities.
One way to extend the model is to include short-range quark-quark correlations which become important when the bags overlap at high baryonic densities. These correlations are introduced by adding extra repulsive scalar and vector contact forces between the quarks in the overlapping bags to reduce the overlapping domain between the nucleons. We will follow Saito et al. and introduce these correlations in a simple geometrical way by defining a critical rigid-ball nucleonic radius $`R_c`$ which, assuming close packing, can be related to the baryonic density by $`R_c=\left(1/(4\sqrt{2}\rho _B)\right)^{1/3}`$. Hence, for a given nuclear density $`\rho _B`$, the nucleon bags are assumed to overlap only when the bag radius $`R`$ is larger than $`R_c`$. When the nucleon bags overlap, the quarks in the bags correlate with each other through a repulsive potential which reduces the overlap by shrinking the size of the bags. Within this model, we shall study the quark-quark correlations for hot nuclear matter as well as their effects on the phase transition from the hadronic phase to the quark-gluon-plasma (QGP) phase. The QGP is considered as an ideal gas of noninteracting quarks and gluons inside a bubble or bag with bag parameter $`B`$. It is interesting to study the variation of the phase transition with the strength of the quark-quark correlation and to examine the possibility that the bag parameter for the QGP is also medium-dependent, as in the MQMC model.
The outline of the paper is as follows. In Sect. II, the quark-quark correlations for the overlapping bags are generalized to the case of hot nuclear matter. In Sect. III we introduce a simple model for the QGP. Finally, Sect. IV is devoted to our results and conclusions.
## II Quark-quark correlations
In dense nuclear matter, the nucleons are expected to overlap and the quarks in one nucleon correlate with the quarks in another. This correlation depends basically on how much the nucleons overlap with each other. The probability $`P(R_c/R)`$ for two nucleons, each of radius $`R`$, to overlap can be estimated, using a simple geometrical approach, to be
$`P\left(\frac{R_c}{R}\right)=\left[1-\frac{3}{4}\left(\frac{2R_c}{R}\right)+\frac{1}{16}\left(\frac{2R_c}{R}\right)^3\right]\theta \left(\frac{R_c}{R}\right)\theta \left(1-\frac{R_c}{R}\right).`$ (1)
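As an illustration of how quickly the overlap switches on with density, the sketch below evaluates Eq. (1) together with the close-packing radius defined in the Introduction; the bag radius $`R=0.8`$ fm used here is just a representative value:

```python
import numpy as np

rho0 = 0.17                                   # normal nuclear density [fm^-3]

def R_c(rho_B):                               # close-packing radius [fm]
    return (1.0 / (4.0 * np.sqrt(2.0) * rho_B)) ** (1.0 / 3.0)

def P_overlap(x):                             # Eq. (1), x = R_c / R
    p = 1.0 - 0.75 * (2.0 * x) + (2.0 * x) ** 3 / 16.0
    return p if 0.0 < x < 1.0 else 0.0

R = 0.8                                       # representative bag radius [fm]
for n in (1, 2, 4, 8):
    rc = R_c(n * rho0)
    print(f"rho_B = {n} rho0: R_c = {rc:.2f} fm, P = {P_overlap(rc / R):.3f}")
```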
Since the quark-quark correlations are of short range it is reasonable to approximate them by a contact interaction . In the mean-field approximation, the Dirac equation for the quark field inside a nucleon bag is given by
$`\left[i\gamma \cdot \partial -(m_q-g_\sigma ^q\sigma +f_s^q<\overline{\psi }_q\psi _q>)-(g_\omega ^q\omega +f_v^q<\psi _q^{\dagger }\psi _q>)\beta \right]\psi _q=0`$ (2)
where $`m_q`$ is the current quark mass, $`f_{s(v)}^q`$ is the coupling constant for scalar (vector)-type short-range correlations, while $`<\overline{\psi }_q\psi _q>`$ and $`<\psi _q^{\dagger }\psi _q>`$ are the average values of the quark scalar density and quark density. The latter, following Ref. , are approximated by $`<\overline{\psi }_q\psi _q>=\frac{m_\sigma ^2}{g_\sigma }\sigma `$ and $`<\psi _q^{\dagger }\psi _q>=3\rho _B`$. In the present work, as suggested by Ref. , the correlation potentials are taken as
$`f_s^q<\overline{\psi }_q\psi _q>=\alpha P\left(R_c/R\right)\sigma ,`$ (3)
and
$`f_v^q<\psi _q^{}\psi _q>=\beta P\left(R_c/R\right)\rho _B,`$ (4)
where $`\alpha `$ and $`\beta `$ are parameters used to control the strengths of the scalar and vector quark-quark correlations. Note that as defined here $`\alpha `$ is a dimensionless parameter, while $`\beta `$ has the dimensions of $`1/(\text{Energy})^2`$ . The coupling constants $`g_\sigma ^q`$ and $`g_\omega ^q`$ for the scalar and vector mean fields are determined by reproducing the properties of normal nuclear matter.
The single-particle quark and antiquark energies in units of $`R^{-1}`$ are given as
$`ϵ_\pm ^{n\kappa }=\mathrm{\Omega }^{n\kappa }\pm \left[g_\omega ^q\omega +f_v^q<\psi _q^{\dagger }\psi _q>\right]R,`$ (5)
where
$`\mathrm{\Omega }^{n\kappa }=\sqrt{x_{n\kappa }^2+R^2m_q^{*2}}`$ (6)
and $`m_q^{*}=m_q^0-g_\sigma ^q\sigma +f_s^q<\overline{\psi }_q\psi _q>`$ is the effective quark mass. The boundary condition at the bag surface is given by
$`i\gamma \cdot \widehat{n}\,\psi _q^{n\kappa }=\psi _q^{n\kappa },`$ (7)
which determines the quark momentum $`x_{n\kappa }`$ in the state characterized by specific values of $`n`$ and $`\kappa `$. The quark chemical potential $`\mu _q`$, assuming that there are three quarks in the nucleon bag, is determined through
$`n_q`$ $`=`$ $`3`$ (8)
$`=`$ $`3\sum _{n\kappa }\left[\frac{1}{e^{(ϵ_+^{n\kappa }/R-\mu _q)/T}+1}-\frac{1}{e^{(ϵ_{-}^{n\kappa }/R+\mu _q)/T}+1}\right].`$ (9)
The total energy from the quarks and antiquarks is
$`E_{\text{tot}}=3\sum _{n\kappa }\frac{\mathrm{\Omega }^{n\kappa }}{R}\left[\frac{1}{e^{(ϵ_+^{n\kappa }/R-\mu _q)/T}+1}+\frac{1}{e^{(ϵ_{-}^{n\kappa }/R+\mu _q)/T}+1}\right].`$ (10)
The bag energy is given by
$`E_{\text{bag}}=E_{\text{tot}}-\frac{Z}{R}+\frac{4\pi }{3}R^3B(\sigma ).`$ (11)
where $`B(\sigma )`$ is the bag parameter. The medium effects are taken into account for the bag parameter
$`B=B_0\mathrm{exp}\left(-\frac{4g_\sigma ^B\sigma }{M_N}\right)`$ (12)
where $`B_0`$ corresponds to a free nucleon and $`g_\sigma ^B`$ is an additional parameter. The spurious center-of-mass momentum of the bag is subtracted to obtain the effective nucleon mass
$`M_N^{*}=\sqrt{E_{\text{bag}}^2-<p_{\text{cm}}^2>}`$ (13)
where
$`<p_{\text{cm}}^2>={\displaystyle \frac{<x^2>}{R^2}}`$ (14)
and
$`<x^2>=3\sum _{n\kappa }x_{n\kappa }^2\left[\frac{1}{e^{(ϵ_+^{n\kappa }/R-\mu _q)/T}+1}+\frac{1}{e^{(ϵ_{-}^{n\kappa }/R+\mu _q)/T}+1}\right].`$ (15)
The bag radius $`R`$ is obtained by minimizing the effective nucleon mass with respect to the bag radius
$`\frac{\partial M_N^{*}}{\partial R}=0.`$ (16)
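The stabilization expressed by Eqs. (11)–(16) is easy to reproduce at zero temperature in free space, where only the three ground-state quarks (lowest bag eigenvalue $`x_0\approx 2.04`$ for massless quarks) contribute. The sketch below uses free-space values of the kind obtained in such fits, $`B_0^{1/4}\approx 188`$ MeV and $`Z\approx 2.03`$; treat these as illustrative assumptions, not the full fitted MQMC parameter set at finite density:

```python
import numpy as np
from scipy.optimize import minimize_scalar

hc = 197.33                      # hbar*c [MeV fm]
x0 = 2.0428                      # lowest bag eigenvalue, massless quark
B0 = 188.1 ** 4 / hc ** 3        # bag constant [MeV fm^-3] (illustrative)
Z  = 2.03                        # zero-point parameter (illustrative)

def M_eff(R):                    # Eqs. (11)-(14) at T = 0, free space
    E_bag = (3 * x0 - Z) * hc / R + 4 * np.pi / 3 * R ** 3 * B0
    p2_cm = 3 * (x0 * hc / R) ** 2          # <x^2>/R^2 for three quarks
    return np.sqrt(max(E_bag ** 2 - p2_cm, 0.0))

res = minimize_scalar(M_eff, bounds=(0.4, 1.2), method="bounded")
print(f"R = {res.x:.2f} fm, M_N* = {res.fun:.0f} MeV")   # ~0.6 fm, ~939 MeV
```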
The pressure is given by
$`P=\frac{1}{3}\frac{\gamma }{(2\pi )^3}\int d^3k\frac{k^2}{ϵ^{*}}(f_B+\overline{f}_B)+\frac{1}{2}m_\omega ^2\omega ^2-\frac{1}{2}m_\sigma ^2\sigma ^2,`$ (17)
where $`\gamma =4`$ is the spin-isospin degeneracy factor and $`f_B`$ and $`\overline{f}_B`$ are the Fermi-Dirac distribution functions for the nucleons and antinucleons
$`f_B=\frac{1}{e^{(ϵ^{*}-\mu _B^{*})/T}+1}`$ (18)
$`\overline{f}_B=\frac{1}{e^{(ϵ^{*}+\mu _B^{*})/T}+1},`$ (19)
with $`ϵ^{*}=\sqrt{k^2+M_N^{*2}}`$ and $`\mu _B^{*}=\mu _B-3\left[g_\omega ^q\omega +f_v^q<\psi _q^{\dagger }\psi _q>\right]`$ being the nucleonic effective energy and effective chemical potential, respectively. The chemical potential $`\mu _B`$ for a given density $`\rho _B`$ is determined self-consistently by the subsidiary constraint
$`\rho _B=\frac{\gamma }{(2\pi )^3}\int d^3k(f_B-\overline{f}_B)`$ (20)
with
$`\omega ={\displaystyle \frac{g_\omega }{m_\omega ^2}}\rho _B.`$ (21)
The scalar mean field $`\sigma `$ is determined by maximizing the pressure, $`\partial P/\partial \sigma =0`$, which yields the self-consistency condition (SCC) for the $`\sigma `$ field. Since the scalar-type correlation does not directly involve the $`\sigma `$ field, the SCC is not formally modified by it and is therefore identical to that found in our earlier work . The correlations do, however, affect the $`\sigma `$ field indirectly through the quark wave functions.
## III The Quark-Gluon Plasma Phase
In the QGP phase we assume that we have only $`u`$ and $`d`$ quarks confined inside a bag with bag parameter B. This parameter can be interpreted as the energy per unit volume needed to create a bubble or bag in which the noninteracting quarks and gluons are confined. The total baryonic density is given by
$`\rho _B=\frac{1}{3}\frac{\gamma _Q}{(2\pi )^3}\int d^3k\left[n_k(T)-\overline{n}_k(T)\right].`$ (22)
The quark and antiquark distribution functions are given by
$`n_k=\frac{1}{\mathrm{exp}[(k-\frac{1}{3}\mu _B)/T]+1},`$ (23)
$`\overline{n}_k={\displaystyle \frac{1}{\mathrm{exp}[(k+\frac{1}{3}\mu _B)/T]+1}},`$ (24)
respectively, where $`\mu _B`$ is the baryon chemical potential, and we have assumed that the quarks have a baryon number of $`1/3`$. At finite $`\rho _B`$, Eq.(22) is inverted to find the baryon chemical potential $`\mu _B`$. The pressure of the quark-gluon plasma is given by
$`P=-B+\frac{1}{3}\frac{\gamma _G}{2}\frac{\pi ^2T^4}{15}+\frac{1}{3}\frac{\gamma _Q}{(2\pi )^3}\int d^3k\,k\left[n_k(T)+\overline{n}_k(T)\right]`$ (25)
where $`k=|\stackrel{}{k}|`$ and $`\gamma _Q=12`$ for quarks and $`\gamma _G=16`$ for gluons. The thermodynamic conditions for phase equilibrium between the baryon-meson phase and the QGP phase are satisfied by assuming mechanical, chemical and thermal equilibrium between the two phases, namely $`P_{\text{MQMC}}=P_{\text{QGP}}`$ and $`\mu _{B,\text{MQMC}}=\mu _{B,\text{QGP}}`$ for a given $`T`$. These conditions determine the transition line between the hadron-meson phase and the QGP phase.
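A minimal numerical sketch of the QGP side of this matching is given below: it inverts Eq. (22) for $`\mu _B`$ at a given $`(T,\rho _B)`$ and evaluates the pressure, Eq. (25). The bag constant $`B^{1/4}=170`$ MeV is an assumed, illustrative value; in the two cases discussed next it would be replaced by $`B_0`$ or by $`B(\sigma )`$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

hc, gQ, gG = 197.33, 12.0, 16.0
B = 170.0 ** 4                     # bag constant [MeV^4] (illustrative)

def rho_B(muB, T):                 # Eq. (22) [fm^-3]
    f = lambda k: k**2 * (1/(np.exp((k - muB/3)/T) + 1)
                          - 1/(np.exp((k + muB/3)/T) + 1))
    return (gQ/3) / (2*np.pi**2) * quad(f, 0, muB + 50*T)[0] / hc**3

def pressure(muB, T):              # Eq. (25) [MeV fm^-3]
    f = lambda k: k**3 * (1/(np.exp((k - muB/3)/T) + 1)
                          + 1/(np.exp((k + muB/3)/T) + 1))
    Pq = (gQ/3) / (2*np.pi**2) * quad(f, 0, muB + 50*T)[0]
    Pg = gG * np.pi**2 * T**4 / 90          # (1/3)(gG/2)(pi^2 T^4 / 15)
    return (Pq + Pg - B) / hc**3

T = 100.0                                    # MeV
muB = brentq(lambda m: rho_B(m, T) - 2*0.17, 1.0, 3000.0)
print(f"mu_B = {muB:.0f} MeV, P = {pressure(muB, T):.1f} MeV fm^-3")
```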
We consider two cases. In the first case, the bag parameter in the QGP phase is considered as medium-independent and has the same value as that for a free nucleon. This case may be appropriate for most experimental situations aiming at producing the QGP in small chunks of hot nuclear matter produced in heavy ion collisions. In the second case, the bag parameter in the QGP is assumed to be medium-dependent, and is taken to be equal to the bag parameter of the nucleon in the MQMC model at the same density. This corresponds to the idealized case of producing the QGP in a bubble in infinite hot nuclear matter and may approximately apply to the production of the QGP in central collisions between very massive nuclei.
## IV Results and Discussions
We have studied nuclear matter at finite temperature using the MQMC model which takes the medium-dependence of the bag parameter of the nucleon into account. We choose a direct coupling of the bag parameter to the scalar mean field $`\sigma `$ as given in Eq.12. The bag parameters are those given in Ref. , where they are chosen to reproduce the free nucleon mass $`M_N`$ at its experimental value of 939 MeV and a bag radius $`R_0=0.60`$ fm. For $`g_\sigma ^q=1`$, the values of the vector meson coupling and the parameter $`g_\sigma ^B`$, as fitted from the normal saturation properties of nuclear matter, are given as $`g_\omega ^2/4\pi =5.24`$ and $`(g_\sigma ^B)^2/4\pi =3.69`$. The current quark mass $`m_q`$ is taken equal to zero. For the short-range quark-quark correlation strengths we use values comparable to those in Ref. , which, in the present notation, correspond to $`\alpha \simeq 9`$ and $`\beta \simeq 34\text{GeV}^{-2}`$. The latter value of $`\beta `$ is needed to reproduce the empirical value of the energy per nucleon for symmetric nuclear matter in the high density region $`\rho _B/\rho _0=2.5`$-4, where $`\rho _0=0.17\text{fm}^{-3}`$ is normal nuclear density. We carried out calculations for the cases of $`\alpha =`$ 0, 5, 7 and 9 as well as $`\beta =`$ 0, 30 and 60 GeV<sup>-2</sup>. The scalar quark-quark correlations affect the nucleon’s size and mass while the vector quark-quark correlations determine the pressure and energy density of nuclear matter.
Fig. 1 displays isotherms of the effective mass $`M_N^{\ast }`$ vs the baryonic density $`\rho _B`$ for various strengths of the scalar quark-quark correlations. As already mentioned, the vector correlations have no effect on the mass. For low values of $`\alpha `$, $`M_N^{\ast }`$ shows the usual trend of decreasing with $`\rho _B`$. However, as $`\alpha `$ increases, $`M_N^{\ast }`$ tends to saturate and, for still larger values of $`\alpha `$, the effective mass even starts to increase slightly at high density, especially at the lower temperatures. This novel feature is quite interesting, since it is questionable whether the monotonic decrease of $`M_N^{\ast }`$ with density can continue unchecked to higher and higher densities.
Fig. 2 displays isotherms of $`R/R_0`$ vs the baryon density for several values of $`\alpha `$. Without correlations, i.e. $`\alpha =0`$, $`R/R_0`$ increases monotonically with $`\rho _B`$. However, when the correlations are introduced, $`R/R_0`$ starts to decrease, rather abruptly, when the bags start to overlap. This abruptness is due to the simple geometrical way in which the correlations are introduced. As $`\alpha `$ is increased further, $`R/R_0`$ decreases more steeply at high densities. The repulsive nature of the quark-quark correlations shrinks the bag size.
Fig. 3 displays the transition line between the baryon-meson phase and the QGP phase in the $`(T,\mu _B)`$ plane, while Fig. 4 displays it in the $`(T,\rho _B)`$ plane. This phase transition line is determined by equalizing the pressure $`P`$ and the chemical potential $`\mu _B`$ in both phases for a given temperature $`T`$. We have considered two cases for the bag parameter of the QGP bubble. In case I, the bag parameter is taken to be medium-independent and fixed at its free-space value. As the density $`\rho _B`$ is increased, the nucleon bags start to overlap and the transition line becomes sensitive to the quark-quark correlations for $`\mu _B`$ larger than about 950 MeV, corresponding to densities larger than about 2.5 times normal nuclear matter density. For temperatures $`T<60`$ MeV, the phase transition takes place at rather large densities and large chemical potentials. The quark-quark correlations are found to move the transition line to lower chemical potentials and lower densities. For cold nuclear matter the correlations can reduce $`\mu _B`$ from 1850 to 1450 MeV. The corresponding change in the $`(T,\rho _B)`$ plane is more dramatic, as can be seen by inspecting Fig. 4. The phase transition, at low temperatures, takes place at densities as high as 8$`\rho _0`$ without correlations, but this value is reduced to about 5$`\rho _0`$ for the strongest correlations considered. In case II, also shown in Figs. 3 and 4, we have used a medium-dependent bag parameter for the QGP bubble. This bag parameter is identical to the bag parameter $`B(\sigma )`$ used for the nucleonic bags in the hadronic phase, as given in Eq.12. This medium-dependence is appropriate for the production of a QGP bubble in infinite nuclear matter and may approximately apply to its production in the heart of the participant region in central collisions between very massive nuclei. In this case it is found that the phase transition from the baryon-meson phase to the QGP phase takes place at much lower densities, so that the nucleons do not overlap and the quark-quark correlations do not play a role in determining the transition line. The transition temperature falls rapidly with density, and the phase transition at low temperatures occurs at a comparatively low chemical potential $`\mu _B=950`$ MeV and a correspondingly low density $`\rho _B/\rho _0=1.35`$. This density is obviously too low for the production of the QGP in heavy ion collisions and is strictly appropriate only for infinite nuclear matter, as it does not include any finite size effects. It does, however, hint at the sizable reduction in the compression needed to produce the QGP in collisions between very heavy systems.
In conclusion, we have investigated the effect of short-range quark-quark correlations on the properties of hot nuclear matter and the phase transition to the QGP. We have found that these correlations cure the problem usually encountered in the MQMC model of a very large nucleonic bag radius. They also lead to the saturation of the effective mass at high densities. Moreover, these correlations affect the properties of the phase transition at low temperatures for the case of a medium-independent bag parameter for the QGP bubble in vacuum (case I). Such a situation arises in experiments attempting to produce the QGP in small finite hot nuclear systems. In such a case, the present results indicate that the phase transition occurs at very high densities, 5-8 times normal nuclear matter density. The only exceptions occur at very high temperatures, greater than 100 MeV, in which case the transition occurs at arbitrarily small densities. In case II we have used a medium-dependent bag parameter for the QGP bubble, and the phase transition is found to occur at much lower densities than in case I. The phase transition occurs before the nucleons overlap, and so the quark-quark correlations do not play a role in determining the transition line. This case, strictly speaking, corresponds to producing a QGP bubble in infinite nuclear matter, but may be approximately approached in central collisions involving two very heavy nuclei. The comparatively low compressions required for the phase transition in such collisions would thus offer the best chance of producing the QGP.
###### Acknowledgements.
Financial support by the Deutsche Forschungsgemeinschaft through the grant GR 243/51-1 is gratefully acknowledged.
# Patterns on liquid surfaces: cnoidal waves, compactons and scaling
## 1 Introduction
Liquid oscillations on bounded surfaces have been studied intensively, both theoretically \[1-3\] and experimentally \[4-6\]. The small-amplitude oscillations of incompressible drops maintained by surface tension are usually characterized by their fundamental linear modes of motion in terms of spherical harmonics \[1-3\]. Nonlinear oscillations of a liquid drop introduce new phenomena and more complicated patterns (higher resonances, solitons, compactons, breakup and fragmentation, fractal structures, superdeformed shapes) than can be described by a linear theory. Nonlinearities in the description of an ideal drop demonstrating irrotational flow arise from Bernoulli’s equation for the pressure field and from the kinematic surface boundary conditions . Computer simulations have been carried out for non-linear axial oscillations, and they are in very good agreement with experiments \[4-6\].
The majority of experiments show a rich variety of complicated shapes, many related to the spinning, breaking, fission and fusion of liquid drops. There are experiments and numerical simulations where special rotational patterns of circulation emerge: a running wave originates on the surface of the drop and then propagates inward. Recent results (superconductors , catalytic patterns , quasi-molecular spectra , numerical tests on higher order non-linear equations and analytical calculations on the non-compact real axis \[12-13\]) show shape-stable traveling waves for nonlinear systems with compact geometry. Recent studies showed that a similar one-dimensional analysis for the process of cluster emission from heavy nuclei and quasi-molecular spectra of nuclear molecules yields good agreement with experiment . Such solutions are stable and express to a good extent the formation and stability of patterns, clusters, droplets, etc. However, even though localised, they have neither compact support nor periodicity (except for some intermediate steps of the cnoidal solutions, ), which creates difficulties when analysing them on compact surfaces.
In the present paper we comment on the cnoidal-towards-soliton solutions investigated in , especially from the energy point of view. We introduce here a new nonlinear 3-dimensional dynamical model of the surface, in compact geometry (pools, droplets, bubbles, shells), inspired by , and we investigate the possibility of obtaining compacton-like solutions for this model. We also study the scale symmetries of such solutions.
The model in considers the nonlinear hydrodynamic equations of the surface of a liquid drop and shows their direct connection to KdV or MKdV systems. Traveling solutions that are cnoidal waves are obtained, and they generate multiscale patterns ranging from small harmonic oscillations (linearized model), to nonlinear oscillations, up to solitary waves. These non-axis-symmetric localized shapes are described by a KdV Hamiltonian system, too, which results as the second order approximation of the general Hamiltonian, the next correction beyond the linear harmonic shape oscillations. Such rotons were observed experimentally when the shape oscillations of a droplet became nonlinear .
## 2 Liquid drop cnoidal and soliton solutions from Hamiltonian approach
The dynamics governing one-dimensional surface oscillations of a perfect ($`\rho =`$const.), irrotational fluid drop (or bubble, shell) can be described by the velocity field $`\mathrm{\Phi }`$ and a corresponding Hamiltonian \[1-3,7,10,13\]. By expanding the Hamiltonian and dynamical equations in terms of a small parameter, i.e. the amplitude of the perturbation $`\eta `$ over the radius of the drop $`R_0`$, the usual linear theory is recovered in the first order. Higher order non-linear terms introduce deviations and produce large surface oscillations like cnoidal waves . These oscillations, under conditions of a rigid core of radius $`R_0-h`$ and non-zero angular momentum, transform into solitary waves. In the following, using the calculation developed in , we present the Hamiltonian approach to the nonlinear oscillations of liquid drops. This approach differs, however, from the nuclear liquid drop model point of view in , since we do not use here the nuclear interaction (shell corrections) responsible for the formation of different potential valleys.
The total hydrodynamic energy $`E`$ consists of the sum of the kinetic $`T`$ and potential $`U`$ energies of the liquid drop. The shape function is assumed to factorize, $`r(\theta ,\varphi ,t)=R_0(1+g(\theta )\eta (\varphi ,t))`$. All terms that depend on $`\theta `$ are absorbed in the coefficients of some integrals and the energy reduces to a functional of $`\eta `$ only. The potential energy is given by the surface energy $`U_S=\sigma (𝒜_\eta -𝒜_0)|_{V_0}`$, where $`\sigma `$ is the surface pressure coefficient, $`𝒜_\eta `$ is the area of the deformed drop, and $`𝒜_0`$ the area of the spherical drop, of constant volume $`V_0`$. The kinetic energy $`T=\rho \int _\mathrm{\Sigma }\mathrm{\Phi }\nabla \mathrm{\Phi }\cdot d\stackrel{}{S}/2`$, \[1-3,10,13\], the kinematic free surface boundary condition $`\mathrm{\Phi }_r=\partial _tr+(\partial _\theta r)\mathrm{\Phi }_\theta /r^2+(\partial _\varphi r)\mathrm{\Phi }_\varphi /r^2\mathrm{sin}^2\theta `$, and the boundary condition for the radial velocity on the inner surface $`\partial _r\mathrm{\Phi }|_{r=R_0-h}=0`$, , result in the expression
$`T={\displaystyle \frac{R_0^2\rho }{2}}{\displaystyle \int _0^\pi }{\displaystyle \int _0^{2\pi }}{\displaystyle \frac{R_0\mathrm{\Phi }\eta _t\mathrm{sin}\theta +\frac{1}{R_0}g\eta _\varphi \mathrm{\Phi }\mathrm{\Phi }_\varphi (1-\mathrm{sin}\theta )}{\sqrt{1+g_\theta ^2\eta ^2+g^2\eta _\varphi ^2}}}𝑑\theta 𝑑\varphi .`$ (1)
If the total energy, written in the second order in $`\eta `$, is taken to be a Hamiltonian $`H[\eta ]`$, the time derivative of any quantity $`F[\eta ]`$ is given by $`F_t=[F,H]`$. Defining $`F=\int _0^{2\pi }\eta (\varphi -Vt)𝑑\varphi `$, one obtains (, last reference)
$`{\displaystyle \frac{dF}{dt}}={\displaystyle \int _0^{2\pi }}\eta _t𝑑\varphi ={\displaystyle \int _0^{2\pi }}(2C_2\eta _\varphi +6C_3\eta \eta _\varphi -2C_4\eta _{\varphi \varphi \varphi })𝑑\varphi =0,`$ (2)
which leads to the KdV equation. Here $`𝒞_2=\sigma R_0^2(S_{1,0}^{1,0}+S_{0,1}^{1,0}/2)+R_0^6\rho V^2C_{2,1}^{3,1}/2`$, $`𝒞_3=\sigma R_0^2S_{1,2}^{1,0}/2+R_0^6\rho V^2(2S_{1,2}^{3,1}R_0+S_{2,3}^{5,2}+R_0S_{2,3}^{6,2})/2`$, $`𝒞_4=\sigma R_0^2S_{2,0}^{1,0}/2`$, with $`S_{i,j}^{k,l}=R_0^l\int _0^\pi h^lg^ig_\theta ^j\mathrm{sin}^k\theta d\theta `$. Terms proportional to $`\eta \eta _\varphi ^2`$ can be neglected since they introduce a factor $`\eta _0^3/L^2`$ which is small compared to $`\eta _0^3`$, i.e. it is in the third order. In order to verify the correctness of the above approximations, we present, for a typical soliton solution $`\eta (\varphi ,t)`$, some terms occurring in the expression of $`E`$, Fig. 1. All details of calculation are given in . Therefore, the energy of the non-linear liquid drop model can be interpreted as the Hamiltonian of the one-dimensional KdV equation. The coefficients in eq.(2) depend on two stationary functions of $`\theta `$ (the depth $`h(\theta )`$ and the transversal profile $`g(\theta )`$); hence, under the integration, they involve only a parametric dependence.
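Once the transverse profile $`g(\theta )`$ and the depth $`h(\theta )`$ are specified, the geometric coefficients $`S_{i,j}^{k,l}`$ (and hence $`C_2`$, $`C_3`$, $`C_4`$) reduce to one-dimensional quadratures. The sketch below evaluates them; the particular profiles used are illustrative assumptions, not fixed by the model.

```python
import numpy as np
from scipy.integrate import quad

def S(i, j, k, l, g, dg, h, R0=1.0):
    """S_{i,j}^{k,l} = R0^l * int_0^pi h^l g^i (g_theta)^j sin^k(theta) dtheta."""
    f = lambda th: h(th)**l * g(th)**i * dg(th)**j * np.sin(th)**k
    val, _ = quad(f, 0.0, np.pi)
    return R0**l * val

# Assumed illustrative profiles: g = sin^2(theta), constant depth h0.
h0 = 0.1
g  = lambda th: np.sin(th)**2
dg = lambda th: 2.0 * np.sin(th) * np.cos(th)   # dg/dtheta
h  = lambda th: h0

print(S(1, 0, 1, 0, g, dg, h),   # enters C_2
      S(2, 0, 1, 0, g, dg, h))   # enters C_4
```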
The KdV equation has the following cnoidal wave (Jacobi elliptic function) as an exact solution
$`\eta =\alpha _3+(\alpha _2-\alpha _3)sn^2\left(\sqrt{{\displaystyle \frac{C_3(\alpha _3-\alpha _2)}{12C_4}}}(\varphi -Vt)|m\right),`$ (3)
where $`\alpha _1,\alpha _2,\alpha _3`$ are constants of integration and $`m^2=(\alpha _3-\alpha _2)/(\alpha _3-\alpha _1)`$. This solution oscillates between $`\alpha _2`$ and $`\alpha _3`$, with a period $`T=2K(m)\sqrt{\frac{(\alpha _3-\alpha _2)C_3}{3C_4}}`$, where $`K(m)`$ is the period of the Jacobi elliptic function $`sn(x|m)`$. The parameter $`V`$ is the velocity of the cnoidal waves and $`\alpha _1+\alpha _2+\alpha _3=\frac{3(V-C_2)}{2C_3}`$. In the limit $`\alpha _1=\alpha _2=0`$ the solution eq.(3) approaches
$`\eta =\eta _0sech^2\left[\sqrt{{\displaystyle \frac{\eta _0C_3}{12C_4}}}(\varphi -Vt)\right],`$ (4)
which is the soliton solution of amplitude $`\eta _0`$. Small oscillations occur when $`\alpha _3\to \alpha _2`$, i.e. $`m\to 0`$ and $`T\to \pi /2`$. Consequently, the system has two limiting solutions, a periodic and a localized traveling profile, which deform into one another as the initial conditions and the velocity parameter $`V`$ are varied. The deformation from the $`l=5`$ cnoidal mode towards a soliton is shown in Figs. 2.
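Both limits are easy to evaluate numerically from Eqs. (3)-(4) using Jacobi elliptic functions. In the sketch below the $`\alpha _i`$, the ratio $`C_3/C_4`$ and the velocity $`V`$ are assumed illustrative values; note that scipy's `ellipj` takes the parameter $`m`$ (the squared modulus).

```python
import numpy as np
from scipy.special import ellipj

def cnoidal(phi, t, a1, a2, a3, C3_over_C4, V):
    """Cnoidal wave of Eq. (3), oscillating between a2 and a3."""
    m = (a3 - a2) / (a3 - a1)                  # squared modulus
    w = np.sqrt(C3_over_C4 * (a3 - a2) / 12.0)
    sn, cn, dn, ph = ellipj(w * (phi - V * t), m)
    return a3 + (a2 - a3) * sn**2

def soliton(phi, t, eta0, C3_over_C4, V):
    """Soliton limit of Eq. (4), reached when a1 = a2 = 0."""
    w = np.sqrt(eta0 * C3_over_C4 / 12.0)
    return eta0 / np.cosh(w * (phi - V * t))**2

phi = np.linspace(0.0, 2.0 * np.pi, 5)
print(cnoidal(phi, 0.0, -0.1, 1.2, 1.5, 1.0, 0.3))  # assumed parameters
print(soliton(phi, 0.0, 1.5, 1.0, 0.3))
```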
The cnoidal solution eq.(3) depends on the parameters $`\alpha _i`$, subject to the volume conservation and the periodicity condition of the solution (for the final soliton state this condition should be understood as a quasi-periodicity realised by the rapidly decreasing profile; this is a limitation of the basic model ). The periodicity restriction reads
$`K\left(\sqrt{{\displaystyle \frac{\alpha _3-\alpha _2}{\alpha _3-\alpha _1}}}\right)={\displaystyle \frac{\pi }{n}}\sqrt{\alpha _3-\alpha _1},n=1,2,\dots ,2\sqrt{\alpha _3-\alpha _1}.`$ (5)
Hence, a single free parameter remains, which can be taken as any one of the three $`\alpha `$’s, $`V`$, or $`\eta _0`$. Equatorial cross-sections of the drop are shown in Fig. 2b for the cnoidal solution at several values of the parameter $`\eta _0`$. All explicit calculations are presented in detail in .
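The periodicity condition (5) can be imposed numerically: for assumed $`\alpha _1`$, $`\alpha _2`$ and mode number $`n`$, a one-dimensional root find returns the $`\alpha _3`$ that closes the profile on the circle. Note that scipy's `ellipk` expects the parameter $`m=k^2`$, so the modulus written inside $`K`$ in Eq. (5) must be squared.

```python
import numpy as np
from scipy.special import ellipk
from scipy.optimize import brentq

def residual(a3, a1, a2, n):
    """Residual of the periodicity condition, Eq. (5)."""
    m = (a3 - a2) / (a3 - a1)                  # parameter m = modulus^2
    return ellipk(m) - (np.pi / n) * np.sqrt(a3 - a1)

def solve_a3(a1, a2, n):
    """alpha_3 > alpha_2 satisfying Eq. (5) for mode number n."""
    return brentq(residual, a2 + 1e-9, a2 + 100.0, args=(a1, a2, n))

print(solve_a3(-0.1, 1.2, 5))   # assumed alpha_1, alpha_2 and mode number n
```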
In Fig. 3 we present the total energy plotted versus the parameters $`\alpha _1,\alpha _2`$ for constant volume. From the small oscillation limit ($`\alpha _2\simeq 3`$ in the figure) towards the solitary wave limit ($`\alpha _2=1`$ in the figure) the energy increases and has a valley for $`\alpha _1\simeq 0.1`$ and $`\alpha _2\in (1.2,1.75)`$ (close to the $`l=2`$ mode). In order to present more realistic results, the total hydrodynamic energy is again plotted versus $`\alpha _1,\alpha _2`$ for constant volume, but now we mark those special solutions fulfilling the periodicity condition. In Fig. 4 we present the total energy valley, from the small oscillations limit towards the solitary wave limit. We notice that the energy increases steadily, but around $`\alpha _2\in (1.2,1.75)`$ (close to the linear $`l=2`$ mode) it has a valley providing some stability for the solitary solution (also called a roton ).
## 3 The three-dimensional nonlinear model
In the following we introduce a sort of generalized KdV equation for fluids. We consider the three-dimensional irrotational flow of an ideal incompressible fluid layer in a semi-infinite rectangular channel subjected to uniform vertical gravitation ($`g`$ in the $`z`$ direction) and to surface pressure . The depth of the layer, when the fluid is at rest, is $`z=h`$. The boundary conditions at the finitely spaced walls consist in the vanishing of the normal velocity component, i.e. on the bottom of the layer ($`z=0`$) and on the walls $`x=x_0\pm L/2`$ of the channel of width $`L`$. The following results remain valid if the walls are arbitrarily far apart, e.g. $`L\to \infty `$, and the flow is free. We choose for the potential of the velocities the form
$`\mathrm{\Phi }={\displaystyle \sum _{k\ge 0}}\alpha _k(t)\mathrm{cos}{\displaystyle \frac{k\pi (x-x_0)}{L}}\mathrm{cosh}{\displaystyle \frac{\sqrt{2}k\pi (y-y_0)}{L}}\mathrm{cos}{\displaystyle \frac{k\pi z}{L}},`$ (6)
where $`\alpha _k(t)`$ are arbitrary functions of time and $`L`$ is a free parameter. Eq. (6) fulfils $`\mathrm{\Delta }\mathrm{\Phi }=0`$ and the above boundary conditions at the walls. However, there is another boundary condition at the free surface of the fluid
$`(\mathrm{\Phi }_z-\eta _t-\eta _x\mathrm{\Phi }_x)_{z=h+\eta }=0,`$ (7)
where $`\eta (x,t)`$ describes the shape of the free surface. By introducing the function
$`f(x,t)={\displaystyle \sum _{k=0}^{\infty }}{\displaystyle \frac{\alpha _k(t)k\pi }{L}}\left(\mathrm{sin}{\displaystyle \frac{k\pi (x-x_0)}{L}}\mathrm{cosh}{\displaystyle \frac{\sqrt{2}k\pi (y-y_0)}{L}}\right),`$ (8)
the velocity field on the free surface can be written
$`\mathrm{\Phi }_x|_{z=h+\eta }=\mathrm{cosh}(z\partial _x)f,`$
$`\mathrm{\Phi }_z|_{z=h+\eta }=\mathrm{sinh}(z\partial _x)f.`$ (9)
Eqs. (9) do not depend on $`L`$, and the case $`L\to \infty `$ of unbounded channels and free travelling profiles remains equally valid. Since the unique force field in the problem is potential, the dynamics is described by the Bernoulli equation, which, at the free surface, reads
$`\mathrm{\Phi }_{xt}+\mathrm{\Phi }_x\mathrm{\Phi }_{xx}+\mathrm{\Phi }_z\mathrm{\Phi }_{xz}+g\eta _x+{\displaystyle \frac{1}{\rho }}P_x=0.`$ (10)
Here $`P`$ is the surface pressure obtained by equating $`P`$’s first variation with the local mean curvature of the surface, under the restriction of the volume conservation
$`P|_{z=h+\eta }=-{\displaystyle \frac{\sigma \eta _{xx}}{(1+\eta _x^2)^{3/2}}},`$ (11)
and $`\sigma `$ is the surface pressure coefficient. The pressure in eq. (11) approaches $`-\sigma \eta _{xx}`$ for small enough relative amplitude of the deformation $`\eta /h`$. In order to solve the system of the two partial differential equations (7) and (10) with respect to the unknown functions $`f(x,t)`$ and $`\eta (x,t)`$, we consider the approximation of small perturbations of the surface compared to the depth, $`a=max|\eta ^{(k)}(x,t)|<<h`$, where $`k=0,\dots ,3`$ are orders of differentiation. Inspired by and using a sort of perturbation technique in $`a/h`$, we obtain from eqs. (6)-(11) the generalised KdV equation
$`\eta _t+{\displaystyle \frac{c_0}{h}}\mathrm{sin}(h\partial _x)\eta +{\displaystyle \frac{c_0}{h}}(\eta _x\mathrm{cosh}(h\partial _x)\eta +\eta \mathrm{cosh}(h\partial _x)\eta _x)\simeq 0.`$ (12)
If we approximate $`\mathrm{sin}(h\partial _x)\simeq h\partial _x-\frac{1}{6}(h\partial _x)^3`$, $`\mathrm{cosh}(h\partial _x)\simeq 1-\frac{1}{2}(h\partial _x)^2`$, we obtain, from eq. (12), the polynomial differential equation:
$`a\stackrel{~}{\eta }_t+2c_0ϵ^2h\stackrel{~}{\eta }\stackrel{~}{\eta }_x+c_0ϵh\stackrel{~}{\eta }_x-c_0ϵ{\displaystyle \frac{h^3}{6}}\stackrel{~}{\eta }_{xxx}-{\displaystyle \frac{c_0ϵ^2h^3}{2}}\left(\stackrel{~}{\eta }_x\stackrel{~}{\eta }_{xx}+\stackrel{~}{\eta }\stackrel{~}{\eta }_{xxx}\right)`$ $`=`$ $`0,`$ (13)
where $`ϵ=\frac{a}{h}`$. The first four terms in eq. (13) correspond to the zeroth-order approximation terms in $`f`$, obtained from the boundary condition at the free surface, i.e. the traditional way of obtaining the KdV equation in shallow channels.
In order to find an exact solution for eq.(12) we can write it in the form:
$$Ahu_X(X)+\frac{u(X+h)-u(X-h)}{2i}+u_X(X)\frac{u(X+h)+u(X-h)}{2}$$
$`+u(X){\displaystyle \frac{u_X(X+h)+u_X(X-h)}{2}}=0,`$ (14)
where $`X=x+Ac_0t`$ and $`A`$ is an arbitrary real constant. We want to stress here that eq.(14) is a finite-difference differential equation, which is rather the exception than the rule for such systems. Hence, it may contain, among its symmetries, the scaling symmetry. Actually, the first derivative of $`u(X)`$ is shown to be a linear combination of translated versions of the original function. In this way, the theory of such equations can be related to the theory of wavelets and other self-similar systems . In the following we study solutions that decrease rapidly at infinity and make a change of variable: $`v=e^{BX}`$ for $`x\in (-\infty ,0)`$ and $`v=e^{-BX}`$ for $`x\in (0,\infty )`$, with $`B`$ an arbitrary constant. Writing $`u(X)=hA+f(v)`$, and choosing for the solution the form of a power series in $`v`$:
$`f(v)={\displaystyle \underset{n=0}{\overset{\mathrm{}}{}}}a_nv^n,`$ (15)
we obtain a nonlinear recursion relation for the coefficients $`a_n`$:
$$\left(Ahk+\frac{\mathrm{sin}(Bhk)}{B}\right)a_k$$
$`=-{\displaystyle \sum _{n=1}^{k-1}}n\left(\mathrm{cosh}\left(Bh(k-n)\right)+\mathrm{cosh}(Bh(k-1))\right)a_na_{k-n}.`$ (16)
With the coefficients given in eq.(16) the general solution $`\eta `$ can be written analytically. In order to verify the consistency of this solution we study a limiting case of the relation, by replacing the $`\mathrm{sin}`$ and $`\mathrm{cosh}`$ expressions with their lowest nonvanishing terms in their power expansions. Thus, eq.(16) reduces to
$`\alpha _k=-{\displaystyle \frac{6}{B^2h^3k(k^2-1)}}{\displaystyle \sum _{n=1}^{k-1}}n\alpha _n\alpha _{k-n},`$ (17)
and
$`\alpha _k=\left(-{\displaystyle \frac{1}{2B^2h^3}}\right)^{k-1}k`$ (18)
is the solution of the above recurrence relation. In this approximation, the solution of eq.(12) reads
$`\eta (X)`$ $`=`$ $`-2B^2h^3{\displaystyle \sum _{k=1}^{\infty }}k\left(-e^{-B|X|}\right)^k={\displaystyle \frac{B^2h^3}{2}}{\displaystyle \frac{1}{\left(\mathrm{cosh}(BX/2)\right)^2}},`$ (19)
which is just the single-soliton solution of the KdV equation; it is indeed obtained by assuming $`h`$ small in the recurrence relation (16). Hence, we have shown that the KdV equation describing shallow liquids can be generalised to arbitrary depths and wavelengths. This result may be the starting point for a search for more interesting symmetries. It would be interesting to interpret the generalized KdV eq.(12) as the Casimir element of a certain algebra.
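The chain from the recursion to the soliton can be checked directly. The sketch below iterates the small-$`h`$ recursion (17), with the signs as reconstructed above and $`\alpha _1=1`$ as an assumed normalization, and compares the coefficients against the closed form (18); summing the resulting series reproduces the sech² profile of Eq. (19).

```python
import numpy as np

def alpha_recursive(kmax, B, h):
    """Iterate the small-h recursion, Eq. (17), seeded with alpha_1 = 1."""
    a = np.zeros(kmax + 1)
    a[1] = 1.0
    c = B**2 * h**3
    for k in range(2, kmax + 1):
        s = sum(n * a[n] * a[k - n] for n in range(1, k))
        a[k] = -6.0 / (c * k * (k**2 - 1.0)) * s
    return a[1:]

B, h = 1.0, 1.0
num = alpha_recursive(10, B, h)
closed = np.array([(-1.0 / (2 * B**2 * h**3))**(k - 1) * k
                   for k in range(1, 11)])
print(np.allclose(num, closed))   # the closed form (18) solves the recursion
```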
## 4 Compacton and self-similar solutions
Eq.(12) has a special character, namely it contains both infinitesimal and finite-difference operators. This particularity relates it to another field of nonlinear systems, that is, scaling functions and wavelet bases: functions or distributions with compact support and self-similarity properties. In the following we investigate a particular case of eq.(12), namely $`h\simeq \eta `$, $`h\simeq \delta `$, where $`\delta `$ is the half-width of the solution, if this has bounded or compact support. In this approximation, from eq.(12) we keep only the terms
$`{\displaystyle \frac{1}{c_0}}\eta _t+\eta _x+{\displaystyle \frac{1}{h}}\eta \eta _x-{\displaystyle \frac{h}{2}}\eta _x\eta _{xx}+{\displaystyle \frac{1}{h}}\eta \eta _x-{\displaystyle \frac{h}{2}}\eta \eta _{xxx}+𝒪_3\simeq 0.`$ (20)
This equation is related to another integrable system, namely the K(2,2) equation, investigated in
$`\eta _t+(\eta ^2)_x+(\eta ^2)_{xxx}=0.`$ (21)
The main property of the K(2,2) equation is the equal occurrence of non-linearity and dispersion, and the existence of a Lagrangian and a Hamiltonian system associated with it. The special solutions of this equation are the compactons
$`\eta _c={\displaystyle \frac{4\eta _0}{3}}\mathrm{cos}^2\left({\displaystyle \frac{x-\eta _0t}{4}}\right),|x-\eta _0t|\le 2\pi ,`$ (22)
and $`\eta _c=0`$ otherwise. These special solutions have compact support and special properties concerning the scattering between different such solutions. As the authors comment in , the robustness of these solutions makes it clear that a new mechanism is underlying this system. In this respect, we would like to add that, taking into account eq.(12), this new mechanism might be related to the self-similarity and multiscale properties of nonlinear systems.
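Because the support is strictly compact, the compacton (22) is trivial to evaluate: it is a cos² hump glued continuously to the zero solution at $`|x-\eta _0t|=2\pi `$. A minimal sketch:

```python
import numpy as np

def compacton(x, t, eta0):
    """K(2,2) compacton of Eq. (22); identically zero outside its support."""
    s = x - eta0 * t
    return np.where(np.abs(s) <= 2.0 * np.pi,
                    (4.0 * eta0 / 3.0) * np.cos(s / 4.0)**2, 0.0)

x = np.linspace(-10.0, 10.0, 9)
print(compacton(x, 0.0, 1.0))   # vanishes exactly beyond |x| = 2*pi
```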
## 5 Conclusions
In the present paper we introduced a non-linear hydrodynamic model describing new modes of motion of the free surface of a liquid. The total energy of this nonlinear liquid drop model, subject to non-linear boundary conditions at the free surface and the inner surface of the fluid layer, gives the Hamiltonian of the Korteweg de Vries equation. We have studied the stability of the cnoidal wave and solitary wave solutions, from the point of view of minima of this Hamiltonian.
The non-linear terms yield rotating steady-state solutions that are cnoidal waves on the surface of the drop, covering continuously the range from small harmonic oscillations, to anharmonic oscillations, and up to solitary waves. The initial one-dimensional model was extended to a three-dimensional model. A new kind of generalized KdV equation, together with some of its analytical solutions, has been presented. We also found a connection, in a certain approximation, between the obtained generalized KdV equation and another one (i.e. K(2,2)). In this case, compacton solutions have been found and new symmetries (e.g. self-similarity) were put into evidence.
The analytic solutions of the non-linear model presented in this paper make possible the study of clusterization, as well as the explanation or prediction of the existence of new strongly deformed shapes, or new patterns having compact support or finite wavelength. The model applies not only in fluid and rheology theories, but may provide insight into similar processes occurring in other fields and at other scales, such as the behavior of superdeformed nuclei, supernovae, the preformation of clusters in hydrodynamic models (metallic, molecular, nuclear), the fission of liquid drops (nuclear physics), inertial fusion, etc.
Supported by the U.S. National Science Foundation through a regular grant, No. 9603006, and a Cooperative Agreement, No. EPS-9550481, that includes a matching component from the Louisiana Board of Regents Support Fund. One of the authors (A.L.) would like to thank Peter Herczeg of the T5 Division at Los Alamos National Laboratory, and the Center for Nonlinear Studies at Los Alamos, for their hospitality.
FIGURE CAPTIONS
Fig. 1
The order of smallness of four typical terms depending on $`\varphi `$ and occurring in the Hamiltonian, eqs.(1,2). Order zero holds for $`\eta ^2`$, order 1 for $`\eta _\varphi ^2`$, order 2 for $`\eta ^3,`$ and order 3 for $`\eta \eta _\varphi ^2`$.
Figs. 2
2a.
The transition of the cnoidal solution, from the $`l=5`$ mode to the soliton limit: shape of the cross-section at $`\theta =\pi /2`$ as a function of $`\alpha _2`$, with $`\alpha _{1,3}`$ fixed by the volume conservation and periodicity conditions.
2b.
Cnoidal solutions (cross-sections of $`\mathrm{\Sigma }1`$ for $`\theta =\pi /2`$) subject to the volume conservation constraint. Results from the $`l=6`$ mode down to the $`l=2`$ mode and a soliton are shown. The corresponding linear modes, i.e. spherical harmonics, are superimposed on the non-linear solutions.
2c.
Pictorial view of a soliton deformation of a drop, on the top of the original undeformed sphere. The supporting sphere for the soliton has smaller radius because of the volume conservation.
Fig. 3
The energy plotted versus $`\alpha _1,\alpha _2`$ for constant volume. From the small oscillation limit ($`\alpha _2\simeq 3`$) towards the solitary wave limit ($`\alpha _2=1`$) the energy increases and has a valley for $`\alpha _1\simeq 0.1`$ and $`\alpha _2\in (1.2,1.75)`$ (close to the $`l=2`$ mode).
Fig. 4
The total energy plotted versus $`\alpha _1,\alpha _2`$ for constant volume (small circles). Larger circles indicate the patterns fulfilling the periodicity condition. From the small oscillations limit ($`\alpha _2\simeq 3`$) towards the solitary wave limit ($`\alpha _2=1`$) the energy increases, but for $`\alpha _2\in (1.2,1.75)`$ (close to the $`l=2`$ mode) it has a valley.
# Acknowledgement
The author would like to thank Jim Crittenden and Nick Brook for pointing his attention to the issue discussed in this note and for numerous valuable discussions.
# Suppression and enhancement of impurity scattering in a Bose-Einstein condensate
## Abstract
Impurity atoms propagating at variable velocities through a trapped Bose-Einstein condensate were produced using a stimulated Raman transition. The redistribution of momentum by collisions between the impurity atoms and the stationary condensate was observed in a time-of-flight analysis. The collisional cross section was dramatically reduced when the velocity of the impurities was reduced below the speed of sound of the condensate, in agreement with the Landau criterion for superfluidity. For large numbers of impurity atoms, we observed an enhancement of atomic collisions due to bosonic stimulation. This enhancement is analogous to optical superradiance.
One manifestation of superfluidity is that objects traveling below a critical velocity $`v_L`$ through a superfluid propagate without dissipation. Landau used simple kinematic arguments to derive an expression for the critical velocity $`v_L=\mathrm{min}(E(p)/p)`$, where $`E(p)`$ is the energy of an elementary excitation with momentum $`p`$.
When superfluid <sup>4</sup>He was forced through capillaries, adsorbed films and tightly packed powders , the onset of dissipation was found at velocities much lower than the Landau critical velocity due to turbulence and vortex formation in the superfluid. The Landau critical velocity can usually only be observed by moving *microscopic* particles through the superfluid which do not create a macroscopic flow pattern. Studies of superfluidity with microscopic objects were pursued in liquid <sup>4</sup>He by dragging negative ions through pressurized <sup>4</sup>He , and by scattering <sup>3</sup>He atoms off superfluid <sup>4</sup>He droplets .
Atomic Bose-Einstein condensates are superfluid gases and show phenomena analogous to superfluid liquids, albeit at eight orders of magnitude lower density. For a homogeneous gaseous Bose-Einstein condensate, the Bogoliubov spectrum indicates a Landau critical velocity equal to the speed of sound $`v_L=c\sqrt{\mu /M}`$, where $`\mu `$ is the chemical potential and $`M`$ is the mass of condensate atoms. The first evidence for a critical velocity in a Bose condensate was obtained by stirring the condensate with a *macroscopic* object (a laser beam) . The observed critical velocity was about a factor of four smaller than the Bogoliubov speed of sound. Recent studies of superfluidity have revealed quantized vortices and a non-classical moment of inertia .
In this Letter, we report on a study of the motion of *microscopic* impurities through a gaseous Bose-Einstein condensate. The impurity atoms were created using a stimulated Raman process which transferred a small fraction of the condensate atoms into an untrapped hyperfine state with well-defined initial velocity. As these impurities traversed the condensate, they dissipated energy by colliding with the stationary condensate, which resulted in a redistribution of momenta of the impurities. As the impurity velocity was reduced below the speed of sound, we observed a dramatic reduction in the probability of collisions, which is evidence for superfluidity in Bose-Einstein condensates.
Our experiments were performed on Bose-Einstein condensates of sodium atoms in the $`|F=1,m_F=1`$ hyperfine ground state. Condensates of $`10^7`$ atoms were created using laser and evaporative cooling and stored in a cylindrically symmetric magnetic trap with an axial trapping frequency of $`16`$ Hz. By adiabatically changing the radial trapping frequency between $`165`$ Hz and $`33`$ Hz, the density of the condensate, and hence the peak speed of sound in the condensate was varied between $`1.1`$ cm/s and $`0.55`$ cm/s.
Impurity atoms were created using a two-photon Raman transition, in which the condensate was exposed to a pair of laser beams . The laser beams had orthogonal linear polarizations, thus driving a Raman transition from the trapped $`|F=1,m_F=1\rangle `$ state to the untrapped $`|F=1,m_F=0\rangle `$ hyperfine ground state. Both beams were derived from a common source, and then passed through two acousto-optic modulators operating with a frequency difference $`\omega =\omega _z+\hbar q^2/(2M)`$, where $`\hbar \omega _z`$ is the Zeeman splitting between the $`|m_F=1\rangle `$ and $`|m_F=0\rangle `$ states in the offset field of the magnetic trap. The momentum transfer from the light field to the $`m_F=0`$ atoms is $`\hbar q=2\hbar k\mathrm{sin}(\theta /2)`$ where $`k`$ is the wavevector of the light field and $`\theta `$ is the angle between the two laser beams. The Raman light field was typically pulsed on for about $`10\mu s`$ at an intensity of several mW/cm<sup>2</sup>. The fraction of transferred atoms could be varied by changing the light intensity.
Collisions between the impurities and the condensate were analyzed by time-of-flight absorption imaging. For this, the magnetic trap was suddenly switched off 4 ms after the Raman pulse, by which time the impurity atoms had fully traversed the condensate. After an additional 5 ms, a magnetic field gradient was pulsed on for 30 ms, spatially separating the $`m_F=0`$ atoms from the condensate. After a total time-of-flight of typically 60 ms, all atoms were optically pumped into the $`|F=2,m_F=2`$ ground state and resonantly imaged on the cycling transition.
Collisions at ultracold temperatures are in the $`s`$-wave regime. The products of such collisions between free particles are evenly distributed in momentum space over a spherical shell around the center-of-mass momentum of the collision partners. A time-of-flight picture records the momentum distribution of the released cloud. Thus, collisions between the condensate and the impurities are visible as a circular halo which represents the line-of-sight integrated spherical shell. Fig. 1 shows a typical absorption image of collisions in the free particle regime for impurity atoms with a velocity of $`2\hbar k/M`$ = 6 cm/s, produced by counterpropagating Raman beams.
To probe for superfluidity, we produced impurity atoms at low velocities (7 mm/s) by using Raman beams which intersected at an angle of $`14^{}`$ and aligned symmetrically about the radial direction, so that the difference vector $`𝐪=𝐤_\mathrm{𝟏}𝐤_\mathrm{𝟐}`$ was directed axially . The trajectory of the impurity atoms was initially in the axial direction, but was soon modified by two forces: a downward gravitational acceleration along a radial direction (into the page in images and hereafter denoted as the $`z`$-axis), and the radial mean-field repulsion of the $`m_F=0`$ atoms from the $`m_F=1`$ condensate.
This situation is similar to the previous study of an rf output coupler by which $`m_F=0`$ atoms were produced at rest . For the rf output coupler, collisions between impurity atoms and the condensate were difficult to detect because the scattered and unscattered atoms were not clearly distinguishable. In this study, the small axial velocity imparted by Raman scattering allowed us to identify products of elastic collisions in time-of-flight images since collisions with the stationary condensate tended to redistribute the impurity atoms toward lower axial velocities. However, the acceleration of the impurity atoms precluded the observation of well-defined collision halos. A time-of-flight analysis of impurity scattering for the case of a low density condensate is shown in Figure 2a. The axial velocity imparted by Raman scattering displaces the unscattered $`m_F=0`$ atoms upward in the image, whereas collisions produce impurity atoms with smaller axial velocities which then appear below the unscattered atoms in the image. In contrast, Figure 2b shows a time-of-flight image for the case of a high density condensate, for which the number of collided atoms is greatly diminished, indicating the suppression of impurity collisions due to superfluidity.
The number of collided atoms was determined by counting impurity atoms in a region of the time-of-flight image below the unscattered impurity atoms, which also contained Raman outcoupled thermal $`m_F=0`$ atoms. Thus, the number of collided atoms in the counting region was obtained by subtracting the thermal background which was determined by counting a similar sized region above the unscattered impurity atoms where we expect few collision products. This number was doubled to obtain the total number of collided atoms since we expect only about half of the collision products to be in the counting region; the remainder overlapped with the distribution of unscattered impurity atoms.
In studying these collisions, we discovered that the fraction of collided atoms increased with the number of outcoupled impurities (see Fig. 3). According to a perturbative treatment described below, the collision probability should be independent of the number of impurities. If the number of outcoupled atoms is increased, one would expect the collision probability to *decrease* slightly due to the reduction in the condensate density, or to *increase* slightly because the smaller condensate density implies a smaller critical velocity for dissipation. However, these effects are smaller (10–20%) than the observed two-fold increase in the collided fraction.
Rather, this large increase can be explained as a collective self-amplification of atomic scattering, akin to the recently observed superradiant amplification of light scattering from a Bose-Einstein condensate . Collisions between impurity atoms and the condensate transfer atoms from a macroscopically occupied initial state to final momentum states which were previously empty. The population in these final states can stimulate further scattering by bosonic enhancement and this effect increases for larger outcoupling. This collisional amplification is not directional, and is similar to the recently observed optical omnidirectional superfluoresence . In contrast, the observation of four-wave mixing of atoms represents the case where collisions were stimulated by a single macroscopically occupied final mode.
Fig. 4 shows the decrease of collision probability as the velocity of the impurity atoms approached the speed of sound in the condensate. The collision probability was determined by averaging over many iterations of the experiment with the number of outcoupled atoms kept below $`10^6`$, in which case collective effects may be neglected. For our experimental conditions, the impurity velocity was predominantly determined by the gravitational acceleration $`g=9.8`$ m/s<sup>2</sup>, which imparted an average velocity of $`v_g=\sqrt{2gz_c}`$ where $`z_c`$ is the Thomas-Fermi radius of the condensate in the $`z`$-direction. This downward velocity ranged from 17 mm/s for tightly confined condensates to 26 mm/s for loosely confined condensates, and was larger than the initial 7 mm/s velocity imparted by Raman scattering. Thus, the effect of superfluidity on impurity scattering depends primarily on the parameter $`\overline{\eta }=v_g/c`$ which is the ratio of the typical impurity velocity $`v_g`$ to the speed of sound at the center of the condensate $`c=\sqrt{\mu /M}`$. Experimentally, $`\overline{\eta }`$ is determined using the radial trapping frequency and the chemical potential $`\mu `$ which is determined from the expansion of the condensate in the time-of-flight images .
High and low $`\overline{\eta }`$ correspond to dissipative and non-dissipative regimes, respectively. This behavior is apparent in Fig. 2. For the case of loose confinement, the low condensate density (small $`c`$) and large condensate radius (large $`v_g`$) yield a large value of $`\overline{\eta }\simeq 5`$. The effect of collisions is clearly visible with about 20% of the atoms scattered to lower axial velocities (Fig. 2a). In contrast, for the case of tight confinement, the high condensate density (large $`c`$) and small condensate radius (small $`v_g`$) yield a small value of $`\overline{\eta }\simeq 1.5`$, and the collision probability is greatly suppressed due to superfluidity (Fig. 2b).
The predicted cross-section for collisions between an $`m_F=0`$ impurity atom at momentum $`\mathrm{}𝐤`$ and a $`m_F=1`$ condensate of density $`n_0`$ is obtained by calculating the collision rate $`\mathrm{\Gamma }`$ using Fermi’s Golden rule :
$`\mathrm{\Gamma }`$ $`=`$ $`n_0\left({\displaystyle \frac{\hbar a}{M}}\right)^2{\displaystyle \int 𝑑q𝑑\mathrm{\Omega }q^2S(q)\delta \left(\frac{\hbar 𝐤\cdot 𝐪}{M}-\frac{\hbar q^2}{2M}-\omega _q^B\right)}`$
$`=`$ $`n_0\sigma (\eta )v,`$
Here, $`S(q)=\omega _q^0/\omega _q^B`$ is the static structure factor of the condensate, with $`\hbar \omega _q^0=\hbar ^2q^2/2M`$ and $`\hbar \omega _q^B=\sqrt{\hbar \omega _q^0(\hbar \omega _q^0+2\mu )}`$ being the energies of a free particle and a Bogoliubov quasiparticle of momentum $`q`$, respectively. The collision cross section is $`\sigma (\eta )=\sigma _0F(\eta )`$ where $`\eta =v/c`$, $`v=\hbar k/M`$ is the impurity velocity, and $`\sigma _0=4\pi a_{0,1}^2`$ where $`a_{0,1}=2.75`$ nm is the scattering length for $`s`$-wave collisions between the $`|m_F=0\rangle `$ and $`|m_F=1\rangle `$ states of sodium. For $`\eta <1`$, $`F(\eta )=0`$ and for $`\eta >1`$, $`F(\eta )=1-1/\eta ^4-\mathrm{log}(\eta ^4)/\eta ^2`$.
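The suppression factor is straightforward to tabulate; the sketch below vanishes identically below the sound speed and approaches the free-particle cross section for fast impurities.

```python
import numpy as np

def F(eta):
    """F = 0 for eta < 1, else 1 - 1/eta^4 - ln(eta^4)/eta^2."""
    eta = np.atleast_1d(np.asarray(eta, dtype=float))
    out = np.zeros_like(eta)
    m = eta > 1.0
    e = eta[m]
    out[m] = 1.0 - e**-4 - np.log(e**4) / e**2
    return out

print(F([0.5, 1.0, 1.5, 3.0, 10.0]))   # 0, 0, ~0.08, ~0.50, ~0.91
```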
We can approximate our experiment by considering the motion of the $`m_F=0`$ atoms under the gravitational acceleration alone and ignoring the effects of the initial axial velocity and mean-field expulsion . The $`m_F=0`$ atoms falling through the condensate experience a collisional density $`𝒞(\eta )=\int 𝑑z\,n(x,y,z)\sigma (\eta )`$, where $`n(x,y,z)`$ is the condensate density, and $`\eta `$ is determined by the local condensate density and the downward impurity velocity. The collisional density relative to its value at large velocities, $`𝒞_{\infty }`$, is given by
$`{\displaystyle \frac{𝒞(\overline{\eta })}{𝒞_{\infty }}}`$ $`\simeq `$ $`{\displaystyle \frac{\int 𝑑𝐫n_I(𝐫)\times \int 𝑑z^{\prime }n(𝐫^{\prime })\sigma _0F(\eta )}{\int 𝑑𝐫n_I(𝐫)\times \int 𝑑z^{\prime }n(𝐫^{\prime })\sigma _0}}`$ (1)
where we assume that the initial impurity density $`n_I(𝐫)\propto n(𝐫)`$ . The condensate density in the Thomas-Fermi limit is $`n(x,y,z)=n_0(1-(x/x_c)^2-(y/y_c)^2-(z/z_c)^2)`$, where $`x_c=(2\mu /M\omega _x^2)^{1/2}`$ (similarly for $`y_c`$ and $`z_c`$) is the Thomas-Fermi radius, $`\omega _x`$ is the trapping frequency in the $`x`$ direction, and $`\mu `$ is the chemical potential. The solid line in Fig. 4 was determined by numerically integrating Eq. 1. To compare the collision probability for the different data points, we divided the observed collided fraction by $`𝒞_{\infty }=(5/12)\times n_0\sigma _0z_c`$. The observed decrease in the collisional density for small $`\overline{\eta }`$ (Fig. 4) shows the superfluid suppression of collisions. Numerical simulations ruled out the possibility that the observed decrease in collisional density could be caused solely by variations of the path length of particle trajectories due to the mean-field repulsion and the initial velocity.
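The ratio (1) involves a four-fold integral over the starting point and the collision height; a Monte Carlo sketch is the simplest way to reproduce the qualitative behavior of the solid line of Fig. 4. Everything below is in scaled Thomas-Fermi units, and the local Mach number is modeled, as an assumption of this sketch, by $`v=v_g\sqrt{\zeta -\zeta ^{\prime }}`$ from free fall together with $`c(𝐫^{\prime })=c\sqrt{n(𝐫^{\prime })/n_0}`$.

```python
import numpy as np

def F(eta):
    """Superfluid suppression factor of the cross section."""
    out = np.zeros_like(eta)
    m = eta > 1.0
    e = eta[m]
    out[m] = 1.0 - e**-4 - np.log(e**4) / e**2
    return out

def C_ratio(eta_bar, n=400_000, seed=0):
    """Monte Carlo estimate of Eq. (1) for a Thomas-Fermi condensate."""
    rng = np.random.default_rng(seed)
    x, y, z, zp = rng.uniform(-1.0, 1.0, (4, n))
    n_r  = np.clip(1.0 - x**2 - y**2 - z**2,  0.0, None)  # density at start
    n_rp = np.clip(1.0 - x**2 - y**2 - zp**2, 0.0, None)  # density at collision
    w = n_r * n_rp * (zp < z)               # n_I ~ n, collision height below z
    eta = eta_bar * np.sqrt(np.clip(z - zp, 0.0, None)
                            / np.clip(n_rp, 1e-12, None))
    return np.sum(w * F(eta)) / np.sum(w)

for eb in (1.0, 1.5, 3.0, 5.0):
    print(eb, round(C_ratio(eb), 3))   # suppression grows as eta_bar -> 1
```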
The measured values in Fig. 4 are systematically larger by a factor of about two than those expected theoretically. This discrepancy is also seen for impurity collisions at velocities of 6 cm/s for which superfluidity should play no role. While we cannot presently account for this systematic error, the observation of suppression of collisions due to superfluidity is robust, since it requires only a relative comparison of collision probabilities at different $`\overline{\eta }`$.
The method presented here can generally be used to study ultra-cold collisions. In this study, we focused on collisions between atoms in different hyperfine states. By driving a Bragg transition instead of a Raman transition, we have also observed collisions between atoms in the same internal state. At a velocity of 6 cm/s, we found the collision cross section to be $`2.1\pm 0.3`$ times larger than in the Raman case, reflecting the exchange term in elastic collisions for identical particles that increases the cross section from $`4\pi a^2`$ to $`8\pi a^2`$.
Raman transitions are one way to realize output couplers for atom lasers . Theoretical treatments of atom lasers have typically considered only the condensate and the outcoupled atoms in a two-mode approximation and ignored the modes accessible by collisions . However, our experiment shows that the outcoupled atoms do not simply pass through the condensate. Rather, they collide and populate modes coupled by atomic scattering ; the collisions may even be enhanced by bosonic stimulation. In principle, such collisional losses can be avoided by *lowering* the density. However, an alternative route to suppressing collisions is to *increase* the density until the speed of sound is larger than the velocity of the outcoupled atoms, thus realizing a “superfluid” output coupler.
In conclusion, we have studied collisions between impurity atoms and a Bose-Einstein condensate. Both the observed superfluid suppression of collisions and the collective enhancement are crucial considerations for the future development of intense atom lasers.
We are grateful to D.E. Pritchard for valuable discussions. This work was supported by the ONR, NSF, JSEP, ARO, NASA, and the David and Lucile Packard Foundation. A.P.C. acknowledges additional support from the NSF, A.G. from DAAD, and D.M.S.K. from JSEP and a Robert A. Millikan Postdoctoral Fellowship.
|
no-problem/0003/physics0003008.html
|
ar5iv
|
text
|
# Electromagnetic interaction between two uniformly moving charged particles: a geometrical derivation using Minkowski diagrams
## I Introduction
Special Relativity, as presented in today’s textbooks, is a complex mathematical theory. The 1 spatial + 1 temporal dimensional Minkowski diagrams , which initially introduce the Lorentz transformation, the time dilatation and the length contraction, are soon put aside in favor of an approach based on differential calculus and linear algebra. One gets little intuitive understanding of the law of relativistic addition of velocities, and of the fact that ”magnetism is a kind of ’second-order’ effect arising from relativistic changes in the electric fields of moving charges” . However, by introducing only slightly more elaborate Minkowski diagrams, and using geometrical derivations, one can get back the intuitive understanding, to the great delight of the physicist who still believes in the spirit of Descartes’ philosophy.
## II Real plane and complex plane: same trigonometry
The complex plane, like the real plane, is a two-dimensional (2D) vector space. The scalar products for the real plane (1) and for the complex plane (2) are defined as follows
$$\widehat{\text{x}}\cdot \widehat{\text{x}}=1\qquad \widehat{\text{y}}\cdot \widehat{\text{y}}=1\qquad \widehat{\text{x}}\cdot \widehat{\text{y}}=\widehat{\text{y}}\cdot \widehat{\text{x}}=0$$
(1)
$$\widehat{\text{x}}\cdot \widehat{\text{x}}=1\qquad \widehat{\text{i}}\cdot \widehat{\text{i}}=-1\qquad \widehat{\text{x}}\cdot \widehat{\text{i}}=\widehat{\text{i}}\cdot \widehat{\text{x}}=0$$
(2)
where $`\widehat{\text{x}}`$, $`\widehat{\text{y}}`$ and, respectively, $`\widehat{\text{x}}`$, $`\widehat{\text{i}}`$ are the basis vectors.
Since both planes have a scalar product, one can talk about orthogonal vectors (their scalar product is zero) and about the magnitude of a vector (the square root of the scalar product of a vector with itself). This allows us to define the circle (the geometric locus of the points equally spaced from a given point), the angle in radians (the length, between two points of a circle of radius one, measured along the circumference), and the trigonometric functions sine and cosine (the magnitudes of the projections of a radius one vector on the two coordinate axes). From these definitions it follows that, for both the real and the complex planes, one has the relations:
$$\mathrm{sin}^2(\alpha )+\mathrm{cos}^2(\alpha )=1$$
(3)
$$[\frac{d}{d\alpha }\mathrm{sin}(\alpha )]^2+[\frac{d}{d\alpha }\mathrm{cos}(\alpha )]^2=1.$$
(4)
From (3)-(4) one can get the derivatives of the trigonometric functions, the Taylor series expansions of sine and cosine, and then all the well known trigonometric relations. The only detail we have to keep in mind is that the angle $`\alpha `$ in the real plane is a real number, while in the complex plane it is a purely imaginary number, due to the non-positive definite scalar product used in the last case. There are a few more relevant differences, which we can best point out if we represent the complex plane as an Euclidean plane. Two vectors in the complex plane are orthogonal if they make the same angle with the bisecting line of the first quadrant. The circle in the complex plane looks like a hyperbola . Not any line passing through the origin intersects the right (or left) branch of the hyperbola. This means that there are pairs of lines passing through the origin to which we cannot assign an angle. However, for the triangles we will be working with, the ratio of segments behaves as if it were an angle, of negative value. The true angle is obtained by symmetry with respect to the first bisecting line, as the pair $`\alpha `$ and $`\alpha `$ indicates in Figure 1.
## III Relativistic addition of velocities
Consider a reference frame K’ which is moving with a velocity $`\text{V}=V\widehat{\text{x}}`$ relative to another one K, and a particle moving with a velocity $`\text{v}^{}=v_x^{}\widehat{\text{x}}^{}+v_y^{}\widehat{\text{y}}^{}+v_z^{}\widehat{\text{z}}^{}`$ in the reference frame K’. The reference frames are chosen such that their origins and the particle coincide at the space-time point O, as shown in Figure 1. Notice that $`\widehat{\text{y}}=\widehat{\text{y}}^{}`$ and $`\widehat{\text{z}}=\widehat{\text{z}}^{}`$, because V has a component only in the x direction. The Oz axis is not plotted, but is similar to the Oy axis. The question is: What is the velocity $`\text{v}=v_x\widehat{\text{x}}+v_y\widehat{\text{y}}+v_z\widehat{\text{z}}`$ of the particle in the reference frame K?
The world-line OP of the particle is projected on the complex planes (x,O,ict), (y,O,ict), (z,O,ict), (x’,O,ict’), (y’,O,ict’), (z’,O,ict’), and the resulting angles from the respective projections give the components of the velocity of the particle in the two reference frames considered. For the situation considered the planes (x,O,ict) and (x’,O,ict’) coincide. It is seen from Figure 1 that
$$\mathrm{tan}(\alpha )=\frac{EF}{OE}=\frac{V}{ic}$$
(5)
$$\mathrm{tan}(\beta )=\frac{DC}{OD}=\frac{v_x^{\prime }}{ic}$$
(6)
$$\mathrm{tan}(\gamma )=\frac{DA}{OD}=\frac{v_y^{\prime }}{ic}$$
(7)
$$\mathrm{tan}(\delta )=\frac{EB}{OE}=\frac{v_y}{ic}$$
(8)
$$\mathrm{tan}(\theta )=\frac{EC}{OE}=\frac{v_x}{ic}.$$
(9)
In order to express $`v_x`$ and $`v_y`$ as functions of $`V,v_x^{}`$ and $`v_y^{}`$ we need to express $`\delta `$ and $`\theta `$ as functions of $`\alpha `$,$`\beta `$ and $`\gamma `$.
In the plane (x,O,ict) of the Lorentz boost the addition of velocities is based on the addition of angles
$$\theta =\alpha +\beta $$
(10)
$$\mathrm{tan}(\theta )=\mathrm{tan}(\alpha +\beta )=\frac{\mathrm{tan}(\alpha )+\mathrm{tan}(\beta )}{1-\mathrm{tan}(\alpha )\mathrm{tan}(\beta )}.$$
(11)
From (11), by substitution of the tangents (5)-(9), it follows that
$$v_x=\frac{V+v_x^{\prime }}{1+Vv_x^{\prime }/c^2}.$$
(12)
Two rectangles, APCD and BPCE, result from the projection process. It is evident that
$$\frac{CP}{OC}=\frac{EB}{OC}=\frac{EB}{OE}\frac{OE}{OC}=\mathrm{tan}(\delta )\mathrm{cos}(\theta )$$
(13)
$$\frac{CP}{OC}=\frac{DA}{OC}=\frac{DA}{OD}\frac{OD}{OC}=\mathrm{tan}(\gamma )\mathrm{cos}(\beta ).$$
(14)
From (13)-(14) it follows that
$$\mathrm{tan}(\delta )=\frac{\mathrm{cos}(\beta )\mathrm{tan}(\gamma )}{\mathrm{cos}(\alpha +\beta )}=\frac{\mathrm{tan}(\gamma )}{\mathrm{cos}(\alpha )[1-\mathrm{tan}(\alpha )\mathrm{tan}(\beta )]}.$$
(15)
By substitution of the tangents (5)-(8) and of $`\mathrm{cos}(\alpha )=[1+\mathrm{tan}^2(\alpha )]^{-1/2}`$ we get
$$v_y=\frac{v_y^{\prime }(1-V^2/c^2)^{1/2}}{1+Vv_x^{\prime }/c^2}.$$
(16)
A similar expression is obtained for the $`v_z`$ component.
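Eqs. (12) and (16) (and the analogous relation for $`v_z`$) combine into a compact velocity-composition routine; a sketch in assumed natural units with $`c=1`$, useful for checking that the composed speed never exceeds $`c`$:

```python
import numpy as np

C = 1.0   # speed of light in natural units

def add_velocity(V, vxp, vyp, vzp):
    """Compose a boost V along x with a primed-frame velocity, Eqs. (12), (16)."""
    d = 1.0 + V * vxp / C**2
    g = np.sqrt(1.0 - V**2 / C**2)
    return (V + vxp) / d, vyp * g / d, vzp * g / d

vx, vy, vz = add_velocity(0.9, 0.8, 0.3, 0.0)
speed = np.sqrt(vx**2 + vy**2 + vz**2)
print(vx, vy, vz, speed < C)   # stays below the speed of light
```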
## IV Electromagnetic interaction between two uniformly moving charged particles
Consider two charged particles (with charges $`Q_1`$ and $`Q_2`$) at some arbitrary positions, moving with arbitrary, but uniform, velocities. We orient our 3D reference frame in such a way that the first particle (which generates the field) is initially at the origin, moving along the Ox axis with velocity $`\text{V}=V\widehat{\text{x}}`$, and the vector $`\text{R}=R\mathrm{cos}(\theta )\widehat{\text{x}}+R\mathrm{sin}(\theta )\widehat{\text{y}}`$ connecting the two particles is in the (x,O,y) plane. The angle between R and the Ox axis is $`\theta `$. The second particle (subject to the electromagnetic field generated by the first one) is moving with velocity $`\text{v}=v_x\widehat{\text{x}}+v_y\widehat{\text{y}}+v_z\widehat{\text{z}}`$. A section through the (x,O,y) plane can be seen in Figure 2. The first particle is at point O and the second one is at point A.
### A Analytical calculation of the Lorentz force
The electric field (in Gaussian units) generated by the first particle at the position of the second particle is
$$\text{E}=\frac{Q_1\text{R}}{R^3}(1-\frac{V^2}{c^2})[1-\frac{V^2}{c^2}\mathrm{sin}^2(\theta )]^{-3/2}.$$
(17)
The magnetic field generated by the first particle is
$$\text{H}=\frac{1}{c}\text{V}\times \text{E}.$$
(18)
The Lorentz force acting on the second particle is
$$\text{F}=Q_2\text{E}+\frac{Q_2}{c}\text{v}\times \text{H}.$$
(19)
From (17)-(19) the Cartesian components of the force are obtained
$$F_x=\frac{Q_1Q_2}{R^2}(1-\frac{V^2}{c^2})[1-\frac{V^2}{c^2}\mathrm{sin}^2(\theta )]^{-3/2}[\mathrm{cos}(\theta )+\mathrm{sin}(\theta )\frac{v_yV}{c^2}]$$
(20)
$$F_y=\frac{Q_1Q_2}{R^2}(1-\frac{V^2}{c^2})[1-\frac{V^2}{c^2}\mathrm{sin}^2(\theta )]^{-3/2}\mathrm{sin}(\theta )(1-\frac{v_xV}{c^2})$$
(21)
$$F_z=0.$$
(22)
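As a quick numerical cross-check of Eqs. (17)-(22), one can evaluate the fields and the Lorentz force directly and compare with the component formulas; the charge values, speeds, and angle in this sketch are arbitrary choices, not values from the text:

```python
import numpy as np

c, Q1, Q2, R = 1.0, 1.0, 1.0, 2.0
V = 0.7 * c
theta = 0.9
v = np.array([0.2*c, 0.4*c, 0.1*c])      # velocity of the second particle

Rvec = R * np.array([np.cos(theta), np.sin(theta), 0.0])
Vvec = np.array([V, 0.0, 0.0])

# Eq. (17): field of a uniformly moving charge (Gaussian units)
E = (Q1*Rvec/R**3) * (1 - V**2/c**2) * (1 - (V/c)**2*np.sin(theta)**2)**-1.5
H = np.cross(Vvec, E)/c                   # Eq. (18)
F = Q2*E + (Q2/c)*np.cross(v, H)          # Eq. (19)

# component formulas (20)-(22)
A = (Q1*Q2/R**2) * (1 - V**2/c**2) * (1 - (V/c)**2*np.sin(theta)**2)**-1.5
Fx = A*(np.cos(theta) + np.sin(theta)*v[1]*V/c**2)
Fy = A*np.sin(theta)*(1 - v[0]*V/c**2)
print(F, np.array([Fx, Fy, 0.0]))         # the two vectors coincide
```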
### B Geometrical derivation of the Lorentz force
The force components (20)-(22) can be obtained in a more graphical way, if we start with the Coulomb force generated by a charged particle at rest. One key assumption or experimental fact is that in a frame where all the source charges producing an electric field E are at rest, the force on a charge $`q`$ is given by $`\text{F}=q\text{E}`$ independent of the velocity of the charge in that frame . The reference frame K’ in which the source particle is at rest is moving with velocity V relative to the original frame K.
In the reference frame K the particle at A is observed to interact with the particle at O. The distance between particles is $`R`$, the length of the segment $`OA`$.
In the reference frame K’ the particle at A is observed to interact with the particle at B, where the segment $`BA`$ is a position vector $`\text{R}^{}`$ parallel to the plane (x’,O,y’). The following construction gives the position of point B: the segment $`AE`$ is parallel to Oy and intersects the Ox axis at E, whereas the segment $`EB`$ is parallel to Ox’ and intersects the world-line $`CO`$ at B. $`BD`$ projects the point B on the Ox axis at D.
Relative to K’, the particle at B exerts a radial Coulomb force on the particle at A. This force (in Gaussian units) is
$$\text{F}^{}=\frac{Q_1Q_2}{R^{\prime 3}}\text{R}^{}$$
(23)
where $`\text{R}^{}=R^{}[\mathrm{cos}(\theta ^{})\widehat{\text{x}}^{}+\mathrm{sin}(\theta ^{})\widehat{\text{y}}^{}]`$.
The key point in getting the force F in the reference frame K is to notice that the force, in any reference frame considered, is given by the projection on the real 3D space of that frame of the 4-force $`𝓕`$ (which is a Minkowski-space vector), that is
$$𝓕=𝓕_{\mathrm{real}}+𝓕_{\mathrm{imag}}=\gamma (v)\text{F}+\widehat{\text{i}}\gamma (v)\frac{P}{c}$$
(24)
$$𝓕=𝓕_{\mathrm{real}}^{}+𝓕_{\mathrm{imag}}^{}=\gamma (v^{})\text{F}^{}+\widehat{\text{i}}^{}\gamma (v^{})\frac{P^{}}{c}$$
(25)
where $`\gamma (v)=(1-v^2/c^2)^{-1/2}`$ and $`P=\text{F}\cdot \text{v}`$.
We will obtain the 4-force $`𝓕`$ from its real and imaginary components ($`𝓕_{\mathrm{real}}^{}`$ and $`𝓕_{\mathrm{imag}}^{}`$) in the reference frame K’, then we will decompose the same 4-force into its real and imaginary components ($`𝓕_{\mathrm{real}}`$ and $`𝓕_{\mathrm{imag}}`$) in the reference frame K. The Lorentz force we are looking for is just $`\text{F}=𝓕_{\mathrm{real}}/\gamma (\text{v})`$.
From (23)-(25) it follows that
$$𝓕_{\mathrm{real}}^{}=\gamma (v^{})\frac{Q_1Q_2}{R^{\prime 3}}\text{R}^{}.$$
(26)
To get the imaginary component $`𝓕_{\mathrm{imag}}^{}`$ we use the orthogonality between the 4-force and the 4-velocity, $`𝓕\cdot 𝓥=0`$, where the 4-velocity is $`𝓥=\gamma (v^{})\text{v}^{}+\widehat{\text{i}}^{}\gamma (v^{})c`$. The orthogonality condition leads to
$$\gamma ^2(v^{})\frac{Q_1Q_2}{R^{\prime 2}}\frac{\text{R}^{}\cdot \text{v}^{}}{R^{}}+𝓕_{\mathrm{imag}}^{}\cdot \widehat{\text{i}}^{}\gamma (v^{})c=0$$
(27)
$$𝓕_{\mathrm{imag}}^{}=\widehat{\text{i}}^{}\gamma (v^{})\frac{Q_1Q_2}{R^{\prime 2}}\frac{v_{\mathrm{rad}}^{}}{c}$$
(28)
where the radial component of the velocity is
$$v_{\mathrm{rad}}^{}=\frac{\text{R}^{}\cdot \text{v}^{}}{R^{}}=v_x^{}\mathrm{cos}(\theta ^{})+v_y^{}\mathrm{sin}(\theta ^{}).$$
(29)
The components of the force F in the reference frame K are given by the projection of the 4-force $`𝓕`$ on the 3D real space of K. An easy way to do this is to notice that we can decompose $`𝓕_{\mathrm{real}}^{}`$ (which has the direction of the segment $`BA`$) and $`𝓕_{\mathrm{imag}}^{}`$ (which has the direction of the segment $`BO`$) into sums of 4-vectors, each of the 4-vectors being parallel to one of the axes of the reference frame K:
$$\text{r}_{BA}=\text{r}_{BD}+\text{r}_{DE}+\text{r}_{EA}$$
(30)
$$\text{r}_{BO}=\text{r}_{BD}+\text{r}_{DO}$$
(31)
Because these expansions do not involve any component along the Oz axis, this simply means that $`F_z=0`$. The projections of the 4-force on the Ox and Oy axes are
$$\gamma (v)F_x=𝓕_{\mathrm{real}}^{}\frac{DE}{BA}+𝓕_{\mathrm{imag}}^{}\frac{DO}{BO}$$
(32)
$$\gamma (v)F_y=𝓕_{\mathrm{real}}^{}\frac{EA}{BA}.$$
(33)
The lengths of the various segments needed above are as follows:
$$AO=R$$
(34)
$$EA=AO\mathrm{sin}(\theta )=R\mathrm{sin}(\theta )$$
(35)
$$OE=AO\mathrm{cos}(\theta )=R\mathrm{cos}(\theta )$$
(36)
$$BE=OE\mathrm{cos}(\alpha )=R\mathrm{cos}(\theta )\mathrm{cos}(\alpha )$$
(37)
$$DE=BE\mathrm{cos}(\alpha )=R\mathrm{cos}(\theta )\mathrm{cos}^2(\alpha )$$
(38)
$$AB=(AE^2+BE^2)^{1/2}=R\mathrm{cos}(\alpha )[1+\mathrm{tan}^2(\alpha )\mathrm{sin}^2(\theta )]^{1/2}=R^{}$$
(39)
We also notice that $`DO/BO=\mathrm{sin}(\alpha )`$. The force components in (32)-(33) become
$`F_x=`$ $`{\displaystyle \frac{\gamma (v^{})}{\gamma (v)}}{\displaystyle \frac{Q_1Q_2}{R^2}}{\displaystyle \frac{\mathrm{cos}(\theta )}{\mathrm{cos}(\alpha )[1+\mathrm{tan}^2(\alpha )\mathrm{sin}^2(\theta )]^{3/2}}}`$ (41)
$`+i{\displaystyle \frac{\gamma (v^{})}{\gamma (v)}}{\displaystyle \frac{Q_1Q_2}{R^2}}{\displaystyle \frac{v_{\mathrm{rad}}^{}}{c}}{\displaystyle \frac{\mathrm{sin}(\alpha )}{\mathrm{cos}^2(\alpha )[1+\mathrm{tan}^2(\alpha )\mathrm{sin}^2(\theta )]}}`$
$$F_y=\frac{\gamma (v^{})}{\gamma (v)}\frac{Q_1Q_2}{R^2}\frac{\mathrm{sin}(\theta )}{\mathrm{cos}^3(\alpha )[1+\mathrm{tan}^2(\alpha )\mathrm{sin}^2(\theta )]^{3/2}}.$$
(42)
We can also calculate
$$\mathrm{sin}(\theta ^{})=\frac{EA}{AB}=\frac{\mathrm{sin}(\theta )}{\mathrm{cos}(\alpha )[1+\mathrm{tan}^2(\alpha )\mathrm{sin}^2(\theta )]^{1/2}}$$
(43)
$$\mathrm{cos}(\theta ^{})=\frac{BE}{AB}=\frac{\mathrm{cos}(\theta )}{[1+\mathrm{tan}^2(\alpha )\mathrm{sin}^2(\theta )]^{1/2}}.$$
(44)
If the velocity of the particle at A has the components $`v_x,v_y,v_z`$, as measured in the reference frame K, and K is moving with the velocity $`\text{V}^{}=-V\widehat{\text{x}}^{}`$ relative to K’, then the particle will have the following components of the velocity (compare with equations (12) and (16)) in the reference frame K’
$$v_x^{}=\frac{v_x-V}{1-Vv_x/c^2}\qquad v_y^{}=\frac{v_y(1-V^2/c^2)^{1/2}}{1-Vv_x/c^2}\qquad v_z^{}=\frac{v_z(1-V^2/c^2)^{1/2}}{1-Vv_x/c^2}.$$
(45)
With these components we find that
$$\gamma (v^{})=\gamma (v)\frac{(1-Vv_x/c^2)}{(1-V^2/c^2)^{1/2}}$$
(46)
and the radial velocity (29) becomes
$$v_{\mathrm{rad}}^{}=\frac{(v_x-V)\mathrm{cos}(\alpha )\mathrm{cos}(\theta )+v_y(1-V^2/c^2)^{1/2}\mathrm{sin}(\theta )}{(1-Vv_x/c^2)\mathrm{cos}(\alpha )[1+\mathrm{tan}^2(\alpha )\mathrm{sin}^2(\theta )]^{1/2}}.$$
(47)
Substituting $`\gamma (v^{})`$ and $`v_{rad}^{}`$ in (40)-(41), and also using the fact that $`\mathrm{sin}(\alpha )=-i(V/c)\gamma (V)`$, $`\mathrm{cos}(\alpha )=\gamma (V)`$ and $`\mathrm{tan}(\alpha )=-iV/c`$, we finally obtain the components in (20)-(21).
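This chain of substitutions can be verified numerically: evaluating the geometric force components with the complex angle functions reproduces the analytic components (20)-(21). A minimal sketch, with arbitrary parameter choices and c = 1:

```python
import numpy as np

c, Q1, Q2, R = 1.0, 1.0, 1.0, 2.0
V = 0.6 * c
vx, vy = 0.3*c, 0.4*c
theta = 0.7

gV = 1/np.sqrt(1 - V**2/c**2)
cos_a = gV + 0j                          # cos(alpha) = gamma(V)
sin_a = -1j*(V/c)*gV                     # sin(alpha) = -i(V/c)gamma(V)
tan_a = sin_a/cos_a                      # = -iV/c
s = 1 + tan_a**2*np.sin(theta)**2        # = 1 - (V/c)^2 sin^2(theta)

gv = 1/np.sqrt(1 - (vx**2 + vy**2)/c**2)
gvp = gv*(1 - V*vx/c**2)/np.sqrt(1 - V**2/c**2)          # Eq. (46)
vrad = ((vx - V)*cos_a*np.cos(theta)
        + vy*np.sqrt(1 - V**2/c**2)*np.sin(theta)) / (
        (1 - V*vx/c**2)*cos_a*np.sqrt(s))                # Eq. (47)

pref = (gvp/gv)*Q1*Q2/R**2
Fx = (pref*np.cos(theta)/(cos_a*s**1.5)                  # geometric F_x
      + 1j*pref*(vrad/c)*sin_a/(cos_a**2*s))
Fy = pref*np.sin(theta)/(cos_a**3*s**1.5)                # geometric F_y

A = (Q1*Q2/R**2)*(1 - V**2/c**2)*(1 - (V/c)**2*np.sin(theta)**2)**-1.5
print(Fx, A*(np.cos(theta) + np.sin(theta)*vy*V/c**2))   # imaginary parts cancel
print(Fy, A*np.sin(theta)*(1 - vx*V/c**2))
```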
## V Conclusions
We have presented a geometrical calculation of the relativistic addition of velocities, and of the electromagnetic interaction between two uniformly moving charged particles. The geometrical approach used here is an elegant and more intuitive alternative way of obtaining these important results of Special Relativity. We hope our work will usefully complement other pedagogical efforts centered on Minkowski space diagrams.
# Four-phase patterns in forced oscillatory systems
## I Introduction
Spatially extended systems characterized by the coexistence of two or more stable states compose a broad class of nonequilibrium pattern forming systems. The most common multistable systems are those that exhibit bistability, (e.g. chemical systems KaSh:94 ; POS:97 , vertically vibrated granular systems UMS:96 , and binary fluid convection KBS:88 ). Spatial patterns in these systems involve alternating domains of the two different stable states, which are separated from each other by interfaces or fronts. Bistable systems support a variety of patterns from spiral waves to splitting spots and labyrinths Pear:93 ; LeSw:95 ; HaMe:94b ; HaMe:94c ; GMP:96 . In some systems, such as the ferrocyanide-iodate-sulfite reaction LeSw:95 ; LMOS:93 and the oxidation of carbon monoxide on a platinum surface HBKR:95 , the bistability arises from the nonlinear nature of the system. In other systems such as liquid crystals in a rotating magnetic field MiMe:94 ; FRCG:94 ; FrGi:95 and periodically forced oscillators CoEm:92 , the bistability arises from a broken symmetry.
Periodically forced oscillatory systems are convenient systems for exploring multistability in pattern formation since the number of coexisting stable states can be controlled by changing the forcing frequency. Applying a periodic force of sufficient amplitude and at a frequency $`\omega _f\approx \frac{n}{m}\omega _0`$, where $`\omega _0`$ is the oscillation frequency of the unforced system, entrains the system to the forcing frequency. The entrained system has $`n`$ stable states each with the same oscillation frequency but in one of $`n`$ oscillation phases separated by multiples of $`2\pi /n`$. We refer to the $`n`$ different phase shifted states as “phase states” of the system.
Recent experiments using the ruthenium-catalyzed Belousov-Zhabotinsky reaction forced by periodic illumination revealed subharmonic resonance regimes $`\omega _f:\omega _0=`$ 2:1, 3:2, 3:1, 4:1, with two (2:1) , three (3:2,3:1), and four (4:1) stable phase states POS:97 ; LBMS:99 . Patterns consisting of alternating spatial domains with a phase shift of $`\pi `$ are observed within the 2:1 resonance regime, and three-phase patterns with spatial domains phase-shifted by $`2\pi /3`$ are observed within the 3:1 resonance regime POS:97 ; LBMS:99 ; LPACW:99 . The 4:1 resonance is more complicated. Adjacent spatial domains may differ in phase by either $`\pi `$ or $`\pi /2`$. As a result the asymptotic patterns that develop can have four phases, two phases, or a mixture of two and four phases.
In this paper we explore pattern formation in the 4:1 resonance regimes. In Section II, we describe our experimental observations of four phase patterns in the 4:1 resonance band of the forced Belousov-Zhabotinsky reaction. We then present an analytical study of the 4:1 resonance EHM:98 ; EHM:99 in Section III. The study is based on a normal form, or amplitude equation, approach which is strictly valid only close to the Hopf bifurcation of the unforced oscillatory system. In order to test the analytical predictions and to study the behavior of forced systems far from the Hopf bifurcation, which is the case in the experiments, we conduct numerical studies of two reaction-diffusion models (the FitzHugh-Nagumo and Brusselator). We describe the models and results in Section IV. In Section V we discuss and compare the analytical and numerical results with the experimental observations.
## II The periodically forced Belousov-Zhabotinsky reaction
We use a light-sensitive form of the Belousov-Zhabotinsky (BZ) reaction, a chemical reaction system with oscillatory kinetics, to study the 4:1 subharmonic resonance patterns. In the experiments, the chemicals of the BZ system diffuse and react within a $`0.4`$ $`\mathrm{mm}`$ thick porous membrane. The system is maintained in a non-equilibrium steady state by a continuous flow of fresh, well mixed reactant solutions BZ on either side of the thin membrane where the patterns form. The unforced pattern is a rotating spiral wave of ruthenium catalyst concentration.
We periodically force the system using spatially homogeneous square wave pulses of light with intensity $`I`$, where $`I`$ is the square of the forcing amplitude, and pulse frequency $`\omega _f`$ ($`\omega _f/2\pi `$ in Hz). We choose the frequency $`\omega _f`$ to be approximately four times the natural frequency of the unforced oscillations.
To determine the temporal response of a pattern when it is periodically perturbed at a particular pair of ($`I`$,$`\omega _f`$) parameter values we collect a time series of evenly sampled snapshots of a $`60\times 60`$ pixel region of the $`640\times 480`$ pixel image. We sample at a rate of approximately 30 frames/oscillation and calculate the Fast Fourier Transform for the time series of each pixel. The power spectrum of each pixel is determined. An average over all pixels provides a power spectrum of a pattern, as shown in Fig. 1. The 4:1 resonant patterns exhibit a dominant peak at $`\omega _f/4`$ in the power spectrum. Higher order harmonics are also present.
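A minimal sketch of this per-pixel demodulation (the array name `frames` and the sampling parameters are hypothetical placeholders, not values from the experiment):

```python
import numpy as np

def mode_amplitude(frames, fs, f_f):
    """frames: (T, ny, nx) stack of snapshots sampled at rate fs;
    returns the complex amplitude a(x, y) of the f_f/4 response."""
    T = frames.shape[0]
    spec = np.fft.rfft(frames, axis=0)/T
    freqs = np.fft.rfftfreq(T, d=1.0/fs)
    k = np.argmin(np.abs(freqs - f_f/4))   # bin closest to the f_f/4 line
    return spec[k]

# np.angle(a) and np.abs(a) then give the phase map and modulus map,
# the two representations used in Fig. 2(a) and Fig. 2(b)
```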
An example of a 4:1 resonant pattern observed in the experiments is shown in Fig. 2. The rotating four-phase spiral wave in Fig. 2(a) is the asymptotic state of the system. This image is a plot of the phase angle $`\mathrm{arg}(a)`$, where $`a=a(x,y)`$ is the complex Fourier amplitude associated with the $`\omega _f/4`$ mode for each pixel $`(x,y)`$ in the pattern. The four domains (white, light gray, dark gray and black) correspond to the four phase states with oscillation phases that are shifted by $`0`$, $`\pi /2`$, $`\pi `$, and $`3\pi /2`$ with respect to the forcing.
Fig. 2(b) is a different representation of the same data. In this case the response $`a`$ at $`\omega _f/4`$ is plotted in the complex plane instead of the $`xy`$ plane. This representation of the data allows us to see the distribution of the oscillation amplitude and phase at all pixels in the pattern. The four corners of the diamond shape in Fig. 2(b) are the four stable phase states. The edges of the diamond shape in Fig. 2(b) are formed from pixels at phase-fronts separating adjacent domains. The majority of pixels in the pattern are in one of the four corner states as the histogram of phase angles in Fig. 3 illustrates.
Traveling four-phase patterns exist over the entire dynamic range of forcing intensity $`I`$ in the 4:1 resonance region. The range of forcing intensity is limited by an $`I`$-dependence of the reaction kinetics. As $`I`$ is increased, the reaction kinetics shifts from oscillatory to excitable.
## III An amplitude equation for forced oscillatory systems
We study the experimental observations shown in the previous section using a normal form equation for the amplitude of the $`\omega _f/4`$ mode. Consider first an oscillatory system responding to the forcing at $`\omega _f/n`$ where $`n`$ is an integer. We assume the system is near the onset of oscillations, i.e. close to a Hopf bifurcation. The set of dynamical fields $`𝐮`$ describing the spatio-temporal state of the system can be written as
$$𝐮=𝐮_\mathrm{𝟎}+A\mathrm{exp}(i\omega _ft/n)+c.c.+\cdots ,$$
(1)
where $`𝐮_\mathrm{𝟎}`$ is constant, $`A`$ is a slowly varying complex amplitude, and the ellipses denote other resonances with smaller contributions. The slow space and time evolution of the amplitude $`A`$ is described by the forced complex Ginzburg-Landau equation,
$`A_\tau `$ $`=`$ $`(\mu +i\nu )A+(1+i\alpha )A_{zz}-(1-i\beta )|A|^2A`$ (2)
$`+\gamma _nA^{\ast (n-1)},`$
where $`\mu `$ is the distance from the Hopf bifurcation, $`\nu `$ is the detuning from the exact resonance, and $`\gamma _n`$ is the forcing amplitude.
For the special case $`n=4`$ (the 4:1 resonance) we can eliminate the parameter $`\mu `$ by rescaling time, space, and amplitude as $`t=\mu \tau `$, $`x=\sqrt{\mu /2}z`$ and $`B=A/\sqrt{\mu }`$ to obtain
$`B_t`$ $`=`$ $`(1+i\nu _0)B+\frac{1}{2}(1+i\alpha )B_{xx}-(1-i\beta )|B|^2B`$ (3)
$`+\gamma B^{\ast 3},`$
where $`\nu _0=\nu /\mu `$. Equation (3) also applies to the 4:3 subharmonic resonance. This follows from symmetry considerations: the system is symmetric to discrete time translations $`t\rightarrow t+\frac{2\pi }{\omega _f}=t+\frac{3\pi }{2\omega }`$. The amplitude equation must then be invariant under the transformation $`B\rightarrow B\mathrm{exp}(3\pi i/2)`$. The only forcing term satisfying this requirement to cubic order is $`B^{\ast 3}`$.
### III.1 Phase states and phase fronts
Constant solutions of Eq. (3) indicate that the system is entrained to the forcing. There are four stable constant solutions to Eq. (3), each with the same amplitude but with different phases, $`\mathrm{arg}(B)`$, which correspond to the four stable phase states. Simple expressions for these solutions and exact forms for the front solutions connecting them in space are obtained from the gradient version of Eq. (3), where $`\nu _0=\alpha =\beta =0`$:
$$B_t=B+\frac{1}{2}B_{xx}-|B|^2B+\gamma B^{\ast 3}.$$
(4)
The stable phase states (constant solutions) of Eq. (4) for $`0<\gamma <1`$ are $`(B_1,B_2,B_3,B_4)=(\lambda ,i\lambda ,-\lambda ,-i\lambda )`$ where $`\lambda =1/\sqrt{1-\gamma }`$. They are represented as solid circles in Fig. 4.
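These states can be verified directly; a small sketch for an arbitrary γ in (0,1):

```python
import numpy as np

gamma = 0.2
lam = 1/np.sqrt(1 - gamma)
for B in (lam, 1j*lam, -lam, -1j*lam):
    # right-hand side of Eq. (4) for a uniform state (B_xx = 0)
    rhs = B - abs(B)**2*B + gamma*np.conj(B)**3
    print(B, rhs)     # rhs vanishes for each of the four phase states
```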
Front solutions connecting pairs of these states are of two types, fronts between states separated in phase by $`\pi `$ and fronts between states separated in phase by $`\pi /2`$ (hereafter $`\pi `$-fronts and $`\pi /2`$-fronts). The $`\pi `$-front solutions are
$`B_{31}`$ $`=`$ $`B_1\mathrm{tanh}x,`$
$`B_{42}`$ $`=`$ $`B_2\mathrm{tanh}x.`$ (5)
For the particular parameter value $`\gamma =1/3`$ the $`\pi /2`$-fronts have the simple forms
$`B_{21}`$ $`=`$ $`\frac{1}{2}\sqrt{\frac{3}{2}}\left[1+i+(1-i)\mathrm{tanh}x\right],`$
$`B_{14}`$ $`=`$ $`\frac{1}{2}\sqrt{\frac{3}{2}}\left[1-i-(1+i)\mathrm{tanh}x\right],`$
$`B_{32}`$ $`=`$ $`-B_{14},`$
$`B_{43}`$ $`=`$ $`-B_{21}.`$ (6)
Additional front solutions follow from the invariance of Eq. (4) under reflection, $`x\rightarrow -x`$.
Figure 4 shows these front solutions (parametrized by the spatial coordinate $`x`$) in the complex $`B`$ plane. For example, the $`\pi `$-front $`B_{31}`$ is represented by the solid line connecting the state $`B_3`$ to the state $`B_1`$ as $`x`$ increases from $`-\mathrm{\infty }`$ to $`+\mathrm{\infty }`$. The $`\pi /2`$-front $`B_{21}`$ is represented by the dashed line connecting the state $`B_2`$ to the state $`B_1`$.
In the special case of the gradient system (4) all front solutions are stationary. The more general case with nongradient terms in Eq. (3) can be studied by perturbation theory when $`\nu _0,\alpha `$ and $`\beta `$ are small EHM:99 . The results of this analysis show that the $`\pi /2`$-fronts become propagating fronts while the $`\pi `$-fronts remain stationary.
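Both front types can be checked numerically as stationary solutions of Eq. (4) at γ = 1/3; in the sketch below the residual is nonzero only at the level of the finite-difference error:

```python
import numpy as np

gamma = 1/3
lam = 1/np.sqrt(1 - gamma)
x = np.linspace(-6, 6, 2001)
dx = x[1] - x[0]

def residual(B):                     # stationary Eq. (4)
    Bxx = np.gradient(np.gradient(B, dx), dx)
    return B + Bxx/2 - np.abs(B)**2*B + gamma*np.conj(B)**3

B31 = lam*np.tanh(x)                                      # pi-front, Eq. (5)
B21 = 0.5*np.sqrt(1.5)*((1 + 1j) + (1 - 1j)*np.tanh(x))   # pi/2-front, Eq. (6)
for B in (B31, B21):
    print(np.abs(residual(B)[5:-5]).max())   # small: finite-difference error only
```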
Figure 5 shows a rotating four-phase spiral wave from a numerical solution of the two-dimensional version cgl2 of Eq. (3). The phase diagram in the complex $`B`$ plane, shown in Fig. 5(b), has four $`\pi /2`$-fronts: $`B_{14}`$, $`B_{43}`$, $`B_{32}`$ and $`B_{21}`$. The amplitude $`B`$ corresponds to the complex Fourier amplitude $`a`$ measured in the experiment; the four-phase spiral pattern in Fig. 2 and the corresponding diamond-shape in the complex plane are predicted by the amplitude equation.
### III.2 A phase-front instability
The existence of the stationary $`\pi `$-front solutions suggests that standing two-phase patterns similar to those found under 2:1 resonant conditions POS:97 ; LBMS:99 may be observed in the 4:1 resonant case provided the $`\pi `$-fronts are stable. Standing two-phase patterns have not been observed in experiments in the 4:1 resonance band so the stability of $`\pi `$-fronts becomes a question. Stability conditions for $`\pi `$-front solutions were studied in Refs. EHM:98 ; EHM:99 . The results are described below.
Consider the pair of $`\pi /2`$-fronts shown in Fig. 6(a). They are separated by a distance $`2\chi `$ and connect the phase states $`B_3`$ and $`B_1`$. For $`\gamma \approx 1/3`$, the solutions (III.1) are good approximations to $`\pi /2`$-front solutions. The pair of fronts can be represented as
$$B(x;\zeta ,\chi )\approx B_{32}(x-\zeta +\chi )+B_{21}(x-\zeta -\chi )-i\lambda ,$$
(7)
where $`\zeta `$ is their mean position. For large separation distances ($`\chi >>1`$) $`B\approx B_{32}`$ when $`x-\zeta \approx -\chi `$ and $`B\approx B_{21}`$ when $`x-\zeta \approx +\chi `$ and Eq. (7) represents a pair of isolated $`\pi /2`$-fronts. When the distance between the pair decreases to zero ($`\chi \rightarrow 0`$), then $`B\rightarrow B_{31}`$ and Eq. (7) approaches a $`\pi `$-front solution.
The stability of $`\pi `$-fronts is determined by the interaction between a pair of $`\pi /2`$-fronts. Stable $`\pi `$-fronts are the result of an attractive $`\pi /2`$-front interaction; the $`\pi /2`$-fronts attract each other and the distance between them decreases to zero. A repulsive interaction implies unstable $`\pi `$-fronts. The potential $`V(\chi )`$ that governs this interaction,
$$\dot{\chi }=-\frac{dV}{d\chi },$$
(8)
is shown in Fig. 6(b) for various $`\gamma `$ values. The potential has a single maximum for $`\gamma <\gamma _c=1/3`$ which represents a repulsive interaction between $`\pi /2`$-fronts and the instability of $`\pi `$-fronts. It has a single minimum for $`\gamma >\gamma _c`$ which indicates the attractive interaction between $`\pi /2`$-fronts and the resulting stability of $`\pi `$-fronts. At $`\gamma _c`$ the potential is flat, $`V=0`$, for all $`\chi `$ values. At this parameter value, pairs of $`\pi /2`$-fronts do not interact and there is a continuous family of front pair solutions with arbitrary separation distances, $`2\chi `$, in Eq. (7). This degeneracy of solutions at the critical point $`\gamma =\gamma _c`$ is removed by adding higher order terms to the amplitude equation, as we discuss in Section III.4.
To summarize, stationary $`\pi `$-front solutions of Eq. (3) are stable for forcing amplitudes $`\gamma >\gamma _c=1/3`$. When $`\gamma `$ is decreased past $`\gamma _c`$, $`\pi `$-fronts lose stability and split into pairs of propagating $`\pi /2`$-fronts. The splitting process is shown in Fig. 7 where the $`B_{31}`$ $`\pi `$-front evolves into the pair of stable traveling $`\pi /2`$-fronts, $`B_{32}`$ and $`B_{21}`$ when $`\gamma <\gamma _c`$. The parity symmetry $`\chi \rightarrow -\chi `$ makes evolution toward the pair $`B_{14}`$ and $`B_{43}`$ equally likely. The splitting occurs for forcing amplitudes arbitrarily close to $`\gamma _c`$, although in that case the time scale of this process becomes very long.
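The splitting scenario of Fig. 7 can be reproduced with a straightforward explicit integration of the gradient equation (4); the grid, time step, and run length below are assumed illustrative values (near γc the splitting becomes very slow, so γ is chosen well below 1/3):

```python
import numpy as np

gamma = 0.2                        # below gamma_c = 1/3: pi-front unstable
lam = 1/np.sqrt(1 - gamma)
N, L, dt = 512, 100.0, 0.01
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

rng = np.random.default_rng(0)
B = lam*np.tanh(x) + 1e-3*(rng.standard_normal(N) + 1j*rng.standard_normal(N))

for _ in range(100000):
    Bxx = np.zeros_like(B)
    Bxx[1:-1] = (B[2:] - 2*B[1:-1] + B[:-2])/dx**2
    B = B + dt*(B + Bxx/2 - np.abs(B)**2*B + gamma*np.conj(B)**3)
    B[0], B[-1] = -lam, lam        # pin the asymptotic states B_3 and B_1

# the pinned pi-front should have split by now: plotting np.angle(B)
# reveals an intermediate plateau near B_2 (or B_4) between the two
# slowly separating pi/2-fronts
```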
### III.3 Effects of the phase-front instability on pattern formation
The stability of stationary $`\pi `$-fronts for $`\gamma >\gamma _c`$ suggests the predominance of standing two-phase patterns. These patterns involve alternating domains with oscillation phases shifted by $`\pi `$ with respect to one another. Domains shifted by $`\pi /2`$ may exist as transients; the interactions between $`\pi `$-fronts and $`\pi /2`$-fronts always produce $`\pi /2`$-fronts which are stable but attract one another and coincide to form stationary $`\pi `$-fronts. Since the $`\pi /2`$-fronts are traveling these transients are relatively short. For $`\gamma <\gamma _c`$ the interactions between the $`\pi /2`$-fronts are repulsive. The $`\pi `$-fronts are unstable and split into pairs of traveling $`\pi /2`$-fronts. As a result, traveling waves with all four phase-states are the asymptotic pattern.
A typical two-dimensional traveling pattern involving all four phases is the four-phase spiral wave shown in Fig. 2 or in Fig. 5. Figure 8 shows the effect of the phase-front instability on a four-phase spiral wave. The initial spiral wave (Fig. 8(a)) was obtained by solving a two-dimensional version of Eq. (3) for $`\gamma <\gamma _c`$. The following three frames (Fig. 8(b)-(d)) are snapshots showing the evolution of the initial four-phase spiral wave into a standing two-phase pattern after $`\gamma `$ is increased above $`\gamma _c`$. The evolution begins at the spiral core where the attractive interactions between pairs of $`\pi /2`$-fronts are the strongest. The coalescence of $`\pi /2`$-fronts leaves behind a stationary $`\pi `$-front which grows in length until no $`\pi /2`$-fronts are left, as is evident by the single line in the complex $`B`$ plane shown in Fig. 8(d).
### III.4 Higher order terms in the amplitude equation
From the analysis of Eq. (3) we have shown that two-phase patterns must be standing and four-phase patterns must be traveling. The analysis of the equation with higher order contributions suggests the possible existence of a small $`\gamma `$ range, of order $`\mu \ll 1`$, surrounding $`\gamma _c`$ where slowly traveling two-phase patterns exist.
The higher order contributions to Eq. (3), such as $`|B|^4B`$, or $`|B|^2B_{xx}`$, lift the degeneracy of the instability. Figure 9 shows two possible scenarios for the front interaction potential $`V`$ when higher order contributions to Eq. (3) are included (both scenarios lift the degeneracy of the phase-front instability). In one case, shown in Fig. 9(a), the stationary $`\pi `$-front loses stability to a pair of counter-propagating $`\pi `$-fronts in a pitchfork bifurcation which leads to a double-minimum potential. This scenario is a nonequilibrium Ising-Bloch pitchfork bifurcation of $`\pi `$-fronts like the one found in the 2:1 resonance case CLHL:90 and in other bistable systems FRCG:94 ; IMN:89 ; HaMe:94a ; BRSP:94 . It leads to slowly traveling two-phase patterns in the range where $`\gamma `$ is near $`\gamma _c`$. In the scenario shown in Fig. 9(b), the stationary $`\pi `$-front loses stability via a subcritical bifurcation which leads to a double-maximum potential. In this case there is a range of stable $`\pi `$-fronts coexisting with pairs of separated $`\pi /2`$-fronts. This allows the possibility of patterns containing both $`\pi `$-fronts and $`\pi /2`$-fronts. Beyond this range the potential has a single maximum and $`\pi `$-fronts split into pairs of $`\pi /2`$-fronts. Both scenarios persist over a range of $`\gamma `$ of order $`\mu `$, the distance from the Hopf bifurcation.
## IV Numerical solutions of periodically forced reaction-diffusion models
The amplitude equation analysis predicts the existence of a phase-front instability near the Hopf bifurcation and hints at possible modifications of the instability as the distance from the Hopf bifurcation is increased. Our objectives in this section are to test the existence of the instability in reaction-diffusion models and to use the models to examine how the instability is modified far from the Hopf bifurcation.
### IV.1 The FitzHugh-Nagumo model
We study a periodically forced version of the FitzHugh-Nagumo equations
$`u_t`$ $`=`$ $`u(1+\mathrm{\Gamma }\mathrm{cos}\omega _ft)-u^3-v+\nabla ^2u,`$ (9)
$`v_t`$ $`=`$ $`ϵ(u-a_1v)+\delta \nabla ^2v.`$
The unforced model is obtained by setting $`\mathrm{\Gamma }=0`$. The uniform state $`(u,v)=(0,0)`$ undergoes a Hopf bifurcation as $`ϵ`$ is decreased past $`ϵ_c=1/a_1`$. The Hopf frequency is $`\omega _H=\sqrt{ϵ_c-1}`$ and the distance from the Hopf bifurcation is measured by $`\mu =(ϵ_c-ϵ)/ϵ_c`$.
We compute the numerical solutions of Eq. (9) in the 4:1 resonance band ($`\omega _f\approx 4\omega _H`$) and close to the Hopf bifurcation ($`\mu \ll 1`$). Close to the Hopf bifurcation the amplitude equation analysis applies. We expect to find a critical value of the forcing amplitude $`\mathrm{\Gamma }_c`$ corresponding to the phase-front instability point $`\gamma _c`$ in the amplitude equation. For the FitzHugh-Nagumo equations this $`\mathrm{\Gamma }_c`$ will, in general, depend on the parameters $`ϵ`$, $`\delta `$, $`a_1`$, and $`\omega _f`$. In the following we fix $`a_1=1/2`$, $`\delta =0`$, $`\omega _f=4`$ and only vary $`ϵ`$ (the parameter that controls the distance $`\mu `$ to the Hopf bifurcation) and the forcing amplitude $`\mathrm{\Gamma }`$.
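A sketch of such a one-dimensional run (explicit Euler with assumed grid and time-step values; this is an illustration of the setup, not the production scheme used for the figures):

```python
import numpy as np

a1, wf = 0.5, 4.0
eps, Gamma = 1.9, 2.0          # mu = (eps_c - eps)/eps_c = 0.05
N, L, dt = 1024, 200.0, 0.001
dx = L/N

rng = np.random.default_rng(1)
u = 0.1*rng.standard_normal(N)
v = np.zeros(N)

t = 0.0
for _ in range(400000):        # t = 400, roughly 250 forcing periods
    uxx = (np.roll(u, 1) - 2*u + np.roll(u, -1))/dx**2
    u, v, t = (u + dt*(u*(1 + Gamma*np.cos(wf*t)) - u**3 - v + uxx),
               v + dt*eps*(u - a1*v),        # delta = 0: no diffusion in v
               t + dt)

# sampling u stroboscopically at the forcing period and demodulating at
# wf/4 should reveal the phase domains and fronts discussed below
```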
Close to the Hopf bifurcation we find stable stationary $`\pi `$-fronts for forcing amplitudes $`\mathrm{\Gamma }>\mathrm{\Gamma }_c`$. Below $`\mathrm{\Gamma }_c`$, stationary $`\pi `$-fronts are unstable and split into pairs of $`\pi /2`$-fronts. Figure 10 illustrates this in a numerical solution of a one-dimensional version of Eq. (9). A stable $`\pi `$-front pattern is generated from random initial conditions with $`\mathrm{\Gamma }>\mathrm{\Gamma }_c`$. At $`t=0`$, $`\mathrm{\Gamma }`$ is decreased below $`\mathrm{\Gamma }_c`$; the $`\pi `$-front becomes unstable and splits into a pair of traveling $`\pi /2`$-fronts.
The numerically computed $`\mathrm{\Gamma }_c`$ for the solution in Fig. 10 is $`\mathrm{\Gamma }_c\approx 2.15`$. Since $`\mathrm{\Gamma }_c`$ is a function of the parameters in Eq. (9), we define a new parameter $`\eta =(\mathrm{\Gamma }_c-\mathrm{\Gamma })/\mathrm{\Gamma }_c`$ that measures the distance from the phase-front instability point. In Fig. 10, $`\eta \approx 0.012`$ indicating that we are just beyond the critical point.
Farther from the Hopf bifurcation we find that the phase-front instability still exists. Figure 11 shows the evolution of an initial unstable stationary $`\pi `$-front with parameters chosen so the system is far from the Hopf bifurcation but at the same distance, $`\eta \approx 0.012`$, from the phase-front instability. The asymptotic solution is a slowly propagating $`\pi `$-front, in contrast to a pair of separated $`\pi /2`$-fronts that develop close to the Hopf bifurcation (see Fig. 10). The range of forcing amplitudes near $`\mathrm{\Gamma }_c`$ over which these traveling $`\pi `$-fronts exist increases with $`\mu `$. At smaller forcing amplitudes, below the range of traveling $`\pi `$-fronts, $`\pi `$-fronts split into pairs of $`\pi /2`$-fronts and four-phase traveling patterns prevail.
In two dimensions the typical traveling wave pattern for $`\mathrm{\Gamma }<\mathrm{\Gamma }_c`$ is a rotating four-phase spiral wave. Figure 12(a) shows a stable four-phase spiral wave generated from random initial conditions. Using this spiral as an initial condition, we increase $`\mathrm{\Gamma }`$ above $`\mathrm{\Gamma }_c`$ and the system evolves into a two-phase standing pattern. Figures 12(b)-(d) show the transition. Since the $`\pi /2`$-fronts are attracting, the spiral is unstable and two of the four phase domains shrink until a standing two-phase pattern remains.
The numerical solutions of the forced FitzHugh-Nagumo equations support the predictions of the amplitude equation analysis. Close to the Hopf bifurcation, the phase-front instability is found (compare Fig. 7 with Fig. 10 and Fig. 8 with Fig. 12). Far from the Hopf bifurcation the instability persists. The effects expected from higher order terms in the amplitude equation appear even far from the Hopf bifurcation ($`\mu =0.25`$): the degenerate phase-front instability found near the Hopf bifurcation (as $`\mu \rightarrow 0`$) turns into an Ising-Bloch pitchfork bifurcation, and stationary $`\pi `$-fronts bifurcate to traveling $`\pi `$-fronts rather than splitting into $`\pi /2`$-fronts.
### IV.2 The Brusselator model
We tested the transition from four-phase traveling waves to two-phase standing waves using another reaction-diffusion model, the forced Brusselator,
$`u_t`$ $`=`$ $`c-(d+1)u+[1+\mathrm{\Gamma }\mathrm{cos}\omega _ft]u^2v+\nabla ^2u,`$ (10)
$`v_t`$ $`=`$ $`du-u^2v+\delta \nabla ^2v.`$
The unforced Brusselator, obtained by setting $`\mathrm{\Gamma }=0`$, has a stationary uniform state $`(u,v)=(c,d/c)`$ which undergoes a Hopf bifurcation as $`d`$ is increased past $`d_c=1+c^2`$. The Hopf frequency is $`\omega _H=c`$ and the distance from the Hopf bifurcation is measured by $`\mu =(d-d_c)/d_c`$.
We studied Eq. (10) in the 4:1 resonance band using a numerical partial differential equation solver PaCa:96 ; APCMLS:00 . We found that below a critical forcing amplitude $`\mathrm{\Gamma }_c`$ the solutions are rotating four-phase spiral waves consisting of $`\pi /2`$-fronts (see Fig. 13(a)). The four-phase spiral wave was generated by one of the two following initial conditions: a spiral wave computed from the unforced ($`\mathrm{\Gamma }=0`$) Brusselator equations, or the linear functions
$`u(x,y)=`$ $`y/L,`$ $`0\le y\le L,`$
$`v(x,y)=`$ $`2x/L+4`$ $`0\le x\le L,`$
where $`L=632.5`$.
Above $`\mathrm{\Gamma }_c`$ pairs of $`\pi /2`$ fronts attract each other and the core of the spiral evolves into an expanding $`\pi `$-front. Figures 13(b)-(d) illustrate this process. When the $`\pi /2`$ fronts disappear, the resulting asymptotic pattern is two states separated by a stationary $`\pi `$-front. The transition from a four-phase spiral wave to a two-phase stationary pattern, as in the amplitude equation model and the FitzHugh-Nagumo model, indicates the existence of the phase-front instability in the Brusselator model.
## V Conclusions
We studied 4:1 resonant patterns in Belousov-Zhabotinsky chemical experiments, in an amplitude equation for forced oscillatory systems (the forced complex Ginzburg-Landau equation), and in forced FitzHugh-Nagumo and Brusselator reaction-diffusion models. At low forcing amplitudes all of these systems exhibit traveling four-phase patterns.
An analysis of a forced complex Ginzburg-Landau equation, derivable from periodically forced reaction-diffusion systems near a Hopf bifurcation, predicts traveling four-phase patterns at low forcing amplitude and standing two-phase patterns at high forcing amplitude. The transition mechanism between these two patterns is a degenerate phase-front instability where a stationary $`\pi `$-front splits into a pair of traveling $`\pi /2`$-fronts. We derived an interaction potential between $`\pi /2`$-fronts that describes the instability as a change from repulsive to attractive $`\pi /2`$-front interactions. We investigated the behavior of the instability near the critical point where higher order terms in the amplitude equation become important. We found that these terms lift the degeneracy of the instability and introduce a narrow intermediate regime. In this regime we found both slowly traveling $`\pi `$-fronts and the coexistence of stable stationary $`\pi `$-fronts and repelling pairs of $`\pi /2`$-fronts.
We further investigated this phase-front instability using the FitzHugh-Nagumo and the Brusselator reaction-diffusion models. These models exhibit the instability even far from the Hopf bifurcation where the amplitude equation is not known to be valid. Near the Hopf bifurcation the instability, at $`\mathrm{\Gamma }_c`$, separates patterns of stationary $`\pi `$-fronts from patterns of traveling $`\pi /2`$-fronts. In two dimensions, a rotating four-phase spiral wave evolves into a two-phase standing pattern when $`\mathrm{\Gamma }`$ is increased past $`\mathrm{\Gamma }_c`$. In the FitzHugh-Nagumo model we found, far from the Hopf bifurcation, an intermediate range near $`\mathrm{\Gamma }_c`$ where traveling $`\pi `$ front patterns were observed. These numerical results are in full agreement with the theoretical predictions based on the amplitude equation.
The standing two-phase patterns found in the amplitude equation and in the FitzHugh-Nagumo and Brusselator models were not observed in the experiments, which were conducted far from the Hopf bifurcation. However, the existence of the phase-front instability far from the Hopf bifurcation was found in the numerical studies of the FitzHugh-Nagumo and Brusselator models. We conclude that the large distance from the Hopf bifurcation does not explain the absence of standing two-phase patterns in the experiments. A more likely explanation is the limited dynamic range of the forcing amplitude in the experiments. Experiments show that the dynamics of the BZ reaction are $`\gamma `$-dependent; as the forcing amplitude is increased, the dynamics undergo a transition from oscillatory to excitable kinetics. The excitable kinetics are not described by the amplitude equation or by the reaction-diffusion models in the parameter ranges we studied.
###### Acknowledgements.
We acknowledge the support of the Engineering Research Program of the Office of Basic Energy Sciences of the U.S. Department of Energy. Additional support was provided by the ASCI project B347883 through the Lawrence Berkeley National Lab; the Robert A. Welch Foundation; grant No. 98-00129 from the United States - Israel Binational Science Foundation; and by the Department of Energy, under contract W-7405-ENG-36.
# A modified dual-slope method for heat capacity measurements of condensable gases
## I INTRODUCTION
The adiabatic heat-pulse calorimetry has been widely used to investigate the thermodynamic properties of materials for more than a century. The applied principle, $`C=\mathrm{\Delta }Q/\mathrm{\Delta }T`$, in which the heat capacity $`C`$ of the sample is determined by the pulse heat $`\mathrm{\Delta }Q`$ supplied to the sample under adiabatic conditions and the resulting temperature rise $`\mathrm{\Delta }T`$, is well known. Due to the inherent simplicity as well as the general applicability independent of the sample thermal conductivity, this method is the most favored choice for heat capacity measurements of condensable gases, which have poor thermal conductivity in their low temperature solid phase. Although traditional adiabatic calorimetry has high precision and can be used to determine the latent heat at strong first-order transitions, it is very difficult to achieve the resolution needed to characterize the temperature dependence of $`C_v(T)`$ (or $`C_p(T)`$) close to the critical temperature $`T_c`$ for a second-order transition. Also, because of the inherent limitations on achieving the ideal adiabatic conditions at low temperatures and the long time required to cover a few tens of kelvin temperature range with a reasonable number of data points, new, user-friendly non-adiabatic techniques with excellent sensitivity are needed to study the heat capacity of condensable gases. Among these new techniques, the most sensitive method is the ac method devised by Sullivan and Seidel. While the ac method allows one to obtain accurate values of $`C_v`$, it suffers from the fact that it normally must be used for small samples at low ac frequencies and over limited temperature ranges (typically $`T<20`$ K). In order to keep the sample in equilibrium with the heater and thermometer during one cycle, the time constant for thermal relaxation ($`\tau _1`$) of the sample plus sample holder (or calorimeter) to the thermal reservoir must be long compared to the period of the driving flux. In addition, the sample’s internal equilibrium time constant ($`\tau _2`$) with the sample holder (or calorimeter) must be short compared to $`\tau _1`$. It is this latter constraint which restricts the applicability of this method to small samples with sufficiently large thermal conductivity such as pure metals below 20 K.
A slightly less sensitive method involving a fixed heat input followed by a temperature decay measurement is the relaxation method. In this method the sample is raised to an equilibrium temperature above the thermal reservoir and then allowed to relax to the reservoir temperature without heat input. A recent improvement of this technique utilizing advanced numerical methods was provided by Hwang $`et`$ $`al.`$ The relaxation method requires extensive calibration of the heat losses of the sample as a function of the temperature difference between the reservoir and the final sample temperature and relies on accurate, smooth temperature calibration of thermometers. The technique depends heavily on the numerical methods used to determine many equilibrium heat losses during the heating portion of the relaxation cycle to obtain good temperature resolution of the heat capacity changes. Though Hwang $`et`$ $`al.`$ addressed the problems arising from large $`\tau _2`$, this method still fails to provide accurate $`C_v`$ values above $`\sim `$20 K for large samples with poor thermal conductivity.
A variation of the relaxation method, called the dual-slope method, was first discussed by Riegel and Weber. In this method an extremely weak thermal link to the reservoir is used and the temperature of the sample is recorded over a 10 h. cycle while heating at constant power for one half of the cycle, and then allowing the sample to relax with zero heat input. The heat loss to the reservoir and surroundings can be eliminated from the calculation of $`C_v(T)`$ using this technique provided the sample, sample holder, and the thermometer are always in equilibrium with each other (the reason for the long 10 h. cycle to cover a 3 K range), and the reservoir temperature is held constant over the 10 h. cycle. A further modification of this technique is a hybrid between the ac method and the dual-slope method, which reduces the duration of the cycle to about two hours. Although the dual-slope method is very elegant and easy to implement, the technique has only been implemented for small samples (typically less than 0.5 g) with good thermal conductivity and at temperatures less than $`\sim `$20 K. The success of this method depends heavily on achieving very good thermal equilibrium between the sample, sample holder, and the thermometer, and for large samples with poor thermal conductivity (i.e., large $`\tau _2`$) this method may also fail when we consider only the first order approximation of the heat balance equations of Riegel and Weber.
In the case of condensable gases, the sample size must be at least a few grams to reduce the spurious effect of sample condensed in the fill line and the heat leak through this tube. Due to the low thermal conductivity of these samples, in particular for powdered samples which may not wet the calorimeter walls, the $`\tau _2`$ can be very large leading to the failure of most of the above techniques. The present article describes a modified dual-slope method in which $`C_v`$ is evaluated by directly comparing the heating and cooling rates of the sample temperature for two algebraically independent heat pulse sequences without explicit use of the thermal conductance between sample and thermal bath. For the specific geometry of the calorimeter that we used, which is most suitable for the heat capacity measurements of condensable gases in the presence of external electric or magnetic fields, higher order heat balance equations are calculated, which can easily be adapted for other sample configurations as well. Because of the explicit consideration of the higher order equations, the problem of large $`\tau _2`$ is naturally addressed and during the heating and cooling cycles one does not require the sample to be in equilibrium with the thermometer, or in other words, the thermometer need not necessarily record the actual sample temperature. Due to this freedom we can obtain $`C_v`$ of samples with poor thermal conductivity very rapidly.
## II Theory
To model the thermal response of a heat-pulse calorimeter (Fig. 2) for heat capacity measurements, with the schematic diagram shown in Fig. 1, where the calorimeter is heated with the application of heater power $`P(t)`$ to some desired temperature $`T_{max}`$ and then allowed to cool down, we start with the following set of heat balance equations:
$$P(t)=C^{}\dot{T_h^{}}+\lambda _s(T_h^{}-T_h)+\lambda _r(T_h^{}-T_0)$$
(1)
and
$$0=C\dot{T_h}+\lambda _s(T_h-T_h^{})-P_0(T_h)$$
(2)
for heating, and
$$0=C^{}\dot{T_c^{}}+\lambda _s(T_c^{}-T_c)+\lambda _r(T_c^{}-T_0)$$
(3)
and
$$0=C\dot{T_c}+\lambda _s(T_c-T_c^{})-P_0(T_c)$$
(4)
for cooling, where $`C`$ and $`T`$ are the heat capacity and the temperature of the thermometer well plus the inner conductor respectively, $`C^{}`$ and $`T^{}`$ are those of the outer conductor plus the heater coil. From these equations we need to obtain the quantity $`(C+C^{})`$ which is the net heat capacity of the calorimeter. Once the sample is condensed in the calorimeter, both $`C`$ and $`C^{}`$ will be modified but the above equations are still valid. $`P_0(T)`$ is the parasitic stray heat due to the conduction of heat through the fill line, the stainless steel suspension rod, and the thermometer leads as well as radiation from the pumping line. From the geometry of the calorimeter (Fig. 2) it is reasonable to assume that this stray heat affects only the inner conductor heat balance equations (Eqs. 2 & 4).
Since the thermometer is firmly coupled to the inner conductor through the thermometer well, the temperature $`T`$ is what the thermometer practically records. Hence we need to eliminate $`T^{}`$ to obtain $`(C+C^{})`$. For $`T_h=T_c`$ Eqs. 1 & 3 reduce to
$$P(t)=C^{}(\dot{T_h^{}}-\dot{T_c^{}})+(\lambda _s+\lambda _r)(T_h^{}-T_c^{}).$$
(5)
From Eqs. 2 & 4 and their first derivatives we obtain
$$(T_h^{}-T_c^{})=\frac{C}{\lambda _s}(\dot{T_h}-\dot{T_c}),$$
(6)
$$(\dot{T_h^{}}-\dot{T_c^{}})=(\dot{T_h}-\dot{T_c})+\frac{C}{\lambda _s}(\ddot{T_h}-\ddot{T_c}).$$
(7)
After eliminating $`T_h^{}`$, $`T_c^{}`$ and their time derivatives from Eqs. 5, 6, and 7 we obtain
$$P(t)=(C+C^{})(\dot{T_h}-\dot{T_c})+\frac{CC^{}}{\lambda _s}(\ddot{T_h}-\ddot{T_c})+\frac{C\lambda _r}{\lambda _s}(\dot{T_h}-\dot{T_c}).$$
(8)
Clearly, Eq. (8) reduces to the first order derivations of Riegel and Weber for $`\lambda _s\gg 1`$. However, for condensed gases, in particular for powdered samples, this condition is never met. In the present geometry, for a sufficiently weak thermal link $`\lambda _r`$ where $`\lambda _s\gg \lambda _r`$, one can ignore the last term in Eq. (8). To solve the remaining equation exactly for $`C+C^{}`$, we need to obtain Eq. (8) for two algebraically independent pulse sequences of $`P(t)`$.
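In practice this amounts to a 2×2 linear solve at each temperature. A minimal sketch with hypothetical variable names, assuming the rates and curvatures of the two pulse sequences have already been interpolated to a common temperature:

```python
import numpy as np

def net_heat_capacity(P_a, dT_a, ddT_a, P_b, dT_b, ddT_b):
    """Solve Eq. (8), last term dropped, at one temperature T, where
    dT  = (dTh/dt - dTc/dt) and ddT = (d2Th/dt2 - d2Tc/dt2)
    for two independent pulse sequences a and b."""
    M = np.array([[dT_a, ddT_a],
                  [dT_b, ddT_b]])
    CpCp, CCp_over_ls = np.linalg.solve(M, np.array([P_a, P_b]))
    return CpCp            # the net heat capacity C + C'
```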
## III Experiment
Figure 2 shows the design of the calorimeter optimized for the above technique. The thermal reservoir is made up of a brass vacuum can (18 cm long, 4 cm diameter) with a manganin wire wound uniformly on the entire length of the outer surface. A dilute solution of GE-varnish is used to soak the cloth insulation of the manganin heater wire to provide good thermal contact with the vacuum can. A grooved copper tube (4 cm long) is soldered to the top flange, which can be used for heat sinking the heater and thermometer wires before connecting them to the calorimeter. A 122 cm long stainless steel tube (0.96 cm diameter) is soldered to the top flange to pump the vacuum space as well as support the vacuum can. Copper radiation baffles are placed inside this tube at regular intervals to reduce the thermal radiation. In addition, the top flange supports up to ten individual vacuum feed-throughs (not shown), which are thermally anchored to the flange through STYCAST 2850FT vacuum seal which has good thermal conductivity at low temperatures. The calorimeter consists of two concentric, thin walled, gold plated, OFHC copper cylinders 10.2 cm in length providing a 1 mm cylindrical shell space for condensing the samples. This shell space is vacuum sealed at both ends with homemade 1 mm thick STYCAST 2850FT O-rings. The outer conductor (2.7 cm inner diameter and 0.2 mm wall thickness) with a brass heater wire uniformly wound on the entire length of its outer surface serves as a radial heating element. Uniform winding of the heater wire is very crucial in eliminating longitudinal heat flow thereby reducing the temperature gradient along the length of the calorimeter. For good thermal contact with the outer cylinder, a dilute solution of GE-varnish was applied to the heater wire. In the middle of the inner conductor (2.5 cm diameter, 0.4 mm wall thickness) a thin copper ring as well as a copper thermometer well are soldered (see Fig. 2). The larger wall thickness (0.4 mm instead of 0.2 mm) for the inner cylinder is necessary to withstand the pressures generated when the solid samples melt at high temperatures. The copper ring sandwiched between two threaded nylon wings supports the calorimeter when suspended from the top flange with the help of a threaded stainless steel rod. This arrangement not only reduces the unwanted heat leaks to the calorimeter but also provides excellent mechanical stability when the free end of the stainless steel rod is slipped into the post at the bottom of the vacuum can (see Fig. 2).
The 1.6 mm diameter cupronickel alloy tube serving as the sample fill line is coiled into a three turn spring (not shown) before soldering it to the inner cylinder. To be able to apply an electric field across the sample cell when necessary, it is important to ground the outer conductor (to reduce electrical interference from the heater current) and apply the high voltage to the inner conductor. Hence the coiled section of the fill line is isolated electrically as well as thermally from the remaining length with the help of a vacuum joint made up of a wider cupronickel tube, cigarette paper, and STYCAST seal. To assist sample condensation, the manganin heater wire is wrapped on the fill line tube outside of the vacuum can. Two factory calibrated germanium resistance thermometers placed inside the thermometer wells are used for recording the temperatures of the calorimeter and the thermal reservoir. The entire assembly is placed in a second vacuum can (125 cm long, 5 cm diameter) which can be lowered into a commercial liquid He dewar. With this arrangement including a few suitably placed copper radiation baffles at the neck of the outer vacuum can, a standard 100 liter dewar lasted for almost a month while heat capacity experiments were carried out in the $`4.2-75`$ K range.
## IV RESULTS AND DISCUSSION
Our observations indicate that due to the high degree of cylindrical symmetry, even though the calorimeter is rather unusually long ($`\sim `$10 cm), thermometers record the expected temperature values to very high accuracy. Figure 3 shows typical temperature excursions when the calorimeter is filled with pure N<sub>2</sub> as a sample. For simplicity we chose square and triangular voltage pulses for two algebraically independent $`P(t)`$ sequences. Both pulses can be easily generated with the help of a standard signal generator or data acquisition card and a suitable operational amplifier. Before the heat pulse is applied, the calorimeter is left to equilibrate with the surroundings until its temperature trace is horizontal with the time axis. Typically, when the reservoir temperature is adjusted to a new value, the calorimeter attained equilibrium in 30 to 60 min. To ensure that the calorimeter always attained equilibrium from either a higher temperature or a lower temperature for all pulse sequences (i.e., $`T\rightarrow T_0^+`$ or $`T\rightarrow T_0^-`$), and that small hysteresis of the thermometer does not affect the trace, when $`T_0`$ is set to a higher value, a small heat pulse is simultaneously applied to the calorimeter so that its temperature rises above the equilibrium temperature by at least 2 to 3 K. When such a precaution was not taken, we observed that the base line of the two traces in Fig. 3 differed by $`\sim `$100 mK resulting in spurious results when fit to Eq. (8).
Although we assumed that the last term in Eq. (8) will be negligible for $`\lambda _s\gg \lambda _r`$, it is clear that for the long exponential decay section of the traces (below the lower horizontal line in Fig. 3) where $`(\dot{T_h}-\dot{T_c})`$ can be very large, the above approximation may not be valid. We observed that for a typical 6.5 K pulse (similar to Fig. 3), the useful window of temperature where the above approximation is valid for obtaining faithful results is only 3.5$`-`$4 K (the section between the two dashed lines in Fig. 3). It is crucial that during the entire pulse sequence, the reservoir temperature $`T_0`$ should be constant. With the help of a program written in LabVIEW, which controls the voltage across the reservoir heater through a multifunction AD/DA card and with the thermometer output in the negative feedback loop, we were able to maintain $`T_0`$ to within 30 mK during the entire pulse sequence (see Fig. 3). Because the feedback is controlled through software, excellent stability in $`T_0`$ was achieved by changing the feedback parameters depending on the pulse height and temperature range.
Figure 4 shows the heat capacity curves of the calorimeter filled with pure N<sub>2</sub> for two $`T_0`$ values. Excellent overlap of the curves demonstrates that with this technique, we can obtain heat capacity of samples with poor thermal conductivity very rapidly ($`\sim `$3.5 K range in 3 h), without the further curve fitting generally required with other techniques. By extending the LabVIEW program written for controlling the reservoir temperature $`T_0`$, we fully automated the data acquisition process to obtain pulse sequence data for various $`T_0`$ values with an interval of 2.5 K to ensure data overlap from adjacent pulse sequences. Once the data are obtained, standard third degree polynomial fits (only for the useful temperature interval similar to the one between dashed lines in Fig. 3) are employed for obtaining time derivatives in Eq. (8). Once the heat capacity is calculated for each pulse sequence, no further curve fitting is necessary and in general for $`T<30`$ K, the overlap between adjacent curves is better than the one shown in Fig. 4. Figure 5 shows the heat capacity of the empty calorimeter (with small amounts of He gas for better thermal contact) obtained through the above automated process in the $`10-70`$ K range. The kink at 60 K, the presence of which is accidental, shows the heat capacity near a continuous phase transition with excellent resolution. The data shown without further curve fitting clearly demonstrates the high sensitivity of the technique.
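A sketch of this reduction step (array names are hypothetical): fit each monotone trace section with a cubic, take the derivative analytically from the fit, and interpolate the rates to a common temperature grid so heating and cooling can be compared at equal $`T`$:

```python
import numpy as np

def rates_at_T(t, T, Tgrid):
    """Third-degree polynomial fit of one trace section over the useful
    window; returns dT/dt evaluated on the common temperature grid."""
    c = np.polyfit(t, T, 3)
    Tfit = np.polyval(c, t)
    dT = np.polyval(np.polyder(c), t)
    order = np.argsort(Tfit)              # np.interp needs increasing x
    return np.interp(Tgrid, Tfit[order], dT[order])

# dTh = rates_at_T(t_heat, T_heat, Tgrid)
# dTc = rates_at_T(t_cool, T_cool, Tgrid)
# their differences (and those of the second derivatives) enter Eq. (8)
```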
In summary, we have demonstrated the use of a novel technique to study the heat capacity of moderately large samples with poor thermal conductivity in the 7.5-70 K range. A fully automated calorimeter for rapid measurement of the heat capacity of condensable gases utilizing the above technique has been presented. The technique along with the automated calorimeter with a provision to apply external electric and magnetic fields is particularly useful for the study of continuous phase transitions in molecular solids as well as field induced changes in the heat capacity.
###### Acknowledgements.
The authors gratefully acknowledge the support of L. Phelps, G. Labbe, B. Lothrop, W. Malphurs, M. Link, E. Storch, T. Melton, S. Griffin, and R. Fowler. This work is supported by a grant from the National Science Foundation No. DMR-962356.
# The Markovian metamorphosis of a simple turbulent cascade model
## Abstract
Markovian properties of a discrete random multiplicative cascade model of log-normal type are discussed. After taking small-scale resummation and breaking of the ultrametric hierarchy into account, qualitative agreement with Kramers-Moyal coefficients, recently deduced from a fully developed turbulent flow, is achieved.
PACS: 47.27.Eq, 02.50.Ga, 05.40.+j
KEYWORDS: fully developed turbulence, random multiplicative branching process, Markov process.
CORRESPONDING AUTHOR:
Martin Greiner
Max-Planck-Institut für Physik komplexer Systeme
Nöthnitzer Str. 38
D–01187 Dresden, Germany
tel.: 49-351-871-1218
fax: 49-351-871-1199
email: greiner @ mpipks-dresden.mpg.de
Phenomenological modelling of the energy cascade in fully developed turbulence has a long tradition. As representatives of the multifractal approach random multiplicative branching processes mimic the redistribution of energy flux from the large, integral length scale $`L`$ down to the small, dissipative scale $`\eta `$ and focus on the scaling aspect of the surrogate energy dissipation field, extracted from measured velocity time series. In a particular simple model version, which is only one-dimensional and binary discrete, a domain of length $`r_j=L/2^j`$ is split into two subdomains of equal length $`r_{j+1}`$ and the energy flux density $`\epsilon (r_j)`$ of the parent domain is non-uniformly redistributed by assigning a left/right multiplicative weight $`q_l/q_r`$ to the left/right subdomain: $`\epsilon _l(r_{j+1})=q_l\epsilon (r_j)`$, $`\epsilon _r(r_{j+1})=q_r\epsilon (r_j)`$. The multiplicative weights are drawn from a scale-independent symmetric probabilistic splitting function $`p(q_l,q_r)=p(q_r,q_l)`$, are $`\langle q_l\rangle =\langle q_r\rangle =1`$ on average and are completely uncorrelated to multiplicative weights from all other branchings differing in scale and position. At an intermediate length scale $`\eta \ll r_j\ll L`$, corresponding to $`j`$ cascade steps, the local bare field density $`\epsilon (r_j)=q_1q_2\cdots q_j`$ is a product of $`j`$ independent multiplicative weights, where $`\epsilon (r_0)=1`$ has been chosen for simplicity. Upon taking the logarithm, the product turns into a summation over independent and identically distributed random variables:
$$\mathrm{ln}\epsilon (r_j)=\mathrm{ln}q_1+\mathrm{ln}q_2+\cdots +\mathrm{ln}q_j.$$
(1)
Introducing $`y_j=y(l_j)=\mathrm{ln}\epsilon (r_j)-\langle \mathrm{ln}\epsilon (r_j)\rangle `$ as a new field variable and $`l_j=\mathrm{ln}(L/r_j)=j\mathrm{ln}2`$ as a logarithmic scale, it is straightforward to derive
$$\frac{\mathrm{\Delta }y}{\mathrm{\Delta }l}=\frac{y_{j+1}-y_j}{l_{j+1}-l_j}=\frac{1}{\mathrm{ln}2}\left(\mathrm{ln}q_{j+1}-\langle \mathrm{ln}q\rangle \right)=\sqrt{\frac{\langle \mathrm{ln}^2q\rangle -\langle \mathrm{ln}q\rangle ^2}{2\mathrm{ln}2}}\xi _{j+1}$$
(2)
from (1) for parent/daughter field variables. The last step, leading to the stationary Gaussian-white noise “random force” $`\xi _j`$ with normalisation $`\langle \xi _j\xi _j^{}\rangle =\frac{2}{\mathrm{ln}2}\delta _{jj^{}}`$, only holds for a splitting function $`p(q_l,q_r)=p(q_l)p(q_r)`$ of log-normal type, where
$$p(q)=\frac{1}{\sqrt{2\pi }\sigma q}\mathrm{exp}\left(\frac{1}{2\sigma ^2}\left(\mathrm{ln}q+\frac{\sigma ^2}{2}\right)^2\right).$$
(3)
The Langevin equation (2) represents a discrete Markov process, evolving from large to small scales with zero drift term $`D^{(1)}=0`$ and constant diffusion term $`D^{(2)}=(\langle \mathrm{ln}^2q\rangle -\langle \mathrm{ln}q\rangle ^2)/(2\mathrm{ln}2)`$.
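As a concrete illustration, the caterpillar picture is easy to reproduce numerically. The following minimal sketch (our own construction, not part of the original work; all variable names are ours) draws log-normal weights with $`\langle q\rangle =1`$ and $`\sigma =0.42`$ and recovers the constant diffusion term $`D^{(2)}=\sigma ^2/(2\mathrm{ln}2)\approx 0.127`$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, J = 0.42, 10   # splitting-function width and number of binary steps

def bare_cascade(J):
    """One realisation of the bare field eps(r_J) on the 2**J finest bins."""
    eps = np.ones(1)
    for _ in range(J):
        # log-normal weights of Eq. (3); mean -sigma^2/2 enforces <q> = 1
        q = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=2 * eps.size)
        eps = np.repeat(eps, 2) * q        # each parent feeds two daughters
    return eps

# caterpillar check: var(ln q)/(2 ln 2) should give D^(2) ~ 0.127
lnq = np.log(rng.lognormal(-sigma**2 / 2, sigma, 100_000))
print(np.var(lnq) / (2 * np.log(2)))
```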
In several ways this thinking seemingly contradicts the results deduced from a large-Reynolds number helium jet experiment : from the coarse-grained one-dimensional surrogate energy dissipation field
$$\overline{\epsilon }(x,r)=\frac{1}{r}\int _{x-\frac{r}{2}}^{x+\frac{r}{2}}\epsilon (x^{},\eta )dx^{},$$
(4)
entering into the transformed variable
$$\overline{y}(x,l)=\mathrm{ln}\overline{\epsilon }(x,r)-\langle \mathrm{ln}\overline{\epsilon }(x,r)\rangle ,$$
(5)
the Kramers-Moyal coefficients
$$D^{(n)}(\overline{y}(l))=\underset{\mathrm{\Delta }l\to 0}{lim}\frac{1}{n!\mathrm{\Delta }l}\int \left[\overline{y}(l+\mathrm{\Delta }l)-\overline{y}(l)\right]^np(\overline{y}(l+\mathrm{\Delta }l)|\overline{y}(l))d\overline{y}(l+\mathrm{\Delta }l)$$
(6)
have been determined to yield
$`D^{(1)}(\overline{y})`$ $`=`$ $`-\gamma \overline{y}`$ $`(\gamma \simeq 0.21)`$ (7)
$`D^{(2)}(\overline{y})`$ $`=`$ $`D`$ $`(D\simeq 0.03)`$ (8)
$`D^{(n\ge 3)}(\overline{y})`$ $`\approx `$ $`0.`$ (9)
This outcome suggests that the energy cascade in fully developed turbulence can be described by a scale-continuous Markovian Ornstein-Uhlenbeck process, which differs from the discrete random multiplicative branching picture in two ways: a scale-continuous evolution with a linear drift and constant diffusion term, as opposed to a scale-discrete evolution with zero drift and constant diffusion term. – However, the comparison between these two apparently contradictory pictures is not as straightforward as anticipated. So far it has been like comparing a caterpillar with a butterfly. Now, we will initiate the metamorphosis in two steps.
Step one has to do with the distinction between a bare and a dressed field . Since fully developed turbulence is a three-dimensional process, the redistribution of energy flux from larger to smaller scales should be conserved in three dimensions, as long as the dissipative scale $`\eta `$ is not reached. Of course this no longer holds once the process is observed in only one dimension, which is what one-point time-series measurements of one component of the velocity field do, and from which the one-dimensional surrogate energy dissipation field is extracted. For the simple binary discrete random multiplicative branching process this implies that the splitting function more or less factorises, $`p(q_L,q_R)\approx p(q_L)p(q_R)`$, where $`p(q)`$ should be a positively skewed distribution limited to the support $`0\le q\le q_{\mathrm{max}}`$ with $`q_{\mathrm{max}}\simeq 2^3`$ . In this respect, the log-normal distribution (3) with the realistic parameter $`\sigma =0.42`$, reproducing the observed lowest-order scaling exponents $`\langle \epsilon ^n(r_j)\rangle \sim (L/r_j)^{\tau (n)}`$ and the observed multiplier distributions, represents a fair candidate. With a factorised splitting function, where $`q_L+q_R=2`$ only holds on average, we have to distinguish between the bare field (1), which is evolved from the large scale $`L`$ down to the intermediate scale $`r_j`$, and the dressed field
$$\overline{\epsilon }(r_j)=\epsilon (r_j)(1+\mathrm{\Delta }(r_j)),$$
(10)
which has been evolved from $`L`$ all the way down to the dissipative scale $`\eta `$ and then again resummed up to the intermediate scale $`r_j`$. The two fields differ by a small-scale resummation factor $`(1+\mathrm{\Delta }(r_j))`$, which is equal to one only on average. As the experimental analysis (4) corresponds to the dressed field, we also need to employ the dressed field for the random multiplicative branching process, in order to make a fairer comparison between model and data results.
For a truly fair comparison between model and data results we have to call for an additional, second step: since the binary discrete random multiplicative branching process is organised hierarchically in one-dimensional space, the underlying ultrametric does not allow for spatially homogeneous observables right away. This has been noted only recently and a simple scheme has been suggested, breaking the ultrametric hierarchy and restoring spatial homogeneity. It builds a long chain of independent cascade field realisations, each of length $`L`$, randomly places the observational interval of length $`r`$ (with $`\eta \le r\le L`$) within this chain, and samples over these random placings.
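The two steps are also simple to realise in a simulation. A rough sketch (ours, with `N_L` scaled down from the $`10^6`$ used below so the example runs quickly) builds the homogeneous chain, dresses the field by averaging $`\eta `$-scale bins as in Eq. (4), and centres the logarithm as in Eq. (5):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, J, N_L = 0.42, 10, 1000

def bare_cascade(J):
    eps = np.ones(1)
    for _ in range(J):
        q = rng.lognormal(-sigma**2 / 2, sigma, 2 * eps.size)
        eps = np.repeat(eps, 2) * q
    return eps

# step two: a long chain of independent realisations restores homogeneity
chain = np.concatenate([bare_cascade(J) for _ in range(N_L)])

def dressed_y(n_bins, n_samples=20_000):
    """Centred ln of the dressed field, Eqs. (4)-(5), for intervals of
    n_bins eta-cells placed at random positions within the chain."""
    starts = rng.integers(0, chain.size - n_bins, n_samples)
    eps_bar = np.array([chain[s:s + n_bins].mean() for s in starts])
    y = np.log(eps_bar)
    return y - y.mean()
```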
These two steps have been decisive for the correct interpretation of the observed multiplier phenomenology : the small-scale resummation of step one explains the scale-independent multiplier distributions as fixed-point distributions, and step two is responsible for producing the correct correlations between multipliers. – Since the Kramers-Moyal coefficients (6) can be understood as moments of logarithmic multipliers, defined for an infinitesimal scale step, we may already speculate here that steps one and two are also responsible for turning the caterpillar (2) into the butterfly (7).
We will now test this speculation by numerical simulation of the binary discrete random multiplicative branching process with the factorised splitting function (3) of log-normal type ($`\sigma =0.42`$). A chain of $`N_L=10^6`$ independent cascade realisations is constructed, where each realisation has been obtained after $`J=10`$ binary cascade steps; consequently the length of the total chain amounts to $`10^6L=1.024\times 10^9\eta `$.
At first we test the Markov property in general. The conditional probability distributions $`p(\overline{y}(l_2)|\overline{y}(l_1))`$ with centred intervals $`l_2>l_1`$ are sampled over $`e^{l_1}N_L`$ random placings within the long chain of cascade realisations. These conditional probability distributions are found to fulfill the Chapman-Kolmogorov equation $`p(\overline{y}(l_3)|\overline{y}(l_1))=\int p(\overline{y}(l_3)|\overline{y}(l_2))p(\overline{y}(l_2)|\overline{y}(l_1))d\overline{y}(l_2)`$ almost perfectly over the scale range $`\eta \le r\le L`$. This is a necessary and almost sufficient validation that this branching process appears Markovian .
Without any loss of generality the Kramers-Moyal coefficients (6) are only calculated at binary scales $`r_j`$, but again sampled over randomly chosen $`x`$-values within the long cascade chain. Convergence is tested by letting the positive integer $`m\to 1`$ in the centred daughter interval of length $`r_j-\mathrm{\Delta }r_j=r_j-2m\eta `$. Good convergence is achieved for $`0\le j\le 5`$, i.e. the upper part of the cascade inertial range, whereas for $`6\le j\le J=10`$ convergence has been found to be unsatisfactory, since $`\eta \ll r_j`$ is no longer fulfilled. Consequently, in the following we only show results for the former scale range. – The first Kramers-Moyal coefficient $`D^{(1)}(\overline{y},l)`$ is illustrated in Fig. 1a at binary scales $`j=1,3,5`$. It is no longer constant zero; it is now linear in $`\overline{y}(l_j)`$. Fitting the parametrisation $`D^{(1)}(\overline{y},l)=\gamma _0(l)-\gamma (l)\overline{y}`$ yields $`\gamma _0(l)=0`$ within simulation error bars and a positive, slightly scale-dependent drift coefficient $`\gamma (l)`$ with values listed in Tab. 1a. The latter decreases from a value of about 0.19 at $`r_1=2^9\eta `$ to a value of about 0.08 at $`r_5=2^5\eta `$ and agrees with the experimentally deduced value (7) within a factor of 1.1–2.5. The second Kramers-Moyal coefficient $`D^{(2)}(\overline{y},l)`$, depicted in Fig. 1b, turns out to be almost constant and almost scale-independent; fitted values of the parametrisation $`D^{(2)}(\overline{y},l)=D(l)+d_1(l)\overline{y}`$ can be found in Tab. 1a. Compared with $`D=0.127`$ of the caterpillar thinking (2) it is reduced by about a factor of 3, but it is still about a factor of $`1.4`$ above the experimental result (8). Also in qualitative agreement with the experimental results, higher-order Kramers-Moyal coefficients are close to zero: $`D^{(3)}(\overline{y},l)`$ and $`D^{(4)}(\overline{y},l)`$ are of order $`10^{-3}`$ and $`10^{-4}`$, respectively. Here we might invoke Pawula's theorem to conclude that, after taking steps one and two into account, the binary discrete random multiplicative branching process of log-normal type appears as a scale-continuous Markovian Ornstein-Uhlenbeck process with Kramers-Moyal coefficients
$`D^{(1)}(\overline{y})`$ $`=`$ $`-\gamma (l)\overline{y}`$ $`(0.2\ge \gamma (l)\ge 0.1)`$ (11)
$`D^{(2)}(\overline{y})`$ $`=`$ $`D`$ $`(D\simeq 0.04)`$ (12)
$`D^{(n\ge 3)}(\overline{y})`$ $`\approx `$ $`0.`$ (13)
This result is in nice qualitative agreement with the experimental observation (7).
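For readers who wish to repeat such an analysis, the finite-step moments behind Eq. (6) can be estimated along the following lines (a sketch of our own, not the authors' analysis code; `y_parent` and `y_daughter` must be the dressed fields of two nested, co-centred intervals sampled at the same random placings):

```python
import numpy as np

def km_coefficients(y_parent, y_daughter, dl, n_bins=15, min_count=50):
    """Binned estimates of D^(1)(y) and D^(2)(y) for one scale step dl."""
    edges = np.quantile(y_parent, np.linspace(0, 1, n_bins + 1))
    centers, D1, D2 = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (y_parent >= lo) & (y_parent < hi)
        if sel.sum() < min_count:
            continue
        dy = y_daughter[sel] - y_parent[sel]
        centers.append(y_parent[sel].mean())
        D1.append(dy.mean() / dl)              # n = 1 moment of Eq. (6)
        D2.append((dy**2).mean() / (2 * dl))   # n = 2 moment of Eq. (6)
    return np.array(centers), np.array(D1), np.array(D2)
```

A linear fit of the returned `D1` against `centers` then gives the drift coefficient discussed above.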
The result (11) has been obtained by taking both steps, one and two, of the metamorphosis into account. Step one, with its small-scale resummation, is definitely responsible for the transition from the scale-discrete evolution of the bare field to the scale-continuous Markov description of the dressed field. What, then, are the implications of step two, i.e. the breaking of the ultrametric cascade hierarchy to restore spatial homogeneity? In order to clarify this point, we restrict the sampling of the Kramers-Moyal coefficients to the hierarchical positions $`x_m=(m+0.5)r_j`$ with integer $`0\le m<2^jN_L`$ within the long chain of cascade configurations. For these positions, the integration interval of length $`r_j`$, entering into (4), perfectly matches an intermediate interval of the bare cascade evolution. Results for the first and second Kramers-Moyal coefficients are listed in Tab. 1b. The first coefficient, which we now denote with a tilde, is again found to be of the form $`\stackrel{~}{D}^{(1)}(\overline{y},l)=-\stackrel{~}{\gamma }(l)\overline{y}`$. Note, however, that the drift coefficient $`\stackrel{~}{\gamma }`$ is negative and that its modulus is about a factor of 3–7 smaller than $`\gamma `$. This demonstrates that small-scale resummation alone already introduces a weak linear drift term, but that breaking the ultrametric hierarchy is essential to change its sign and to bring it to the correct order of magnitude. The second Kramers-Moyal coefficient $`\stackrel{~}{D}^{(2)}(\overline{y},l)=\stackrel{~}{D}(l)+\stackrel{~}{d}_1(l)\overline{y}`$ is also affected by leaving out step two, but only weakly: it again shows a small scale-dependence, is almost constant at a given scale and, compared with $`D(l)`$, is reduced by about a factor of 1.2–1.7. Note also that it is mainly the small-scale resummation that drives the diffusion coefficient away from the caterpillar value $`D=0.127`$ of (2).
We conclude: small-scale resummation and breaking of the ultrametric hierarchy initiate the Markovian metamorphosis of a discrete random multiplicative branching process, turning the caterpillar, a scale-discrete Gaussian white-noise evolution, into a butterfly, an effective scale-continuous Ornstein-Uhlenbeck description, the latter being in qualitative agreement with the experimental observations. At least on a qualitative level, there is no conflict between the experimentally observed energy cascade in fully developed turbulence and random multiplicative branching processes; this statement is further supported by recent work on multiplier distributions . – Several points have to be considered in order to achieve an even better quantitative agreement between such model results and experimental observations: the sensitivity to the generator of random multiplicative branching processes, i.e. to scale-discrete or scale-continuous implementations and to the choice of splitting function, as well as to finite-size, large-scale and dissipative effects; work in these directions is in progress . Data for larger Reynolds numbers would also be most welcome.
# Bose-Einstein condensation in a two-dimensional trap
## I introduction
The recent experimental realization of Bose-Einstein condensation (BEC) of alkali atoms in magnetic traps has generated much interest and activity in theoretical and experimental physics. An interesting, but less addressed, question is whether such a phase transition due to global coherence exists in two-dimensional (2D) space. Even though it has long been known that BEC of a uniform (untrapped) Bose gas cannot occur in 2D at finite temperature, since thermal fluctuations destabilize the condensate , it has been suggested that spatially varying potentials which break the uniform distribution may create BEC in 2D inhomogeneous systems . In the presence of a trapping potential, the effects of thermal fluctuations are strongly quenched owing to the different behavior of the density of states.
At zero temperature, the 2D condensation can be described by the 2D Gross-Pitaevskii equation (GPE), a mean-field approximation for the macroscopic wave function of weakly interacting bosons. The GPE is obtained from mean-field many-body quantum-statistical theory and has proven to describe the condensate state of dilute Bose systems satisfactorily. Although no direct experimental observation of 2D BEC has been reported yet, recent theoretical work suggests the possibility of 2D BEC under trapped conditions. However, most theoretical approaches to 2D BEC have been rather indirect and have sometimes faced several difficulties.
Tempere and Devreese studied harmonically interacting bosons in 2D. They calculated the critical temperature from a grand partition function to show the occurrence of 2D BEC, but without any knowledge of the 2D condensate wave function for the density profile. Jackson et al. studied the 2D vortex state by direct substitution of the 3D interaction strength $`g_{3D}=4\pi \hbar ^2a/m`$ ($`a`$ is the s-wave scattering length in 3D and $`m`$ is the atomic mass) into the 2D GPE, which resulted in a dimensional inconsistency. Bayindir et al. solved the 2D GPE using the two-fluid model for the density estimation and obtained the temperature dependence of the internal energy and condensate fraction, but there the 2D interaction strength was still treated as a free variable. Haugset et al. calculated the density profile and ground-state energy for a finite number of 2D Bose particles by diagonalizing the Hamiltonian numerically, where a modified 2D interaction strength was used to resolve the dimensional inconsistency. An analytical approach to interacting bosons in a 2D trap via the 2D nonlinear GPE was attempted by Gonzalez et al. , who studied the ground-state energy density by applying Haugset et al.'s modified form of the 2D interaction strength to the 2D GPE.
Previous 2D studies are based on a size-independent concept: if the atomic motion along any one axis is frozen completely into the ground state, the system is considered two-dimensional. Moreover, most of these studies treated the 2D trap as an extremely anisotropic 3D trap, still far from a pure 2D system. Here, by contrast, we focus on a 2D system confined by a 2D harmonic trap in the $`(x,y)`$-directions and by ideal rigid walls in the $`z`$-direction . In this system, the 2D interaction strength has a logarithmic form, which will be derived using 2D scattering theory in the next section. Quasi-2D atomic systems, in which the atoms follow 2D kinematics with 3D interactions , have recently been realized for spin-polarized atomic hydrogen adsorbed on the surface of liquid helium, and quasi-condensation has been indirectly demonstrated . Although no pure 2D BEC has been studied experimentally yet, the 2D GPE remains valid even in quasi-2D at low temperatures . Moreover, it will provide a guide to a better understanding of the 2D statistical properties of the quasi-2D system.
In this paper, we will solve the 2D GPE numerically and show the existence of the 2D BEC directly by obtaining the stable condensate wave functions and the 2D ground state energy density of the trapped Bose atoms. Possible vortex states in 2D are also discussed. The results of the 2D Bose condensation are then compared with those of the well-known 3D cases.
## II two-dimensional scattering theory
The relation between the interaction strength, $`g`$, and the s-wave scattering length, $`a`$, is well known in 3D: $`g=4\pi \hbar ^2a/m`$. However, the 3D result is not applicable to the 2D GPE: substitution of the 3D relation into the 2D GPE results in a dimensional inconsistency. As a first step toward obtaining the correct 2D GPE, the interaction strength in 2D is derived from the following 2D scattering theory . We begin with the 2D time-independent Schrödinger equation with the interaction $`U(\rho )`$, given by
$$(\nabla _\rho ^2+k^2)\psi _k(\rho )=U(\rho )\psi _k(\rho ),$$
(1)
where $`k^2=2\mu _mE/\hbar ^2`$ and $`\mu _m`$ is the reduced mass of the two particles ($`m/2`$). One can find the general solution of Eq. (1) with the help of the 2D Green's function
$$(\nabla _\rho ^2+k^2)G_k(\rho ,\rho ^{})=\delta ^2(\rho -\rho ^{}).$$
(2)
The 2D Green's function is given by a Hankel function . Therefore the 2D wave function can be expressed as
$`\psi _k(\rho )`$ $`=`$ $`e^{i𝐤\rho }+{\displaystyle \int d^2\rho ^{}G_k(\rho ,\rho ^{})U(\rho ^{})\psi (\rho ^{})}`$ (3)
$`=`$ $`e^{i𝐤\rho }-{\displaystyle \frac{i}{4}}{\displaystyle \int d^2\rho ^{}H_0(k|\rho -\rho ^{}|)U(\rho ^{})\psi (\rho ^{})}`$ (4)
$`\approx `$ $`e^{i𝐤\rho }-{\displaystyle \frac{i}{2}}{\displaystyle \frac{e^{i(k\rho -\frac{\pi }{4})}}{\sqrt{2\pi k\rho }}}{\displaystyle \int d^2\rho ^{}U(\rho ^{})e^{i(𝐤-𝐤^{})\rho ^{}}}.`$ (5)
Here $`H_0`$ is the Hankel function of the first kind of order zero, defined by $`H_0(x)=J_0(x)+iN_0(x)`$, where $`J_0`$ and $`N_0`$ are the Bessel and Neumann functions of order zero. For large argument it has the asymptotic behavior $`H_l(x)\approx \sqrt{2/\pi x}e^{i(x-l\pi /2-\pi /4)}`$, where $`l`$ is an integer. The Born approximation has been applied in the last step, and $`𝐤^{}=k\widehat{\rho }`$.
In 2D, for large $`\rho `$, the scattered wave function should have the form
$$\psi _k(\rho )\approx e^{i𝐤\rho }+F_k\frac{e^{i(k\rho -\frac{\pi }{4})}}{\sqrt{\rho }},$$
(6)
where $`F_k`$ is the first-order Born scattering amplitude in 2D, which has dimension $`(\mathrm{length})^{1/2}`$. This asymptotic form should be a solution of the time-independent Schrödinger equation (1) and should coincide with the $`\rho \to \mathrm{\infty }`$ limit of the Green's-function solution. The scattering cross-section in 2D now has the dimension of length, given by
$$\frac{d\sigma _{2D}}{d\theta }=\frac{|𝐣_{sc}|\rho }{|𝐣_{inc}|}=|F_k|^2,$$
(7)
where $`𝐣_{inc}`$ and $`𝐣_{sc}`$ are the incident and scattered flux densities. We note that the total scattering cross-section for small $`k`$ should be $`2\pi |F_k|^2`$ in 2D instead of $`4\pi |f_{3D}|^2`$ in 3D. If one assumes a delta-function-type interaction $`U(\rho )=(mg/\hbar ^2)\delta ^2(\rho )`$, the 2D scattering amplitude $`F_k`$ is obtained in complex form from Eqs. (5) and (6) as
$$F_k=-\frac{i}{2}\frac{1}{\sqrt{2\pi k}}\frac{mg}{\hbar ^2}.$$
(8)
The next step is to find the relation between the scattering amplitude and the scattering length in 2D. The well-known 3D relation $`f_{3D}=-a`$, where $`f_{3D}`$ is the 3D scattering amplitude and $`a`$ the s-wave scattering length in 3D, is not directly applicable in 2D. In general, the s-wave scattering length in 2D, $`b`$, differs from that in 3D, $`a`$, and the 2D s-wave scattering length is not yet known from experiment.
The relation is obtained from a partial-wave analysis of collision theory. The incident wave can be written as an expansion of the plane wave in cylindrical coordinates,
$$e^{ik\rho \mathrm{cos}\theta }=\frac{1}{2}\underset{l=-\mathrm{\infty }}{\overset{\mathrm{\infty }}{\sum }}i^l\left[H_l(k\rho )+H_l^{\ast }(k\rho )\right]e^{il\theta },$$
(9)
where $`l`$ is an integer. Away from the range of the potential, the scattered wave acquires a phase shift $`\delta _l`$, and the scattering matrix is $`S_l=e^{2i\delta _l}`$, since $`|S_l|=1`$ for elastic scattering. Therefore, for large $`\rho `$, the solution of Eq. (1) can be written as
$`\psi _k(\rho )`$ $`=`$ $`e^{i𝐤\rho }+{\displaystyle \frac{1}{2}}{\displaystyle \underset{l=-\mathrm{\infty }}{\overset{\mathrm{\infty }}{\sum }}}i^l(S_l-1)H_l(k\rho )e^{il\theta }`$ (10)
$`\approx `$ $`e^{i𝐤\rho }+{\displaystyle \frac{1}{\sqrt{2\pi k\rho }}}{\displaystyle \underset{l=-\mathrm{\infty }}{\overset{\mathrm{\infty }}{\sum }}}i^l(e^{2i\delta _l}-1)e^{i(k\rho -\frac{l\pi }{2}-\frac{\pi }{4})}e^{il\theta }.`$ (11)
The scattering length is defined as the distance at which the two-body wave function vanishes at zero energy. The phase shift due to potential scattering is then a function of the scattering length; it is well known that $`\delta _{0,3D}=-ka`$ in 3D, whereas
$$\delta _0=\frac{\pi }{2}\frac{1}{\mathrm{ln}kb}\left[1+𝒪\left(\frac{1}{\mathrm{ln}kb}\right)\right],$$
(12)
in 2D . Note that this is valid only in the low-energy scattering limit, $`kb\ll 1`$. It can be easily checked as follows. The scattering length is just the intercept of the radial wave function satisfying the boundary condition $`\psi _k(b)=0`$. Therefore, for $`l=0`$, the wave function at large distance and small $`k`$ is expressed as
$`\psi _k(\rho )`$ $`=`$ $`J_0(k\rho )-{\displaystyle \frac{J_0(kb)}{N_0(kb)}}N_0(k\rho )`$ (13)
$`\approx `$ $`{\displaystyle \frac{2}{\sqrt{2\pi k\rho }}}\left[\mathrm{cos}\left(k\rho -{\displaystyle \frac{\pi }{4}}\right)-{\displaystyle \frac{\pi }{2\mathrm{ln}kb}}\mathrm{sin}\left(k\rho -{\displaystyle \frac{\pi }{4}}\right)\right]`$ (14)
$`=`$ $`{\displaystyle \frac{2}{\sqrt{2\pi k\rho }}}\mathrm{cos}\left(k\rho -{\displaystyle \frac{\pi }{4}}+\delta _0\right).`$ (15)
The necessary condition for the validity of the Born approximation is that the phase shift $`\delta _0`$ be very small for small $`k`$, which is easily confirmed from Eq. (12). Note that, unlike the 3D case where $`a`$ can be negative, we do not consider a negative scattering length here, since the centrifugal potential of the lowest partial wave is negative in 2D, so that the extrapolated local wave function always cuts the radial axis above the origin .
Comparing Eq. (11) with Eq. (6), the scattering amplitude can be written as a series expansion, of which only the $`l=0`$ term contributes at low energy in the 2D system. Therefore, the 2D scattering amplitude becomes
$`F_k`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2\pi k}}}{\displaystyle \underset{l=-\mathrm{\infty }}{\overset{\mathrm{\infty }}{\sum }}}\left(e^{2i\delta _l}-1\right)e^{il\theta }`$ (16)
$`=`$ $`{\displaystyle \frac{2i\delta _0}{\sqrt{2\pi k}}}(1+i\delta _0+\mathrm{\cdots })`$ (17)
$`\approx `$ $`{\displaystyle \frac{i\pi }{\sqrt{2\pi k}}}{\displaystyle \frac{1}{\mathrm{ln}kb}}.`$ (18)
Finally, we obtain the 2D interaction strength $`g`$ from Eqs. (8) and (18) as
$$g=-\frac{2\pi \hbar ^2}{m}\frac{1}{\mathrm{ln}kb}.$$
(19)
In general the 2D interaction strength is given as
$$g=\frac{4\pi \hbar ^2\xi }{m}.$$
(20)
Here $`\xi `$ is a dimensionless atomic parameter given by
$$\xi =-\frac{1}{2}\frac{1}{\mathrm{ln}kb},$$
(21)
where $`\xi `$ is positive since $`kb\ll 1`$. Note that the logarithmic dependence of the interaction term suggests similar condensate characteristics in 2D for most bosonic alkali atoms with positive scattering length.
In addition to the 2D scattering length $`b`$, we do not have any reliable value of the wave-vector $`k`$ in 2D. The approximation that the wave-vector $`k`$ be the inverse of the largest distance available in the perpendicular direction may not be correct. However, the experimental results provide the value of the product $`kb`$, as described in the next section. Note that in an extremely anisotropic 3D system the dimensionless atomic parameter is given as $`\xi _e=(1/\sqrt{2\pi })a/a_z`$, where $`a_z=\sqrt{\hbar /m\omega _z}`$, and moreover $`\xi _e<\xi `$ in general.
## III the condensate state and its energy state
Now the 2D condensate wave function of trapped dilute Bose atoms of mass $`m`$ can be obtained from the 2D GPE
$$\left[-\frac{\hbar ^2}{2m}\nabla _\rho ^2+V_{ext}(\rho )+Ng\psi ^2(\rho )\right]\psi (\rho )=\mu \psi (\rho ),$$
(22)
where $`\int d^2\rho \psi ^2(\rho )=1`$, and $`N`$ is the number of condensate particles. Here, we assume a 2D isotropic harmonic trap $`V_{ext}(\rho )=\frac{1}{2}m\omega ^2\rho ^2`$, where $`\omega `$ is the trap frequency. Then we can simplify Eq. (22) for numerical calculation by introducing dimensionless variables ($`\rho \to a_{ho}\rho `$, $`\mu \to \hbar \omega \mu `$, and $`\psi \to a_{ho}^{-1}\psi `$):
$$\left[-\nabla _\rho ^2+\rho ^2-2\mu +8\pi \xi N\psi ^2(\rho )\right]\psi (\rho )=0,$$
(23)
where $`a_{ho}=\sqrt{\hbar /m\omega }`$ is the 2D harmonic oscillator length, and $`\mu `$ is the 2D chemical potential, which is obtained from the normalization condition. Note that the product of the incident wave-vector and the scattering length, $`kb`$, is the only atomic parameter that enters the condensate states.
In the numerical calculation of the GPE, we need the value of the atomic parameter $`\xi `$ in Eq. (20), which is a function of $`k`$ and $`b`$. Although these are not known separately, we can deduce their product $`kb`$, which is what appears in the 2D GPE, from experimental data. Different Bose atoms certainly have different 2D scattering lengths, but the logarithmic dependence makes this difference less important. A recent experiment on hydrogen adsorbed on a helium surface by Safonov et al. reported $`\xi =1/7`$, which corresponds to $`kb\simeq 3\times 10^{-2}`$. Although their system actually satisfies a quasi-2D condition, one can still take this value in the 2D GPE as an effective interaction potential. Since the 2D GPE is known to be valid even in quasi-2D at low temperatures , and moreover hints at the 2D atomic characteristics of the quasi-2D system, it is useful to quantify the criterion for quasi-2D, i.e. the effective thickness below which a 3D system exhibits 2D statistical properties. Criteria for the effective thickness of the trapped 2D Bose system may be obtained .
The procedure for solving Eq. (23) is similar to that in 3D . We have plotted the 2D condensate wave functions versus $`\rho `$ for several values of the atom number $`N`$ in Fig. 1. They correspond to the $`z=0`$ cut of the anisotropic contour plot of the ground states in 3D. The spatial distribution of the condensate in 2D is much broader than in 3D, and the condensate wave functions approach the parabolic limit more rapidly as the number of atoms increases. In other words, the effect of the atomic interaction potential is more prominent in 2D.
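As an illustration of how Eq. (23) can be solved, a simple imaginary-time relaxation on a radial grid might look as follows (a minimal sketch of our own, not the authors' code; grid size, time step and iteration count are illustrative assumptions):

```python
import numpy as np

xi, N = 1.0 / 7.0, 1000
nr, rmax, dt = 400, 12.0, 1e-4
r = np.linspace(1e-3, rmax, nr)
dr = r[1] - r[0]

def normalize(psi):
    return psi / np.sqrt(2 * np.pi * np.trapz(psi**2 * r, r))

psi = normalize(np.exp(-r**2 / 2))            # Gaussian initial guess
for _ in range(20_000):
    lap = np.gradient(r * np.gradient(psi, dr), dr) / r   # 2D radial Laplacian
    psi = psi - dt * (-lap + (r**2 + 8 * np.pi * xi * N * psi**2) * psi)
    psi[-1] = 0.0                             # hard wall at the grid edge
    psi = normalize(psi)                      # renormalisation replaces mu

# chemical potential from Eq. (23): 2*mu = 2*pi * int [ ... ] r dr
grad = np.gradient(psi, dr)
mu = np.pi * np.trapz((grad**2 + (r**2 + 8 * np.pi * xi * N * psi**2) * psi**2) * r, r)
print(mu)   # should approach the Thomas-Fermi value (4 xi N)**0.5 ~ 23.9
```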
In the non-interacting case, the solution is the Gaussian $`\psi (\rho )=\pi ^{-1/2}e^{-\rho ^2/2}`$. In the strongly repulsive (Thomas-Fermi) limit, it has the parabolic solution $`\psi ^2(\rho )=(2\mu -\rho ^2)/8\pi \xi N`$. The overall shape of the condensate wave functions is similar to that in 3D, but the parabolic limit is approached more quickly with the number of atoms . The peak of the density profile therefore decreases much faster in 2D. The 2D healing length, which balances the quantum pressure against the interaction energy of the condensate, also differs from that of 3D. See TABLE 1 for a detailed comparison.
The ground-state energy of the 2D condensate bosons can be calculated in a similar way. With the dimensionless variables defined before, we obtain the dimensionless energy density
$`\epsilon `$ $`=`$ $`{\displaystyle \int \left[\frac{1}{2}|\nabla \psi |^2+\frac{1}{2}\rho ^2|\psi |^2+2\pi \xi N|\psi |^4\right]d^2\rho }.`$ (24)
Using a Gaussian trial function, we easily find that the ground-state energy density satisfies
$`\epsilon =\sqrt{1+2\xi N}.`$ (25)
The ground-state energy per particle from Eq. (25) is plotted in Fig. 2 and compared with the well-known 3D results for $`{}^{87}\mathrm{Rb}`$. For a given $`N`$, the 2D system is more energetic, and it becomes less stable than the 3D case as $`N`$ is increased. We have summarized the fundamental differences between our 2D results and the well-known 3D ones in TABLE 1.
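For completeness, the minimization behind Eq. (25) can be spelled out (our reconstruction, inserting the normalized trial function $`\psi _w(\rho )=(\sqrt{\pi }w)^{-1}e^{-\rho ^2/2w^2}`$ into Eq. (24)):

$$\epsilon (w)=\frac{1+2\xi N}{2w^2}+\frac{w^2}{2},\qquad \frac{d\epsilon }{dw}=0\Rightarrow w^2=\sqrt{1+2\xi N},$$

so that $`\epsilon _{\mathrm{min}}=\sqrt{1+2\xi N}`$, reproducing Eq. (25).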
## IV the vortex state
Now let us consider the vortex states of the 2D system. Vortices underlie the hydrodynamic theory connected to superfluidity. The 2D system can rotate about the center of the 2D trap, giving quantized circulation of the atomic motion. A vortex state with winding number $`\kappa `$ is written as $`\psi (\rho )=\varphi (\rho )e^{iS(\rho )}`$, where $`\varphi (\rho )=\sqrt{n(\rho )}`$ is the modulus. The phase is chosen as $`S=\kappa \theta `$, where $`\kappa `$ is an integer; the angular momentum quantum number $`\kappa `$ thus labels the quantum winding of the 2D vortex state, and one finds vortex states with tangential velocity $`v=\kappa \hbar /m\rho `$. As a result of the quantum circulation, the angular momentum of the system with respect to the $`\rho =0`$ axis is $`L=N\kappa \hbar `$.
Adding the vortex term $`\kappa ^2/\rho ^2`$, Eq. (23) is directly converted into the vortex-state equation
$$\left[-\nabla _\rho ^2+\frac{\kappa ^2}{\rho ^2}+\rho ^2-2\mu +8\pi \xi N\varphi ^2(\rho )\right]\varphi (\rho )=0.$$
(26)
The wave function for the $`\kappa =1`$ vortex state is plotted in Fig. 3. The overall shape and the $`N`$ dependence of the vortex-state wave function are similar to those of 3D . As expected, the vortex state also corresponds to the $`z=0`$ cut of the anisotropic contour plot of the vortex state in 3D. We also observe that the 2D vortex has a larger radius than the 3D one.
The critical angular velocity, i.e. the energy difference between the vortex state with $`\kappa =1`$ and the ground state with $`\kappa =0`$ of Eq. (25), is obtained analytically as
$$\epsilon _{\kappa =1}-\epsilon _{\kappa =0}=2\sqrt{1+\frac{\xi N}{2}}-\sqrt{1+2\xi N}.$$
(27)
For $`\epsilon _{\kappa =1}`$, a trial function of the type $`\varphi \sim \rho e^{-\rho ^2/2w^2}`$, with variational width $`w`$, was employed. With increasing particle number, the vortex-excitation energy in 2D becomes much smaller than the 3D one, which indicates that vortices should be produced more easily in 2D than in 3D.
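The variational results (25) and (27) are easy to cross-check numerically. The following sketch (ours; the two trial energies are the Gaussian expressions behind Eqs. (25) and (27), with $`\xi =1/7`$) minimizes both over the width $`w`$ and compares with the closed forms:

```python
import numpy as np
from scipy.optimize import minimize_scalar

xi = 1.0 / 7.0

def eps0(w, N):   # ground state, psi ~ exp(-rho^2 / 2 w^2)
    return (1 + 2 * xi * N) / (2 * w**2) + w**2 / 2

def eps1(w, N):   # kappa = 1 vortex, phi ~ rho * exp(-rho^2 / 2 w^2)
    return (1 + xi * N / 2) / w**2 + w**2

for N in (10, 100, 1000):
    e0 = minimize_scalar(eps0, bounds=(0.1, 20), args=(N,), method="bounded").fun
    e1 = minimize_scalar(eps1, bounds=(0.1, 20), args=(N,), method="bounded").fun
    closed = 2 * np.sqrt(1 + xi * N / 2) - np.sqrt(1 + 2 * xi * N)
    print(N, e1 - e0, closed)   # the two columns should agree, cf. Eq. (27)
```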
Although there are fundamental differences between the 2D and 3D results, as summarized in TABLE 1, it is interesting to compare with the quasi-2D scheme considered as a limiting case of 3D. To compare the 2D trap with an extremely squeezed 3D trap, we take the 3D external potential $`V_{ext}(\rho ,z)=(1/2)m\omega ^2(\rho ^2+\lambda ^2z^2)`$. Here $`\lambda `$ is the anisotropy parameter, much larger than 1 for the quasi-2D system, whereas $`\lambda \to \mathrm{\infty }`$ for the 2D trap. With a fixed number of atoms, we plot the condensate wave functions versus $`\lambda `$ from the 3D GPE in Fig. 4. We find that the 3D wave functions converge very slowly with $`\lambda `$, but do not reach the pure 2D limit at all.
## V discussions
The 2D nonlinear Schrödinger equation (GPE) is not a simple extension of the 3D case but is connected to a quite different 2D collision theory, which requires a different approach. We have developed the theory of the 2D GPE for trapped neutral Bose atoms in a 2D harmonic trap. Applying the quasi-2D experimental value $`\xi =1/7`$ for the effective interaction strength, one can solve the 2D GPE numerically without detailed knowledge of $`k`$ and $`b`$ separately.
We have obtained stable solutions of the 2D GPE, which predict the possible existence of 2D BEC. The ground-state energy per condensate particle has also been calculated, and we find that the 2D system becomes less stable than the 3D case as the number of trapped atoms is increased. We have also obtained the 2D vortex-state wave functions numerically, together with the critical angular velocity.
The 2D BEC transition may look similar to the Kosterlitz-Thouless (KT) vortex-state transition, but the phase transition of 2D BEC does not require any strong interaction between atoms. That is the fundamental difference between the two transitions.
The logarithmic dependence of the interaction potential on the scattering length makes the 2D system less sensitive to the atomic species used for condensation. Hence all alkali atoms with positive scattering length should show similar condensate wave patterns in 2D. The comparison between our 2D results and the well-known 3D ones is summarized in TABLE 1.
The quasi-2D scheme can be interpreted as a limiting case of the 3D one. By varying the trapping field, it is possible to separate the single-particle states of the oscillation into well-defined bands. To compare the 2D trap with an extremely squeezed 3D trap, we take the external potential for 3D as $`V_{ext}(r_{\perp },z)=(1/2)m\omega ^2(r_{\perp }^2+\lambda ^2z^2)`$, where $`\lambda \gg 1`$. The case of a negative scattering length in 2D will be discussed elsewhere.
###### Acknowledgements.
Authors thank G. Shlyapnikov, C. Greene, J. Macek, and H. Nha for helpful discussions. This work was supported by the Creative Research Initiatives of the Korean Ministry of Science and Technology.
E-mail: whjhe@snu.ac.kr
| Dimension | $`D=3`$ | $`D=2`$ |
| --- | --- | --- |
| Scattering amplitude | $`f_{3D}=-a`$ | $`F_{2D}=-i\sqrt{\frac{2\pi }{k}}\xi `$ |
| Interaction strength | $`g_{3D}=\frac{4\pi \hbar ^2a}{m}`$ | $`g_{2D}=\frac{4\pi \hbar ^2\xi }{m}`$ |
| Healing length | $`\xi _{3D}=(8\pi n_{3D}a)^{-1/2}`$ | $`\xi _{2D}=\left(8\pi \xi n_{2D}\right)^{-1/2}`$ |
| Chemical potential (noninteracting) | $`\mu _{3D}=1.5\hbar \omega `$ | $`\mu _{2D}=\hbar \omega `$ |
| Chemical potential (strongly interacting) | $`\mu _{3D}=\frac{1}{2}\left(\frac{15aN}{a_{ho}}\right)^{2/5}\hbar \omega `$ | $`\mu _{2D}=\left(4\xi N\right)^{1/2}\hbar \omega `$ |
| Radius of condensation | $`r_c=\left(\frac{15aN}{a_{ho}}\right)^{1/5}`$ | $`\rho _c=\left(16\xi N\right)^{1/4}`$ |
| Ground-state energy | $`E_{3D}=\frac{5}{7}\mu _{3D}N\propto N^{7/5}`$ | $`E_{2D}=\frac{2}{3}\mu _{2D}N\propto N^{3/2}`$ |
| Lowering of central density | $`|\varphi _{3D}(0)|^2\propto N^{-3/5}`$ | $`|\varphi _{2D}(0)|^2\propto N^{-1/2}`$ |
## I Introduction
The observation of a deficit of muon-neutrinos in atmospheric neutrino experiments has paved the way for a new generation of experiments studying neutrino masses and mixing. The neutrino sector offers exceptional opportunities for studying some of the most fundamental issues in particle physics, such as the origin of masses and CP violation. A major advantage over the quark sector is that neutrino phenomena are free of the complications of strong interactions. A comprehensive knowledge of the neutrino mixing matrix may yield clues to the old puzzle of why there is more than one lepton family.
The SuperKamiokande (SuperK) collaboration has found that the atmospheric neutrino deficit is dependent on $`L/E_\nu `$, with greater suppression of the $`\nu _\mu `$ flux with increasing $`L/E_\nu `$. Moreover, the electron-neutrino rate is $`L/E_\nu `$ independent and consistent with the calculated $`\nu _e`$ flux. The natural interpretation of the atmospheric data is in terms of $`\nu _\mu \to \nu _\tau `$ oscillations, with maximal or near-maximal mixing and a neutrino mass-squared difference $`\delta m_{\mathrm{atm}}^2\simeq 3\times 10^{-3}\mathrm{eV}^2`$. This is supported by SuperK measurements of the zenith angle dependence, which due to matter effects is different for $`\nu _\mu \to \nu _\tau `$ and $`\nu _\mu \to \nu _s`$, and by $`\pi ^0`$ production in neutral current events, which taken together rule out $`\nu _\mu \to \nu _s`$ at 99% C.L .
There are other indications of neutrino oscillation phenomena at $`\delta m^2`$ values distinct from the $`\delta m_{\mathrm{atm}}^2`$ scale. A long-standing puzzle is the observed deficits of solar neutrinos compared to the flux predictions of the Standard Solar Model. There are four regions of oscillation parameters $`\delta m_{\mathrm{solar}}^2,\mathrm{sin}^22\theta _{\mathrm{solar}}`$ that can accommodate the present solar data. Three of the solutions involve resonance enhancements due to the coherent scattering of $`\nu _e`$ from the dense solar medium. The Large Angle Matter (LAM) solution has $`\delta m_{\mathrm{solar}}^2\simeq 3\times 10^{-5}\mathrm{eV}^2`$, the Small Angle Matter (SAM) solution has $`\delta m_{\mathrm{solar}}^2\simeq 5\times 10^{-6}\mathrm{eV}^2`$, and the Long Oscillation Wavelength (LOW) solution has $`\delta m_{\mathrm{solar}}^2\simeq 10^{-7}\mathrm{eV}^2`$ and large mixing. Vacuum Oscillation (VO) solutions have $`\delta m^2\simeq 10^{-10}\mathrm{eV}^2`$ and large mixing. The Sudbury Neutrino Observatory (SNO) experiment in progress may be able to exclude some of these solar solutions.
In addition there is some evidence for $`\nu _\mu \to \nu _e`$ and $`\overline{\nu }_\mu \to \overline{\nu }_e`$ oscillations from an accelerator experiment (LSND) at Los Alamos. The observed event rates correspond to $`\mathrm{sin}^22\theta _{\mathrm{LSND}}\simeq 10^{-2}`$ with $`\delta m_{\mathrm{LSND}}^2\simeq 1\mathrm{eV}^2`$; a sizeable range of $`\delta m^2`$ values above 0.1 $`\mathrm{eV}^2`$ is actually allowed. The mini-BooNE experiment at Fermilab, scheduled to start collecting data in December 2001 with first results expected by 2003, will determine whether the LSND effect is real.
In the next phase of neutrino oscillation studies, long-baseline experiments are expected to confirm the $`\nu _\mu \to \nu _\mu `$ disappearance oscillations at the $`\delta m_{\mathrm{atm}}^2`$ scale. The K2K experiment from KEK to SuperK, with a baseline of $`L\simeq 250`$ km and a mean neutrino energy of $`E_\nu \simeq 1.4`$ GeV, is underway. The MINOS experiment from Fermilab to Soudan, with a longer baseline $`L\simeq 730`$ km and higher mean energies $`E_\nu =3`$, 6 or 12 GeV, is under construction, and the ICANOE and OPERA experiments, with $`E_\nu =17`$ GeV and baselines $`L\simeq 730`$ km from CERN to Gran Sasso, have been proposed. The various experiments with dominant $`\nu _\mu `$ and $`\overline{\nu }_\mu `$ beams will securely establish the oscillation phenomena and may measure $`\delta m_{\mathrm{atm}}^2`$ to a precision of order 10%. Experiments with higher mean neutrino energies should be able to observe $`\tau `$ production.
Further exploration of the neutrino mixing and mass-squared parameter space will require higher intensity neutrino beams and $`\nu _e,\overline{\nu }_e`$ beams along with $`\nu _\mu ,\overline{\nu }_\mu `$. To provide these neutrino beams, muon storage rings have been proposed in which the muons decay in a long straight neutrino beam-forming section, and the muons are produced by a muon-collider type muon source . These “neutrino factories” are now under serious consideration. The resulting neutrino beams would be sufficiently intense to produce thousands of oscillation events in a reasonably sized detector (10–50 kt) at distances up to the Earth’s diameter. Some initial studies have been made of the physics capabilities of such machines as a function of the stored muon energy $`E_\mu `$, baseline $`L`$, and intensity $`I`$. The focus of the present paper is how to best choose $`E_\mu `$, $`L`$ and $`I`$ to maximize the physics output at an entry-level neutrino factory (hereafter referred to as ENuF) and beyond.
The present work expands on previous studies in several ways. First, we consider the minimal $`E_\mu `$ and $`I`$ needed to accomplish our physics goals. Second, we consider the consequences of a number of model scenarios that can accommodate the atmospheric and solar oscillation indications. Third, we investigate possibilities for measuring CP-violating phases. Finally, we investigate possibilities for observing $`\nu _e\nu _\tau `$ oscillations.
## II Theoretical overview
### A Oscillation Formalism
The neutrino flavor eigenstates $`\nu _\alpha `$ are related to the mass eigenstates $`\nu _j`$ in vacuum by a unitary matrix $`U`$,
$$|\nu _\alpha \rangle =\underset{j}{\sum }U_{\alpha j}|\nu _j\rangle .$$
(1)
The effect of matter on $`\nu _e`$ beams has important consequences for long-baseline experiments. The propagation through matter is described by the evolution equation
$$i\frac{d|\nu _\alpha \rangle }{dx}=\underset{\beta }{\sum }\frac{1}{2E_\nu }\left[\underset{j}{\sum }\delta m_{j1}^2U_{\alpha j}U_{\beta j}^{\ast }+A\delta _{\alpha e}\delta _{\beta e}\right]|\nu _\beta \rangle ,$$
(2)
where $`x=ct`$ and $`A/(2E_\nu )`$ is the amplitude for coherent scattering of $`\nu _e`$ on electrons, and
$$A=2\sqrt{2}G_FY_e\rho E_\nu =1.52\times 10^{-4}\mathrm{eV}^2Y_e\rho (\mathrm{g}/\mathrm{cm}^3)E(\mathrm{GeV}).$$
(3)
Here $`Y_e(x)`$ is the electron fraction and $`\rho (x)`$ is the matter density. The neutrino oscillation probabilities are then $`P(\nu _\alpha \to \nu _\beta )=|\langle \nu _\beta (x=L)|\nu _\alpha (x=0)\rangle |^2`$.
We solve this evolution equation numerically taking into account the $`x`$-dependence of the density using the Preliminary Reference Earth model. We have found that for $`L`$ less than about 3000 km (in which the entire neutrino path is in the upper mantle and the density is approximately constant), the results of the exact propagation and those obtained assuming constant density agree to within a few percent. However, for larger $`L`$ (where the neutrino path partially traverses the lower mantle and the density is no longer nearly constant) the assumption of constant density is no longer valid. For example, for $`L=7332`$ km (the Fermilab to Gran Sasso distance), event rate predictions assuming constant density can be wrong by as much as 40%. We always use the numerical solution of Eq. (2) in our calculations.
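For orientation, the constant-density limit of Eq. (2) can be integrated in a few lines; the sketch below (our own construction, not the authors' code, which uses the full radially dependent Earth profile) propagates all three flavors with a matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

def prob_matrix(U, dm21, dm31, E_GeV, L_km, rho=3.0, Ye=0.5, antinu=False):
    """P(nu_alpha -> nu_beta) for constant density; dm2's in eV^2.
    Returns a 3x3 array indexed as P[beta, alpha], (e, mu, tau) = (0, 1, 2)."""
    A = 1.52e-4 * Ye * rho * E_GeV              # matter amplitude, Eq. (3)
    V = np.zeros((3, 3), dtype=complex)
    V[0, 0] = A
    if antinu:
        U, V = U.conj(), -V
    M = np.diag([0.0, dm21, dm31]).astype(complex)
    H = (U @ M @ U.conj().T + V) / (2 * E_GeV)  # in eV^2/GeV
    # 2 * 1.267 converts (eV^2/GeV) * km into a phase in radians
    S = expm(-1j * 2 * 1.267 * H * L_km)
    return np.abs(S) ** 2
```

For example, `prob_matrix(U, 5e-5, 3.5e-3, 15.0, 2800.0)[1, 0]` would give $`P(\nu _e\to \nu _\mu )`$ for a LAM-like spectrum, with `U` the corresponding mixing matrix.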
For three neutrinos (with $`\alpha =e,\mu ,\tau `$ and $`j=1,2,3`$) the Maki-Nakagawa-Sakata (MNS) mixing matrix will be parameterized by
$$U=\left(\begin{array}{ccc}c_{13}c_{12}& c_{13}s_{12}& s_{13}e^{-i\delta }\\ -c_{23}s_{12}-s_{13}s_{23}c_{12}e^{i\delta }& c_{23}c_{12}-s_{13}s_{23}s_{12}e^{i\delta }& c_{13}s_{23}\\ s_{23}s_{12}-s_{13}c_{23}c_{12}e^{i\delta }& -s_{23}c_{12}-s_{13}c_{23}s_{12}e^{i\delta }& c_{13}c_{23}\end{array}\right),$$
(4)
where $`c_{jk}\equiv \mathrm{cos}\theta _{jk}`$, $`s_{jk}\equiv \mathrm{sin}\theta _{jk}`$, and $`\delta `$ is the $`CP`$-nonconserving phase. Two additional diagonal phases are present in $`U`$ for Majorana neutrinos, but these do not affect oscillation probabilities.
### B Measuring $`\theta _{13}`$ and the Sign of $`\delta m_{32}^2`$
The oscillation channels $`\nu _e\to \nu _\mu `$ and $`\overline{\nu }_e\to \overline{\nu }_\mu `$ can be explored for the first time at a neutrino factory. In addition to a first observation of these transitions, the mixing angle $`\theta _{13}`$ can be measured, the sign of $`\delta m_{\mathrm{atm}}^2`$ can be determined from matter effects, and the $`CP`$ phase $`\delta `$ could be measured or bounded. With this information, models of oscillation phenomena can be tested and discriminated.
The charged current (CC) interactions resulting from $`\nu _e\to \nu _\mu `$ and $`\overline{\nu }_e\to \overline{\nu }_\mu `$ oscillations produce “wrong-sign” muons (muons of opposite charge from the neutrinos in the beam). In the leading-oscillation approximation the probability for $`\nu _e\to \nu _\mu `$ in 3-neutrino oscillations through matter of constant density is
$$P(\nu _e\to \nu _\mu )=s_{23}^2\mathrm{sin}^22\theta _{13}^m\mathrm{sin}^2\mathrm{\Delta }_{32}^m,$$
(5)
where
$$\mathrm{sin}^22\theta _{13}^m=\frac{\mathrm{sin}^22\theta _{13}}{\left(\frac{A}{\delta m_{32}^2}-\mathrm{cos}2\theta _{13}\right)^2+\mathrm{sin}^22\theta _{13}}$$
(6)
and
$$\mathrm{\Delta }_{32}^m=\frac{1.27\delta m_{32}^2(\mathrm{eV}^2)L(\mathrm{km})}{E_\nu (\mathrm{GeV})}\sqrt{\left(\frac{A}{\delta m_{32}^2}-\mathrm{cos}2\theta _{13}\right)^2+\mathrm{sin}^22\theta _{13}}.$$
(7)
Here $`A`$ is the matter amplitude of Eq. (3). Thus even with matter effects the $`\nu _e\to \nu _\mu `$ probability is approximately proportional to $`\mathrm{sin}^22\theta _{13}`$. The experimental sensitivity of the $`\nu _e\to \nu _\mu `$ measurements therefore scales almost linearly with $`\mathrm{sin}^22\theta _{13}`$.
For $`\overline{\nu }_e\to \overline{\nu }_\mu `$ oscillations, the sign of $`A`$ is reversed in Eqs. (6) and (7). For $`\mathrm{sin}^22\theta _{13}\ll 1`$ and $`A\delta m_{32}^2>0`$, $`P(\nu _e\to \nu _\mu )`$ is enhanced and $`P(\overline{\nu }_e\to \overline{\nu }_\mu )`$ is suppressed by matter effects; the converse holds for $`A\delta m_{32}^2<0`$. Thus a comparison of the $`\nu _e\to \nu _\mu `$ and $`\overline{\nu }_e\to \overline{\nu }_\mu `$ CC rates gives information on the sign of $`\delta m_{32}^2`$.
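Equations (5)-(7) translate directly into code. The following sketch (ours; the default parameter values follow Table I and a typical upper-mantle density) returns the leading-oscillation appearance probability, flipping the sign of $`A`$ for antineutrinos:

```python
import numpy as np

def P_emu(E, L, dm32=3.5e-3, s2_2th13=0.04, s2th23=0.5,
          rho=3.0, Ye=0.5, antinu=False):
    """P(nu_e -> nu_mu) of Eqs. (5)-(7); E in GeV, L in km, dm32 in eV^2."""
    A = 1.52e-4 * Ye * rho * E                   # Eq. (3)
    if antinu:
        A = -A
    c2th13 = np.sqrt(1 - s2_2th13)               # cos(2 theta_13)
    x = A / dm32 - c2th13
    s2_2th13_m = s2_2th13 / (x**2 + s2_2th13)                 # Eq. (6)
    Delta = 1.27 * dm32 * L / E * np.sqrt(x**2 + s2_2th13)    # Eq. (7)
    return s2th23 * s2_2th13_m * np.sin(Delta) ** 2           # Eq. (5)

# matter enhances the neutrino and suppresses the antineutrino channel
# for positive dm32 (and vice versa):
print(P_emu(15.0, 2800.0), P_emu(15.0, 2800.0, antinu=True))
```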
### C $`\nu _e\to \nu _\tau `$ oscillations
The availability of $`\nu _e`$ and $`\overline{\nu }_e`$ beams from a neutrino factory would allow a search for $`\nu _e\to \nu _\tau `$ oscillations. In the leading-oscillation approximation the probability for $`\nu _e\to \nu _\tau `$ oscillations through matter of constant density is
$$P(\nu _e\to \nu _\tau )=c_{23}^2\mathrm{sin}^22\theta _{13}^m\mathrm{sin}^2\mathrm{\Delta }_{32}^m.$$
(8)
Thus the $`\nu _e\to \nu _\tau `$ probability in matter is also approximately proportional to $`\mathrm{sin}^22\theta _{13}`$.
### D $`CP`$ Violation
In vacuum, $`CP`$ violation in the lepton sector can be explored by comparing oscillation probabilities involving neutrinos with the corresponding probabilities for oscillations involving antineutrinos. For three-neutrino oscillations in vacuum, the probability difference is
$$P(\nu _\alpha \to \nu _\beta )-P(\overline{\nu }_\alpha \to \overline{\nu }_\beta )=\mp 4J(\mathrm{sin}2\mathrm{\Delta }_{32}+\mathrm{sin}2\mathrm{\Delta }_{21}+\mathrm{sin}2\mathrm{\Delta }_{13}),$$
(9)
where $`\mathrm{\Delta }_{jk}\equiv 1.27\delta m_{jk}^2(\mathrm{eV}^2)L(\mathrm{km})/E_\nu (\mathrm{GeV})`$ and $`J`$ is the $`CP`$-violating invariant , which can be defined as $`J=\mathrm{Im}\{U_{e2}U_{e3}^{\ast }U_{\mu 2}^{\ast }U_{\mu 3}\}`$. The minus (plus) sign in Eq. (9) is used when $`\alpha `$ and $`\beta `$ are in cyclic (anticyclic) order, where cyclic order is defined as $`e\to \mu \to \tau `$. For the mixing matrix in Eq. (4),
$$J=\frac{1}{8}\mathrm{sin}2\theta _{12}\mathrm{sin}2\theta _{13}\mathrm{sin}2\theta _{23}\mathrm{cos}\theta _{13}\mathrm{sin}\delta .$$
(10)
Thus even for $`\delta =\pm 90^{}`$, $`J`$ will be small when the $`\theta _{12},\theta _{13}`$ mixing angles are small.
For $`|\delta m_{32}^2|\gg |\delta m_{21}^2|`$, the $`CP`$-violating probability difference for $`\nu _e\to \nu _\mu `$ is given approximately by
$$P(\nu _e\to \nu _\mu )-P(\overline{\nu }_e\to \overline{\nu }_\mu )\simeq -4J\mathrm{sin}\left(\frac{2.54\delta m_{21}^2(\mathrm{eV}^2)L(\mathrm{km})}{E_\nu (\mathrm{GeV})}\right).$$
(11)
It is evident from Eq. (11) that $`CP`$ violation is only appreciable in vacuum when the sub-leading oscillations (in this case oscillations due to $`\delta m_{21}^2`$) begin to develop. The same qualitative results are expected when neutrinos propagate through matter, although the oscillation probabilities are changed.
## III Family of Scenarios
With three neutrinos, there are only two distinct $`\delta m^2`$ values. The evidence for atmospheric, solar and accelerator neutrino oscillations at three different $`\delta m^2`$ scales cannot be simultaneously accommodated in a three-neutrino framework. Here we set the accelerator evidence aside and use three-neutrino oscillations to explain atmospheric and solar data; the oscillation scale of the accelerator data is more relevant to short-baseline experiments.
A family of representative scenarios in Table I, defined in the ongoing Fermilab long-baseline workshop study, will be adopted for our subsequent analysis; the central value $`|\delta m_{32}^2|=3.5\times 10^{-3}\mathrm{eV}^2`$ is based on the published SuperK data. With further data accumulation, a slightly lower central value of $`2.8\times 10^{-3}\mathrm{eV}^2`$ is indicated, with a 90% C.L. range of $`2`$–$`5\times 10^{-3}\mathrm{eV}^2`$. The forms of the mixing matrix $`U`$ in these scenarios are
$`U(\mathrm{LAM})`$ $`=`$ $`\left(\begin{array}{ccc}0.846& 0.523& 0.101e^{-i\delta }\\ -0.372-0.060e^{i\delta }& 0.602-0.037e^{i\delta }& 0.704\\ 0.372-0.060e^{i\delta }& -0.602-0.037e^{i\delta }& 0.704\end{array}\right),`$ (15)
$`U(\mathrm{SAM})`$ $`=`$ $`\left(\begin{array}{ccc}0.994& 0.050& 0.101e^{-i\delta }\\ -0.035-0.071e^{i\delta }& 0.706-0.004e^{i\delta }& 0.704\\ 0.035-0.071e^{i\delta }& -0.706-0.004e^{i\delta }& 0.704\end{array}\right),`$ (19)
$`U(\mathrm{LOW})`$ $`=`$ $`\left(\begin{array}{ccc}0.807& 0.582& 0.101e^{-i\delta }\\ -0.413-0.058e^{i\delta }& 0.574-0.042e^{i\delta }& 0.704\\ 0.413-0.058e^{i\delta }& -0.574-0.042e^{i\delta }& 0.704\end{array}\right),`$ (23)
$`U(\mathrm{BIMAX})`$ $`=`$ $`\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}& 0\\ -\frac{1}{2}& \frac{1}{2}& \frac{1}{\sqrt{2}}\\ \frac{1}{2}& -\frac{1}{2}& \frac{1}{\sqrt{2}}\end{array}\right).`$ (27)
Since $`\mathrm{sin}^22\theta _{13}`$ is not well-known, sometimes we will consider the LAM solution with $`\mathrm{sin}^22\theta _{13}=0.004`$, for which
$$U(\mathrm{LAM}^{})=\left(\begin{array}{ccc}0.850& 0.525& 0.032e^{-i\delta }\\ -0.372-0.019e^{i\delta }& 0.602-0.012e^{i\delta }& 0.707\\ 0.372-0.019e^{i\delta }& -0.602-0.012e^{i\delta }& 0.707\end{array}\right).$$
(28)
Scenarios 1–3 in Table I represent three-neutrino oscillation explanations of the atmospheric and solar deficits, with the LAM, SAM, and LOW solar options, respectively. We do not address the VO solar solution separately, since the sub-leading $`\delta m^2`$ effects will not be significant and the LOW and VO scenarios will be indistinguishable. Scenario 4 with bimaximal atmospheric and LAM mixing is interesting because the leading oscillation decouples in the $`\nu _e\nu _\mu `$ channel ($`U_{e3}=0`$) and the sub-leading oscillations will therefore be more visible. However, in this scenario there are no matter effects on $`\nu _e`$ propagation and the sign of $`\delta m^2`$ cannot be measured; also CP will be conserved.
The size of the Jarlskog invariant (modulo $`\mathrm{sin}\delta `$) is also shown in Table I; it is largest for the LAM and LOW scenarios (which have only one small angle), smaller for SAM (which has two small angles), and vanishes for the BIMAX scenario (in which one angle is zero). Since the observability of $`CP`$ violation depends on both $`J`$ and the size of the sub-leading oscillation scale $`\delta m_{21}^2`$ (see Eq. (11)), one expects appreciable $`CP`$ violation only in the LAM scenario.
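Eq. (10) makes these sizes easy to reproduce; a quick check (ours; the $`\mathrm{sin}^22\theta `$ inputs are back-computed from the scenario matrices, so the values are only LAM-like) reads:

```python
import numpy as np

def jarlskog(s2_2th12, s2_2th13, s2_2th23, delta=np.pi / 2):
    """J of Eq. (10) from sin^2(2 theta_jk) values and the CP phase delta."""
    th12, th13, th23 = (0.5 * np.arcsin(np.sqrt(s))
                        for s in (s2_2th12, s2_2th13, s2_2th23))
    return (0.125 * np.sin(2 * th12) * np.sin(2 * th13) * np.sin(2 * th23)
            * np.cos(th13) * np.sin(delta))

print(jarlskog(0.8, 0.04, 1.0))   # LAM-like angles with maximal phase
```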
The neutrino mass ordering can in principle be determined by the effects of matter on the leading electron neutrino oscillation. To illustrate this, Fig. 1 shows the two possible three-neutrino mass patterns. Figure 1a, with one large mass $`m_3`$, has atmospheric neutrino oscillations with $`\delta m_{32}^2>0`$. Figure 1b, with two large masses $`m_2,m_1`$, has $`\delta m_{32}^2<0`$. For scenario 4, with no $`\nu _e`$ participation in the leading oscillations, there are no matter effects at the $`\delta m_{32}^2`$ scale to determine the sign of $`\delta m_{32}^2`$.
## IV An Entry-Level Neutrino Factory
Neutrino factories require the development of new accelerator sub-systems which are technically challenging. The R&D required for a full-intensity muon source might take many years. It is reasonable to consider a strategy in which the R&D needed for the first neutrino factory is minimized by building, as a first step, a muon source that provides just enough muon decays per year to make contact with the interesting physics. If we also wish to minimize the cost of an entry-level facility, we must minimize the muon acceleration system and hence the energy of the muons decaying within the storage ring. In this section we consider, within the framework of the scenarios listed in Table I, the minimum muon energy and beam intensity needed at an ENuF.
We begin by defining our entry-level physics goal. We take this to be the first observation of $`\nu _e\to \nu _\mu `$ oscillations at the 10-event-per-year level. The signal will be the appearance of CC interactions tagged by a wrong-sign muon. To identify signal events we must be able to identify muons and measure their charge in the presence of the accompanying hadronic shower from the remnants of the target nucleon. Muons can only be cleanly identified and measured if their energy exceeds a threshold $`E_{\mathrm{min}}`$, which in practice is expected to be a few GeV. This places an effective lower bound on the acceptable energy of the muon storage ring. To illustrate this, consider the $`\nu _e\to \nu _\mu `$ signal in a detector 2800 km downstream of a 20 GeV neutrino factory. The predicted measured energy distributions for CC events tagged by wrong-sign muons are shown for the LAM scenario (Table I) in Fig. 2 as a function of $`E_{\mathrm{min}}`$. As $`E_{\mathrm{min}}`$ increases the signal efficiency decreases. For example, with $`E_{\mathrm{min}}=`$ 2 (4) [6] GeV the resulting signal loss is 18% (36%) [55%]. In addition, as $`E_{\mathrm{min}}`$ increases the measured signal distributions become increasingly biased towards higher energies, and the information on the oscillations encoded in the energy distribution is lost. We conclude from Fig. 2 that with a 20 GeV storage ring we can probably tolerate an $`E_{\mathrm{min}}`$ of a few GeV, but would not want to decrease the storage ring energy below 20 GeV. Hence, we adopt 20 GeV as the minimum storage ring energy for an ENuF. In the following, our calculations include a muon threshold $`E_{\mathrm{min}}=4`$ GeV, and for simplicity we assume the detection efficiency is 0 for signal events with $`E_\mu <E_{\mathrm{min}}`$ and 1 for $`E_\mu >E_{\mathrm{min}}`$.
We next consider the muon beam intensity required to meet our entry-level physics goal if the storage ring energy is 20 GeV. To minimize the required beam intensity we must maximize the detector mass $`(M)`$. Recently 50 kt has been considered as a plausible although ambitious $`M`$. We therefore choose $`M=50`$ kt. In addition to the neutrino factory beam properties and the detector mass, the signal event rate will depend upon the baseline and the oscillation parameters. Since, to a good approximation, the signal rate is proportional to $`\mathrm{sin}^22\theta _{13}`$, it is useful to define the $`\mathrm{sin}^22\theta _{13}`$ “reach” for an experiment as the value of $`\mathrm{sin}^22\theta _{13}`$ for which our physics goal (in this case the observation of 10 signal events per year) will be met. Setting the number of useful muon decays per year to $`10^{19}`$, the $`\mathrm{sin}^22\theta _{13}`$ reach is shown in Fig. 3 as a function of baseline and $`\delta m_{32}^2`$ with the other oscillation parameters corresponding to the LAM scenario. The calculational methods are described in Ref. . The $`\mathrm{sin}^22\theta _{13}`$ reach degrades slowly as $`L`$ increases, and improves with increasing $`|\delta m_{32}^2|`$, varying by about a factor of 5 over the $`\delta m_{32}^2`$ range currently favored by the SuperK results. Note that, for $`\delta m_{32}^2`$ in the center of the SuperK range, our entry level goal would be met with a 20 GeV storage ring and $`10^{19}`$ decays per year provided $`\mathrm{sin}^22\theta _{13}`$ exceeds approximately 0.01, which is more than an order of magnitude below the currently excluded region.
We now consider how the muon beam intensity required to meet our entry-level physics goal varies with the storage ring energy. We will choose a baseline of 2800 km, motivated by a consideration of the physics sensitivity of an upgraded ENuF (see next section). The number of muon decays required to meet our goal is shown in Fig. 4 versus the muon storage ring energy for the LAM, SAM, LOW, and BIMAX oscillation scenarios. Note that:
(i) The energy-dependent intensities needed for the SAM and LOW scenarios are indistinguishable.
(ii) Due to the contributions from sub-leading oscillations, the intensity needed for the LAM scenario is slightly less than that needed for the SAM and LOW scenarios.
(iii) With a 20 GeV storage ring, $`2\times 10^{18}`$ muon decays per year would meet our entry-level physics goals for the LAM, SAM, and LOW scenarios. The dependence of the required muon intensity $`I`$ on the storage ring energy is approximately given by $`I\propto E^{-1.6}`$.
(iv) For the BIMAX scenario, in which $`\mathrm{sin}^22\theta _{13}=0`$, only the sub-leading $`\delta m^2`$ scale contributes to the signal. With a 20 GeV storage ring, a few $`\times 10^{20}`$ muon decays per year would be needed to observe $`\nu _e\to \nu _\mu `$ oscillations. Although this scenario would be bad news for a low-intensity neutrino factory, oscillations driven by the sub-leading $`\delta m^2`$ scale might be studied with a higher-intensity muon source.
It is straightforward to use the curves in Fig. 4 to infer the intensity required to meet our entry-level goals for LAM, SAM, and LOW-type scenarios with values of $`\mathrm{sin}^22\theta _{13}`$ other than 0.04. For example, if $`\mathrm{sin}^22\theta _{13}=0.01`$ (a factor of 20 below the currently excluded value, and a factor of 4 below the value used for the curves in Fig. 4) we must multiply the beam intensity indicated in Fig. 4 by a factor of 4 to achieve our entry-level goal. Provided a 50 kt detector with good signal efficiency is practical, within the framework of LAM, SAM and LOW-type scenarios a 20 GeV storage ring in which there are $`10^{19}`$ muon decays per year in the beam-forming straight section would enable the first observation of $`\nu _e\to \nu _\mu `$ oscillations provided $`\mathrm{sin}^22\theta _{13}>0.01`$. Note that if $`\mathrm{sin}^22\theta _{13}>0.01`$ the next generation of long-baseline accelerator experiments (e.g. MINOS) is expected to make a first observation of $`\nu _\mu \to \nu _e`$ oscillations. These experiments would not be able to measure matter effects and determine the sign of $`\delta m_{32}^2`$. Furthermore, if $`\mathrm{sin}^22\theta _{13}\lesssim 0.01`$ then with several years of running the entry-level neutrino factory data could be used to place a limit on this parameter about an order of magnitude below the MINOS limit.
Consider next the prospects for exploiting $`\nu _e\rightarrow \nu _\mu `$ measurements to determine the sign of $`\delta m_{32}^2`$. In Ref. we have shown that, in the LAM scenario, the sign of $`\delta m_{32}^2`$ can be determined by comparing the wrong-sign muon rates and/or the associated CC event energy distributions when respectively positive and negative muons are stored in the ring. The most sensitive technique to discriminate $`\delta m_{32}^2>0`$ from $`\delta m_{32}^2<0`$ would be to take data when there were alternately positive and negative muons stored in the neutrino factory, and measure the resulting wrong-sign muon event energy distributions together with the $`\nu _\mu \rightarrow \nu _\mu `$ and $`\overline{\nu }_\mu \rightarrow \overline{\nu }_\mu `$ event energy distributions. The four distributions can then be simultaneously fit with the oscillation parameters $`\delta m_{32}^2`$ (including its sign), $`\mathrm{sin}^22\theta _{13}`$, and $`\mathrm{sin}^22\theta _{23}`$ left as free parameters . In the following we take a simpler approach to demonstrate that, provided $`L`$ is large enough, a neutrino factory that permitted the observation of 10 $`\nu _e\rightarrow \nu _\mu `$ events per year would also enable the sign of $`\delta m_{32}^2`$ to be determined.
We begin by defining the ratio:
$$R_{e\mu }=\frac{N(\overline{\nu }_e\rightarrow \overline{\nu }_\mu )}{N(\nu _e\rightarrow \nu _\mu )}$$
(29)
$`R_{e\mu }`$ is just the ratio of wrong-sign muon rates when respectively negative and positive muons are stored in the neutrino factory. Figure 5 shows $`R_{e\mu }`$ as a function of $`L`$ for $`E_\mu =20`$ GeV and $`\delta m_{32}^2=\pm 3.5\times 10^{-3}`$ eV<sup>2</sup>/c<sup>4</sup>. Note that when $`L>2000`$ km the ratio $`R_{e\mu }`$ for positive $`\delta m_{32}^2`$ is more than a factor of 5 greater than the value for negative $`\delta m_{32}^2`$. As an example, consider a 50 kt detector 2800 km downstream of a 20 GeV storage ring in which there are $`10^{19}`$ muon decays per year in the beam forming straight section. Suppose that $`\mathrm{sin}^22\theta _{13}=0.01`$, and assume we know that $`|\delta m_{32}^2|\simeq 3.5\times 10^{-3}`$ eV<sup>2</sup>/c<sup>4</sup> from $`\nu _\mu \rightarrow \nu _\mu `$ measurements, for example. If we store positive muons in the neutrino factory after one year we would expect to observe 11 wrong-sign muon events if $`\delta m_{32}^2>0`$ but only 2 events if $`\delta m_{32}^2<0`$. To reduce the uncertainties due to the lack of precise knowledge of the other oscillation parameters, we can then take data with negative muons stored. We would then expect to observe less than $`2`$ wrong-sign muon events per year if $`\delta m_{32}^2>0`$, but 6 events per year if $`\delta m_{32}^2<0`$. Clearly with these statistics and several years of data taking the sign of $`\delta m_{32}^2`$ could be established. From this example we conclude that with a few years of data taking a neutrino factory that enabled the observation of 10 $`\nu _e\rightarrow \nu _\mu `$ events per year would also enable the sign of $`\delta m_{32}^2`$ to be determined provided the baseline was sufficiently long ($`L>2000`$ km) so that the prediction for $`R_{e\mu }`$ changes by a large factor ($`>5`$) when the assumed sign of $`\delta m_{32}^2`$ is changed. Hence, provided $`\mathrm{sin}^22\theta _{13}>0.01`$, our entry-level neutrino factory would make the first observation of $`\nu _e\rightarrow \nu _\mu `$ oscillations, measure $`\mathrm{sin}^22\theta _{13}`$, and determine the pattern of neutrino masses.
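As a rough, statistics-only illustration of why the event counts quoted above suffice, the following sketch compares the two mass-ordering hypotheses with a Poisson likelihood ratio and the usual $`\sqrt{2\mathrm{\Delta }\mathrm{ln}L}`$ heuristic. Backgrounds and systematic uncertainties are ignored; the expected counts are exactly the ones quoted in the text.

```python
import math

# Statistics-only separation of dm2_32 > 0 from dm2_32 < 0 using the
# expected wrong-sign muon counts quoted above: (mu+ run, mu- run) =
# (11, 2) events/yr for dm2 > 0 and (2, 6) events/yr for dm2 < 0.

def poisson_logL(n, mu):
    return n * math.log(mu) - mu - math.lgamma(n + 1)

def median_llr(expect_true, expect_false, years):
    """Median log-likelihood ratio when the 'true' hypothesis holds."""
    llr = 0.0
    for mu_t, mu_f in zip(expect_true, expect_false):
        n = round(mu_t * years)            # Asimov-like median data set
        llr += poisson_logL(n, mu_t * years) - poisson_logL(n, mu_f * years)
    return llr

positive, negative = (11.0, 2.0), (2.0, 6.0)
for years in (1, 3):
    llr = median_llr(positive, negative, years)
    print(f"{years} yr: ~{math.sqrt(2 * llr):.1f} sigma (Wilks heuristic)")
```

One year of alternating running already gives a separation of several standard deviations in this crude counting estimate, in line with the conclusion above.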
## V One Step Beyond an Entry Level Neutrino Factory
An entry-level neutrino factory is attractive if there is a beam intensity and/or energy upgrade path that enables a more comprehensive physics program beyond the initial observation of $`\nu _e\rightarrow \nu _\mu `$ oscillations and determination of the sign of $`\delta m_{32}^2`$. In this section we consider the energy and/or intensity upgrades needed to achieve a reasonable upgrade physics goal, which we take to be the first observation of $`\nu _e\rightarrow \nu _\tau `$ oscillations.
In ref. we have shown that in a LAM-like scenario the $`\nu _e\rightarrow \nu _\tau `$ oscillation event rates in a multi-kt detector are expected to be significant if $`E_\mu `$ is 20 GeV or greater. In the following we consider the neutrino factory beam energy and intensity needed to make a first observation of $`\nu _e\rightarrow \nu _\tau `$ oscillations at the 10 event level in one year of running with a fully efficient detector (or several years with a realistic detector). Our detector must be able to measure $`\tau `$ appearance and, to separate the signal from $`\nu _\mu \rightarrow \nu _\tau `$ oscillation backgrounds, measure the sign of the charge of the $`\tau `$. Hybrid emulsion detectors in an external magnetic field provide an example of a candidate detector technology that might be used. Consideration of detector technologies and their performance is under study , and is outside the scope of the present paper. We will take $`M=5`$ kt as a plausible, but aggressive, choice for the detector mass.
Consider first an intensity-upgraded ENuF, namely a 20 GeV storage ring in which there are $`10^{20}`$ muon decays per year in the beam forming straight section. The $`\tau `$ appearance rates from $`\nu _e\rightarrow \nu _\tau `$ and $`\nu _\mu \rightarrow \nu _\tau `$ oscillations are shown as a function of $`\mathrm{sin}^22\theta _{13}`$ and $`\delta m_{32}^2`$ in Fig. 6 for a 5 kt detector at $`L=2800`$ km. In contrast to the $`\nu _\mu \rightarrow \nu _\tau `$ background rate, which is independent of $`\mathrm{sin}^22\theta _{13}`$, the $`\nu _e\rightarrow \nu _\tau `$ signal rate increases linearly with $`\mathrm{sin}^22\theta _{13}`$. Note that for the LAM scenario with $`\mathrm{sin}^22\theta _{13}=0.04`$ and $`\delta m_{32}^2=0.0035`$ eV<sup>2</sup>, there are about 10 signal events per $`10^{20}\mu ^+`$ decays, and 300 $`\nu _\mu \rightarrow \nu _\tau `$ background events. Thus it is desirable that the $`\tau `$ sign mis-determination be less than of order 1 in 100. Whether this requirement can be met by placing hybrid emulsion or liquid Argon detectors in a magnetic field, or by the development of new $`\nu _\tau `$ detector technology, remains to be seen.
Next consider the dependence of the $`\mathrm{sin}^22\theta _{13}`$ reach (for detecting 10 $`\nu _e\rightarrow \nu _\tau `$ events) on $`L`$ and the storage ring energy. Fixing $`\delta m_{32}^2=0.0035`$ eV<sup>2</sup>, the $`\mathrm{sin}^22\theta _{13}`$ reach is shown in Fig. 7 to improve with energy, and to be almost independent of $`L`$ over the range considered except at the highest energies and longest baselines, for which the reach is degraded. For $`L\sim 3000`$ km, an energy upgrade from 20 GeV to 50 GeV would improve the reach by about a factor of 5. The energy dependence of the muon intensity required to meet our $`\nu _e\rightarrow \nu _\tau `$ discovery goal is summarized in Fig. 4. We conclude that a neutrino factory consisting of a 20 GeV storage ring in which there are $`10^{20}`$ muon decays per year in the beam forming straight section would enable the first observation of $`\nu _e\rightarrow \nu _\tau `$ oscillations in LAM, SAM, and LOW-type scenarios with $`\mathrm{sin}^22\theta _{13}>0.01`$ provided a 5 kt detector with good $`\tau `$ signal efficiency and charge-sign determination is practical.
## VI Measuring the CP Non-Conserving Phase
In the event that the solar solution is LAM, CP non-conserving effects may be large enough to allow a measurement of the MNS phase $`\delta `$ at a high-intensity neutrino factory . The total rate of appearance events very strongly depends on the value of $`\mathrm{sin}^22\theta _{13}`$. For example, Figs. 8 and 9 show the event rates versus $`L`$ for the LAM solution in Table I with $`\delta =0`$ and $`\mathrm{sin}^22\theta _{13}=0.04`$, $`0.004`$, and $`0`$. For $`L`$ less than about 5000 km the event rates for the BIMAX solution are about 25% higher than the $`\mathrm{sin}^22\theta _{13}=0`$ curve. Although the rates decrease significantly with decreasing $`\mathrm{sin}^22\theta _{13}`$, even for $`\mathrm{sin}^22\theta _{13}=0`$ there is a residual signal from the sub-leading oscillation in the LAM scenario which may be detectable.
Figures 5, 10, and 11 show predictions for the CP dependent ratio $`R_{e\mu }`$ versus the baseline $`L`$ for a 50 kt detector in the LAM scenario with $`\delta m_{32}^2=3.5\times 10^{-3}`$ eV<sup>2</sup>. The error bars are representative statistical uncertainties. These figures present the results of calculations with (decays/year, $`\mathrm{sin}^22\theta _{13}`$) values of ($`10^{20}`$, 0.04), ($`10^{21}`$, 0.04), and ($`10^{21}`$, 0.004), respectively. Results for phases $`\delta =0^{\circ }`$, $`\delta =90^{\circ }`$, and $`\delta =-90^{\circ }`$ are shown in each case, for both positive and negative values of $`\delta m_{32}^2`$. For these values of $`\mathrm{sin}^22\theta _{13}`$ the event rates show a strong dependence on the sign of $`\delta m_{32}^2`$ (due to different matter effects for neutrinos and antineutrinos), and a smaller dependence on the $`CP`$-violating phase. Figure 12 shows the results for $`10^{21}`$ muons and $`\mathrm{sin}^22\theta _{13}=0.04`$ in finer detail. Note that at small distances $`R_{e\mu }`$ is not unity for $`\delta =0`$ (even though matter effects are small) because the $`\overline{\nu }_\mu `$ and $`\nu _\mu `$ CC cross sections are different. We note that the $`CP`$-violating effect is largest in the range $`L\approx 2000`$–3000 km, vanishes for $`L\approx 7000`$ km, and is nonzero but with large uncertainties for $`L>7000`$ km.
Similar calculations show that for the SAM and LOW model parameters $`R_{e\mu }`$ is essentially independent of $`\delta `$, verifying the conclusions of Sec. III that $`CP`$ violation is negligible in these scenarios. The effect of matter in the SAM and LOW scenarios, which depends on the sign of $`\delta m_{32}^2`$ and the size of $`\mathrm{sin}^22\theta _{13}`$, is similar to the LAM case.
A nonzero $`\mathrm{sin}^22\theta _{13}`$ is needed both to determine the sign of $`\delta m_{32}^2`$ via matter effects, and to have observable $`CP`$ violation from the sub-leading scale; however, whether matter or $`CP`$ violation gives the largest effect depends on the size of $`\mathrm{sin}^22\theta _{13}`$. This is illustrated in Fig. 13, which shows the ratio of wrong sign muon events versus $`\mathrm{sin}^22\theta _{13}`$ for our representative LAM solar solution with $`10^{21}`$ muons/year. For larger values of $`\mathrm{sin}^22\theta _{13}`$, say above about 0.001, the sign of $`\delta m_{32}^2`$ (through matter effects) has the largest effect on the ratio, while for $`\mathrm{sin}^22\theta _{13}<0.001`$ the value of $`\delta `$ (which largely determines the amount of $`CP`$ violation) has the largest effect.
Now consider the neutrino factory energy and intensity needed to begin to probe the CP phase $`\delta `$ in the LAM scenario. We will define the $`\mathrm{sin}^22\theta _{13}`$ reach as that value of $`\mathrm{sin}^22\theta _{13}`$ that (with a 50 kt detector and two years of data taking) will enable a 3$`\sigma `$ discrimination between (a) $`\delta =0`$ and $`\delta =\pi /2`$ and (b) $`\delta =0`$ and $`\delta =\pi /2`$. The measurement will be based on a comparison of wrong-sign muon rates when respectively positive and negative muons are alternately stored in the ring. The $`\mathrm{sin}^22\theta _{13}`$ reach when there are $`10^{21}`$ decays is shown for the 3$`\sigma `$ discrimination between $`\delta =0`$ and $`\pm \pi /2`$ in Fig. 14 as a function of baseline and stored muon energy. The optimum baseline is about 3000 km, for which the $`\mathrm{sin}^22\theta _{13}`$ reach is a little better (worse) than 0.01 for the $`\delta =\pi /2`$ ($`\delta =\pi /2`$) discrimination, and is almost independent of muon energy over the range considered (Fig. 4). Thus, a high intensity 20 GeV neutrino factory providing O($`10^{21}`$) muon decays might begin to probe CP violation in the lepton sector if the LAM scenario is the correct description of neutrino oscillations, and a 50 kt detector with good signal efficiency is practical. This conclusion is consistent with results presented in Ref. in which global fits to the measured oscillation distributions have been studied to determine the precision with which $`\delta `$ and $`\mathrm{sin}^22\theta _{13}`$ can be simultaneously measured at a neutrino factory.
## VII Summary
We briefly summarize the results of our study of the physics goals of an entry-level neutrino factory as follows:
1. An entry-level machine would make a first observation of $`\nu _e\rightarrow \nu _\mu `$ oscillations, measure the corresponding amplitude $`\mathrm{sin}^22\theta _{13}`$, and determine the sign of $`\delta m_{32}^2`$.
2. The $`\mathrm{sin}^22\theta _{13}`$ reach for the first observation of $`\nu _e\rightarrow \nu _\mu `$ oscillations and the measurement of the sign of $`\delta m_{32}^2`$ is insensitive to the solar neutrino oscillation solution (LAM, SAM, or LOW) within a 3-neutrino framework.
3. A 20 GeV neutrino factory providing $`10^{19}`$ muon decays per year would enable our entry-level physics goals to be met provided $`\mathrm{sin}^22\theta _{13}>0.01`$ and a detector with good muon charge-sign determination and a mass of 50 kt is practical. The required beam intensity might be a factor of 2–3 higher or lower depending on where within the SuperK range the $`\delta m_{32}^2`$ parameter sits. The event rates also depend on the muon energy detection threshold ($`E_{\mathrm{min}}=2`$–5 GeV) and we win or lose a factor of 2 in rates depending on how low this threshold can be pushed. To determine the sign of $`\delta m_{32}^2`$ a long baseline ($`L>2000`$ km) must be chosen.
4. A candidate for an intensity-upgraded neutrino factory would be a 20 GeV facility providing $`10^{20}`$ muon decays per year. In addition to the precise determination of the oscillation parameters, the upgraded neutrino source would enable the first observation of $`\nu _e\rightarrow \nu _\tau `$ oscillations provided $`\mathrm{sin}^22\theta _{13}>0.01`$ and a 5 kt detector with good $`\tau `$ signal efficiency and charge-sign determination is practical.
5. With a high-intensity neutrino factory providing a few $`\times 10^{20}`$ muon decays per year, the ratio of $`\mu ^+/\mu ^-`$ wrong-sign muon rates might enable detection of a maximal CP phase in the case of the LAM solar solution. If $`\mathrm{sin}^22\theta _{13}`$ is vanishingly small (for example, in the bimaximal mixing scenario), a first observation of $`\nu _e\rightarrow \nu _\mu `$ oscillations at a high intensity neutrino factory might provide a direct measurement of oscillations driven by the sub-leading $`\delta m^2`$ scale, although wrong-sign muon backgrounds might be problematic.
In conclusion, the required number of muon decays per year to achieve the various physics goals of interest is summarized in Fig. 4. At a 20 GeV neutrino factory $`10^{19}`$ muon decays are required for a thorough search for $`\nu _e\rightarrow \nu _\mu `$ appearance, $`10^{20}`$ decays to search for $`\nu _e\rightarrow \nu _\tau `$ oscillations, and $`10^{21}`$ decays to probe the sub-leading oscillation scale and detect CP violation effects in a three-neutrino LAM scenario.
###### Acknowledgements.
This research was supported in part by the U.S. Department of Energy under Grant No. DE-FG02-95ER40896 and in part by the University of Wisconsin Research Committee with funds granted by the Wisconsin Alumni Research Foundation.
# Linearization of analytic and non–analytic germs of diffeomorphisms of (ℂ,0)
## 1. introduction
In this paper we study the Siegel center problem \[He\]. Consider two subalgebras $`A_1\subseteq A_2`$ of $`z\mathbb{C}[[z]]`$ closed with respect to the composition of formal series. For example $`z\mathbb{C}[[z]]`$, $`z\mathbb{C}\{z\}`$ (the usual analytic case) or Gevrey–$`s`$ classes, $`s>0`$ (i.e. series $`F(z)=\sum _{n\ge 0}f_nz^n`$ such that there exist $`c_1,c_2>0`$ such that $`|f_n|\le c_1c_2^n(n!)^s`$ for all $`n\ge 0`$). Let $`F\in A_1`$ be such that $`F^{\prime }(0)=\lambda `$. We say that $`F`$ is linearizable in $`A_2`$ if there exists $`H\in A_2`$ tangent to the identity and such that
(1.1)
$$F\circ H=H\circ R_\lambda $$
where $`R_\lambda \left(z\right)=\lambda z`$. When $`|\lambda |\ne 1`$, the Poincaré–Koenigs linearization theorem assures that $`F`$ is linearizable in $`A_2`$. When $`|\lambda |=1`$, $`\lambda =e^{2\pi i\omega }`$, the problem is much more difficult, especially if one looks for necessary and sufficient conditions on $`\lambda `$ which assure that all $`F\in A_1`$ with the same $`\lambda `$ are linearizable in $`A_2`$. The only trivial case is $`A_2=z\mathbb{C}[[z]]`$ (formal linearization) for which one only needs to assume that $`\lambda `$ is not a root of unity, i.e. $`\omega \notin \mathbb{Q}`$.
In the analytic case $`A_1=A_2=z\mathbb{C}\{z\}`$ let $`S_\lambda `$ denote the space of analytic germs $`F\in z\mathbb{C}\{z\}`$ analytic and injective in the unit disk $`𝔻`$ and such that $`DF(0)=\lambda `$ (note that any $`F\in z\mathbb{C}\{z\}`$ tangent to $`R_\lambda `$ may be assumed to belong to $`S_\lambda `$ provided that the variable $`z`$ is suitably rescaled). Let $`R(F)`$ denote the radius of convergence of the unique tangent to the identity linearization $`H`$ associated to $`F`$. J.-C. Yoccoz \[Yo\] proved that the Brjuno condition (see Appendix A) is necessary and sufficient for having $`R(F)>0`$ for all $`F\in S_\lambda `$. More precisely Yoccoz proved the following estimate: assume that $`\lambda =e^{2\pi i\omega }`$ is a Brjuno number. There exists a universal constant $`C>0`$ (independent of $`\lambda `$) such that
$$|\mathrm{log}R(\omega )+B(\omega )|\le C$$
where $`R(\omega )=\underset{F\in S_\lambda }{inf}R(F)`$ and $`B`$ is the Brjuno function (A.3). Thus $`\mathrm{log}R(\omega )\ge -B(\omega )-C`$.
Brjuno’s proof \[Br\] gives an estimate of the form
$$\mathrm{log}R(\omega )\ge -C^{\prime }B(\omega )-C^{\prime \prime }$$
where one can choose $`C^{\prime }=2`$ \[He\]. Yoccoz’s proof is based on a geometric renormalization argument and Yoccoz himself asked whether or not it was possible to obtain $`C^{\prime }=1`$ by direct manipulation of the power series expansion of the linearization $`H`$ as in Brjuno’s proof (\[Yo\], Remarque 2.7.1, p. 21). Using an arithmetical lemma due to Davie \[Da\] (Appendix B) we give a positive answer (Theorem 2.1) to Yoccoz’s question.
We then consider the more general ultradifferentiable case $`z\mathbb{C}\{z\}\subseteq A_1\subseteq A_2`$. If one requires $`A_2=A_1`$, i.e. the linearization $`H`$ to be as regular as the given germ $`F`$, once again the Brjuno condition is sufficient. Our methods do not allow us to conclude that the Brjuno condition is also necessary, a statement which is in general false as we show in section 2.3 where we exhibit a Gevrey–like class for which the sufficient condition coincides with the optimal arithmetical condition for the associated linear problem. Nevertheless it is quite interesting to notice that, given any algebra of formal power series which is closed under composition (as it should be if one wishes to study conjugacy problems) and derivation, a germ in the algebra is linearizable in the same algebra if the Brjuno condition is satisfied.
If the linearization is allowed to be less regular than the given germ (i.e. $`A_1`$ is a proper subset of $`A_2`$) one finds a new arithmetical condition, weaker than the Brjuno condition. This condition is also optimal if the small divisors are replaced with their absolute values as we show in section 2.4. We discuss two examples, including Gevrey–$`s`$ classes.<sup>1</sup><sup>1</sup>1We refer the reader interested in small divisors and Gevrey–$`s`$ classes to \[Lo, GY1, GY2\].
Acknowledgements. We are grateful to J.–C. Yoccoz for a very stimulating discussion concerning Gevrey classes and small divisor problems.
## 2. the Siegel center problem
Our first step will be the formal solution of equation (1.1) assuming only that $`F\in z\mathbb{C}[[z]]`$. Since $`F\in z\mathbb{C}[[z]]`$ is assumed to be tangent to $`R_\lambda `$ then $`F(z)=\sum _{n\ge 1}f_nz^n`$ with $`f_1=\lambda `$. Analogously since $`H\in z\mathbb{C}[[z]]`$ is tangent to the identity $`H(z)=\sum _{n=1}^{\infty }h_nz^n`$ with $`h_1=1`$. If $`\lambda `$ is not a root of unity equation (1.1) has a unique solution $`H\in z\mathbb{C}[[z]]`$ tangent to the identity: the power series coefficients satisfy the recurrence relation
(2.1)
$$h_1=1,\qquad h_n=\frac{1}{\lambda ^n-\lambda }\underset{m=2}{\overset{n}{\sum }}f_m\underset{n_1+\cdots +n_m=n,\,n_i\ge 1}{\sum }h_{n_1}\cdots h_{n_m}.$$
In \[Ca\] it is shown how to generalize the classical Lagrange inversion formula to non–analytic inversion problems on the field of formal power series so as to obtain an explicit non–recursive formula for the power series coefficients of $`H`$.
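As a concrete illustration of the recurrence (2.1), the following sketch computes the coefficients $`h_n`$ numerically for the quadratic germ $`F(z)=\lambda z+z^2`$ (an illustrative choice: $`f_2=1`$ and all other $`f_m=0`$) and estimates $`|h_n|^{1/n}`$, whose growth is controlled by the small divisors $`\lambda ^n-\lambda `$. The truncation order is arbitrary.

```python
from cmath import exp, pi

# Coefficients h_n of the formal linearization from the recurrence (2.1),
# for the illustrative quadratic germ F(z) = lambda*z + z^2 (f_2 = 1,
# all other f_m = 0); the truncation order is an arbitrary choice.

def comp_sum(h, n, m):
    # sum of h_{n_1}...h_{n_m} over n_1 + ... + n_m = n with n_i >= 1
    if m == 1:
        return h[n]
    return sum(h[k] * comp_sum(h, n - k, m - 1) for k in range(1, n - m + 2))

def linearization_coeffs(omega, f, n_max):
    """f maps m >= 2 to f_m; returns [h_1, ..., h_{n_max}]."""
    lam = exp(2j * pi * omega)
    h = [0.0, 1.0]                          # h[0] unused, h_1 = 1
    for n in range(2, n_max + 1):
        rhs = sum(fm * comp_sum(h, n, m) for m, fm in f.items() if m <= n)
        h.append(rhs / (lam ** n - lam))
    return h[1:]

golden = (5 ** 0.5 - 1) / 2      # a Brjuno (indeed Diophantine) number
hs = linearization_coeffs(golden, {2: 1.0}, 30)
print("|h_30|^(1/30) =", abs(hs[-1]) ** (1.0 / 30))
```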
### 2.1. The analytic case: a direct proof of Yoccoz’s lower bound
Let $`S_\lambda `$ denote the space of germs $`F\in z\mathbb{C}\{z\}`$ analytic and injective in the unit disk $`𝔻=\{z\in \mathbb{C},|z|<1\}`$ such that $`DF(0)=\lambda `$ and assume that $`\lambda =e^{2\pi i\omega }`$ with $`\omega \in \mathbb{R}\setminus \mathbb{Q}`$. With the topology of uniform convergence on compact subsets of $`𝔻`$, $`S_\lambda `$ is a compact space. Let $`H_F\in z\mathbb{C}[[z]]`$ denote the unique tangent to the identity formal linearization associated to $`F`$, i.e. the unique formal solution of (1.1). Its power series coefficients are given by (2.1). Let $`R(F)`$ denote the radius of convergence of $`H_F`$. Following Yoccoz (\[Yo\], p. 20) we define
$$R(\omega )=\underset{F\in S_\lambda }{inf}R(F).$$
We will prove the following
###### Theorem 2.1.
Yoccoz’s lower bound.
(2.2)
$$\mathrm{log}R(\omega )\ge -B(\omega )-C$$
where $`C`$ is a universal constant (independent of $`\omega `$) and $`B`$ is the Brjuno function (A.3).
Our method of proof of Theorem 2.1 will be to apply an arithmetical lemma due to Davie (see Appendix B) to estimate the small divisors contribution to (2.1). This is actually a variation of the classical majorant series method as used in \[Si, Br\].
###### Proof.
Let $`s\left(z\right)=\sum _{n\ge 1}s_nz^n`$ be the unique solution analytic at $`z=0`$ of the equation $`s\left(z\right)=z+\sigma \left(s\left(z\right)\right)`$, where $`\sigma (z)=\frac{z^2(2-z)}{(1-z)^2}=\sum _{n\ge 2}nz^n`$. The coefficients satisfy
(2.3)
$$s_1=1,\qquad s_n=\underset{m=2}{\overset{n}{\sum }}m\underset{n_1+\cdots +n_m=n,\,n_i\ge 1}{\sum }s_{n_1}\cdots s_{n_m}.$$
Clearly there exist two positive constants $`\gamma _1,\gamma _2`$ such that
(2.4)
$$|s_n|\le \gamma _1\gamma _2^n.$$
From the recurrence relation (2.1) and Bieberbach–De Branges’s bound $`|f_n|\le n`$ for all $`n\ge 2`$ we obtain
(2.5)
$$|h_n|\le \frac{1}{|\lambda ^n-\lambda |}\underset{m=2}{\overset{n}{\sum }}m\underset{n_1+\cdots +n_m=n,\,n_i\ge 1}{\sum }|h_{n_1}|\cdots |h_{n_m}|.$$
We now deduce by induction on $`n`$ that $`|h_n|\le s_ne^{K(n-1)}`$ for $`n\ge 1`$, where $`K`$ is defined in Appendix B. If we assume this holds for all $`n^{\prime }<n`$ then the above inequality gives
(2.6)
$$|h_n|\le \frac{1}{|\lambda ^n-\lambda |}\underset{m=2}{\overset{n}{\sum }}m\underset{n_1+\cdots +n_m=n,\,n_i\ge 1}{\sum }s_{n_1}\cdots s_{n_m}e^{K(n_1-1)+\cdots +K(n_m-1)}.$$
But $`K(n_1-1)+\cdots +K(n_m-1)\le K(n-2)\le K(n-1)+\mathrm{log}|\lambda ^n-\lambda |`$ and we deduce that
(2.7)
$$|h_n|\le e^{K(n-1)}\underset{m=2}{\overset{n}{\sum }}m\underset{n_1+\cdots +n_m=n,\,n_i\ge 1}{\sum }s_{n_1}\cdots s_{n_m}=s_ne^{K(n-1)},$$
as required. Theorem 2.1 then follows from the fact that $`n^{-1}K(n)\le B(\omega )+\gamma _3`$ for some universal constant $`\gamma _3>0`$ (Davie’s lemma, Appendix B).
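The majorant recursion (2.3) is easy to evaluate numerically; the following sketch computes the coefficients $`s_n`$ by dynamic programming over the powers of $`s(z)`$ and checks the geometric bound (2.4) by printing $`s_n^{1/n}`$, which should approach the constant $`\gamma _2`$. The truncation order is an arbitrary illustrative choice.

```python
# Majorant coefficients s_n from (2.3), computed by building the powers
# s(z)^m order by order; s_n^{1/n} should approach the constant gamma_2
# of the geometric bound (2.4). Truncation order is an arbitrary choice.

def majorant_coeffs(N):
    s = [0.0, 1.0] + [0.0] * (N - 1)        # s_1 = 1
    for n in range(2, N + 1):
        pw = s[:n + 1]                      # pw[j] = [z^j] s(z)^m, here m = 1
        total = 0.0
        for m in range(2, n + 1):
            new = [0.0] * (n + 1)
            for i in range(1, n):
                if pw[i]:
                    for j in range(1, n - i + 1):
                        new[i + j] += pw[i] * s[j]
            pw = new
            total += m * pw[n]              # m * [z^n] s(z)^m
        s[n] = total
    return s

s = majorant_coeffs(25)
for n in (5, 10, 15, 20, 25):
    print(n, s[n] ** (1.0 / n))
```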
### 2.2. The ultradifferentiable case
A classical result of Borel says that the map $`J:𝒞^{\infty }([-1,1],\mathbb{R})\to \mathbb{R}[[x]]`$ which associates to $`f`$ its Taylor series at $`0`$ is surjective. On the other hand, $`\mathbb{C}\{z\}=\varinjlim _{r>0}𝒪(𝔻_r)`$, where $`𝔻_r=\{z\in \mathbb{C},|z|<r\}`$ and $`𝒪(𝔻_r)`$ is the $`\mathbb{C}`$–vector space of $`\mathbb{C}`$–valued functions analytic in $`𝔻_r`$. Between $`\mathbb{C}[[z]]`$ and $`\mathbb{C}\{z\}`$ one has many important algebras of “ultradifferentiable” power series (i.e. asymptotic expansions at $`z=0`$ of functions which are “between” $`𝒞^{\infty }`$ and $`\mathbb{C}\{z\}`$).
In this part we will study the case $`A_1`$ or $`A_2`$ (or both) is neither $`z\mathbb{C}\{z\}`$ nor $`z\mathbb{C}[[z]]`$ but a general ultradifferentiable algebra $`z\mathbb{C}[[z]]_{(M_n)}`$ defined as follows.
Let $`(M_n)_{n1}`$ be a sequence of positive real numbers such that:
0. $`\underset{n\ge 1}{inf}M_n^{1/n}>0`$;
1. There exists $`C_1>0`$ such that $`M_{n+1}\le C_1^{n+1}M_n`$ for all $`n\ge 1`$;
2. The sequence $`(M_n)_{n1}`$ is logarithmically convex;
3. $`M_nM_m\le M_{m+n-1}`$ for all $`m,n\ge 1`$.
###### Definition 2.2.
Let $`f=\sum _{n\ge 1}f_nz^n\in z\mathbb{C}[[z]]`$; $`f`$ belongs to the algebra $`z\mathbb{C}[[z]]_{(M_n)}`$ if there exist two positive constants $`c_1,c_2`$ such that
(2.8)
$$|f_n|\le c_1c_2^nM_n\text{ for all }n\ge 1.$$
The role of the above assumptions on the sequence $`(M_n)_{n\ge 1}`$ is the following: 0. assures that $`z\mathbb{C}\{z\}\subseteq z\mathbb{C}[[z]]_{(M_n)}`$; 1. implies that $`z\mathbb{C}[[z]]_{(M_n)}`$ is stable for derivation. Condition 2. means that $`\mathrm{log}M_n`$ is convex, i.e. that the sequence $`(M_{n+1}/M_n)`$ is increasing; it implies that $`z\mathbb{C}[[z]]_{(M_n)_{n\ge 1}}`$ is an algebra, i.e. stable by multiplication. Condition 3. implies that this algebra is closed for composition: if $`f,g\in z\mathbb{C}[[z]]_{(M_n)_{n\ge 1}}`$ then $`f\circ g\in z\mathbb{C}[[z]]_{(M_n)_{n\ge 1}}`$. This is a very natural assumption since we will study a conjugacy problem.
Let $`s>0`$. A very important example of ultradifferentiable algebra is given by the algebra of Gevrey–$`s`$ series which is obtained choosing $`M_n=(n!)^s`$. It is easy to check that the assumptions 0.–3. are verified. But also more rapidly growing sequences may be considered such as $`M_n=n^{an^b}`$ with $`a>0`$ and $`1<b<2`$.
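The assumptions 0.–3. are indeed straightforward to check for the Gevrey choice; the following sketch verifies them numerically on a finite range of indices (the value of $`s`$ and the range are arbitrary test choices).

```python
from math import factorial, log

# Finite-range check of assumptions 0.-3. for M_n = (n!)^s; the value
# s = 1/2 and the index range are arbitrary test choices.

s, N_MAX = 0.5, 12
M = lambda n: factorial(n) ** s

assert min(M(n) ** (1.0 / n) for n in range(1, N_MAX)) > 0              # 0.
assert all(M(n + 1) <= 2.0 ** (n + 1) * M(n) for n in range(1, N_MAX))  # 1. (C_1 = 2)
assert all(log(M(n)) + log(M(n + 2)) >= 2 * log(M(n + 1))
           for n in range(1, N_MAX))                                    # 2. log-convexity
assert all(M(n) * M(m) <= M(n + m - 1)
           for n in range(1, N_MAX) for m in range(1, N_MAX))           # 3.
print("M_n = (n!)^s satisfies 0.-3. on the tested range")
```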
We then have the following
###### Theorem 2.3.
1. If $`F\in z\mathbb{C}[[z]]_{(M_n)}`$ and $`\omega `$ is a Brjuno number then also the linearization $`H`$ belongs to the same algebra $`z\mathbb{C}[[z]]_{(M_n)}`$.
2. If $`F\in z\mathbb{C}\{z\}`$ and $`\omega `$ verifies
(2.9)
$$\underset{n\to +\infty }{lim\; sup}\left(\underset{k=0}{\overset{k(n)}{\sum }}\frac{\mathrm{log}q_{k+1}}{q_k}-\frac{1}{n}\mathrm{log}M_n\right)<+\infty $$
where $`k(n)`$ is defined by the condition $`q_{k(n)}\le n<q_{k(n)+1}`$, then the linearization $`H\in z\mathbb{C}[[z]]_{(M_n)}`$.
3. Let $`F\in z\mathbb{C}[[z]]_{(N_n)}`$, where the sequence $`(N_n)`$ verifies 0,1,2,3 and is asymptotically bounded by the sequence $`(M_n)`$ (i.e. $`N_n\le M_n`$ for all sufficiently large $`n`$). If $`\omega `$ verifies
(2.10)
$$\underset{n\to +\infty }{lim\; sup}\left(\underset{k=0}{\overset{k(n)}{\sum }}\frac{\mathrm{log}q_{k+1}}{q_k}-\frac{1}{n}\mathrm{log}\frac{M_n}{N_n}\right)<+\infty $$
where $`k(n)`$ is defined by the condition $`q_{k(n)}\le n<q_{k(n)+1}`$, then the linearization $`H\in z\mathbb{C}[[z]]_{(M_n)}`$.
Note that conditions (2.9) and (2.10) are generally weaker than the Brjuno condition. For example if given $`F`$ analytic one only requires the linearization $`H`$ to be Gevrey–$`s`$ then one can allow the denominators $`q_k`$ of the continued fraction expansion of $`\omega `$ to verify $`q_{k+1}=𝒪(e^{\sigma q_k})`$ for all $`0<\sigma \le s`$ whereas an exponential growth rate of the denominators of the convergents is clearly forbidden from the Brjuno condition. If the linearization is required only to belong to the class $`z\mathbb{C}[[z]]_{(M_n)}`$ with $`M_n=n^{an^b}`$, with $`a>0`$ and $`1<b<2`$, one can even have $`q_{k+1}=𝒪(e^{\alpha q_k^\beta })`$ for all $`\alpha >0`$ and $`1<\beta <b`$ and the series $`\sum _{k\ge 0}\frac{\mathrm{log}q_{k+1}}{q_k^b}`$ converges. This kind of series have been studied in detail in \[MMY\].
###### Proof.
We only prove (2.10) which clearly implies (2.9) (choosing $`N_n\equiv 1`$) and also assertion 1. (choosing $`M_n\equiv N_n`$).
Since it is not restrictive to assume $`c_1\ge 1`$ and $`c_2\ge 1`$ in $`|f_n|\le c_1c_2^nN_n`$ one can immediately check by induction on $`n`$ that $`|h_n|\le c_1^{n-1}c_2^{2n-2}s_nN_ne^{K(n-1)}`$, where $`s_n`$ is defined in (2.3). Thus by (2.4) and Davie’s lemma one has
$$\frac{1}{n}\mathrm{log}\frac{|h_n|}{M_n}\le c_3+\frac{1}{n}\mathrm{log}\frac{N_n}{M_n}+\underset{k=0}{\overset{k(n)}{\sum }}\frac{\mathrm{log}q_{k+1}}{q_k}$$
for some suitable constant $`c_3>0`$. ∎
Problem. Are the arithmetical conditions stated in Theorem 2.3 optimal? In particular is it true that given any algebra $`A=z\mathbb{C}[[z]]_{(M_n)}`$ and $`F\in A`$ then $`H\in A`$ if and only if $`\omega `$ is a Brjuno number?
We believe that this problem deserves further investigations and that some surprising results may be found. In the next two sections we will give some preliminary results.
### 2.3. A Gevrey–like class where the linear and non linear problem have the same sufficient arithmetical condition
Let $`\mathbb{C}[[z]]_s`$ denote the algebra of Gevrey–$`s`$ complex formal power series, $`s>0`$. If $`s^{\prime }>s>0`$ then $`z\mathbb{C}[[z]]_s\subset z\mathbb{C}[[z]]_{s^{\prime }}`$; let
$$A_s=\underset{s^{\prime }>s}{\bigcap }z\mathbb{C}[[z]]_{s^{\prime }}.$$
Clearly $`A_s`$ is an algebra stable w.r.t. derivative and composition. This algebra can be equivalently characterized requiring that given $`f\left(z\right)=\sum _{n\ge 1}f_nz^n\in z\mathbb{C}[[z]]`$ one has
(2.11)
$$\underset{n\to \infty }{lim\; sup}\frac{\mathrm{log}|f_n|}{n\mathrm{log}n}\le s$$
Consider Euler’s derivative (see \[Du\], section 4)
(2.12)
$$(\delta _\lambda f)(z)=\underset{n=2}{\overset{\infty }{\sum }}(\lambda ^n-\lambda )f_nz^n,$$
with $`\lambda =e^{2\pi i\omega }`$. It acts linearly on $`zA_s`$ and it is a linear automorphism of $`zA_s`$ if and only if
(2.13)
$$\underset{k\to \infty }{lim}\frac{\mathrm{log}q_{k+1}}{q_k\mathrm{log}q_k}=0$$
where, as usual, $`\left(q_k\right)_k`$ is the sequence of the denominators of the convergents of $`\omega `$. This fact can be easily checked by applying the law of the best approximation (Lemma A.3, Appendix A) and the charaterization (2.11) to
$$h(z)=(\delta _\lambda ^{-1}f)(z)=\underset{n\ge 2}{\sum }\frac{f_n}{\lambda ^n-\lambda }z^n.$$
Note that the arithmetical condition $`\mathrm{log}q_{k+1}=o\left(q_k\mathrm{log}q_k\right)`$ is much weaker than Brjuno’s condition.
We now consider the Siegel problem associated to a germ $`FA_s`$. Applying the third statement of Theorem 2.3 with $`N_n=\left(n!\right)^{s+\eta }`$ and $`M_n=\left(n!\right)^{s+ϵ}`$ for any positive fixed $`ϵ>\eta >0`$ one finds that if the following arithmetical condition is satisfied
(2.14)
$$\underset{k\to \infty }{lim}\frac{1}{\mathrm{log}q_k}\underset{i=0}{\overset{k}{\sum }}\frac{\mathrm{log}q_{i+1}}{q_i}=0$$
then the linearization $`H_F`$ also belongs to $`A_s`$.<sup>2</sup><sup>2</sup>2In Theorem 2.3 we proved that a sufficient condition with this choice of $`M_n`$ and $`N_n`$ is
$$\underset{n\to +\infty }{lim\; sup}\left(\underset{i=0}{\overset{k(n)}{\sum }}\frac{\mathrm{log}q_{i+1}}{q_i}-\frac{ϵ-\eta }{n}\mathrm{log}\left(n!\right)\right)\le C<+\infty $$
which can be rewritten as
$$\underset{n\to +\infty }{lim\; sup}\left(\underset{i=0}{\overset{k(n)}{\sum }}\frac{\mathrm{log}q_{i+1}}{q_i}-\left(ϵ-\eta \right)\mathrm{log}q_{k\left(n\right)}-C\right)\le 0$$
from which (2.14) is just obtained dividing by $`\mathrm{log}q_{k\left(n\right)}`$.
The equivalence of (2.14) and (2.13) is the object of the following
###### Lemma 2.4.
Let $`\left(q_l\right)_{l0}`$ be the sequence of denominators of the convergents of $`\omega `$. The following statements are all equivalent:
1. $`\underset{n\to \infty }{lim}\frac{1}{\mathrm{log}n}\sum _{l=0}^{k\left(n\right)}\frac{\mathrm{log}q_{l+1}}{q_l}=0`$
2. $`\sum _{l=0}^{k\left(n\right)}\frac{\mathrm{log}q_{l+1}}{q_l}=o\left(\mathrm{log}q_{k(n)}\right)`$
3. $`\mathrm{log}q_{k+1}=o\left(q_k\mathrm{log}q_k\right)`$
###### Proof.
1. $`\Rightarrow `$ 2. is trivial (choose $`n=q_{k\left(n\right)}`$).
2. $`\Rightarrow `$ 3. Writing for short $`k`$ instead of $`k\left(n\right)`$
$`{\displaystyle \frac{1}{\mathrm{log}q_k}}{\displaystyle \underset{l=0}{\overset{k}{\sum }}}{\displaystyle \frac{\mathrm{log}q_{l+1}}{q_l}}`$ $`={\displaystyle \frac{\mathrm{log}q_{k+1}}{q_k\mathrm{log}q_k}}+{\displaystyle \frac{1}{\mathrm{log}q_k}}{\displaystyle \underset{l=0}{\overset{k-1}{\sum }}}{\displaystyle \frac{\mathrm{log}q_{l+1}}{q_l}}`$
$`={\displaystyle \frac{\mathrm{log}q_{k+1}}{q_k\mathrm{log}q_k}}+{\displaystyle \frac{o\left(\mathrm{log}q_{k-1}\right)}{\mathrm{log}q_k}}`$
Since $`\underset{k\to \infty }{lim}\frac{o\left(\mathrm{log}q_{k-1}\right)}{\mathrm{log}q_k}=0`$ we get 3.
3. $`\Rightarrow `$ 1. First of all note that since $`q_{k\left(n\right)}\le n`$, 2. trivially implies 1. Thus it is enough to show that 3. $`\Rightarrow `$ 2.
$`\mathrm{log}q_{k+1}=o\left(q_k\mathrm{log}q_k\right)`$ means:
$$\forall ϵ>0\;\exists \,\widehat{n}\left(ϵ\right)\text{ such that }\forall l>\widehat{n}\left(ϵ\right)\;\frac{\mathrm{log}q_{l+1}}{q_l\mathrm{log}q_l}<ϵ$$
If $`\mathrm{log}q_{l+1}<aq_l^\alpha `$ for some positive constants $`a`$ and $`\alpha <1`$ then:
$$\frac{1}{\mathrm{log}q_k}\underset{l=0}{\overset{k}{\sum }}\frac{\mathrm{log}q_{l+1}}{q_l}\le \frac{a}{\mathrm{log}q_k}\underset{l=0}{\overset{\infty }{\sum }}\frac{1}{q_l^{1-\alpha }}\le \frac{aC}{\mathrm{log}q_k}$$
for some universal constant $`C`$ thanks to (A.2).
If $`\mathrm{log}q_{l+1}\ge aq_l^\alpha `$ and $`\frac{1}{2}<\alpha <1`$, consider the decomposition:
(2.15)
$$\frac{1}{\mathrm{log}q_k}\underset{l=0}{\overset{k}{\sum }}\frac{\mathrm{log}q_{l+1}}{q_l}=\underbrace{\frac{\mathrm{log}q_{k+1}}{q_k\mathrm{log}q_k}}_{1}+\underbrace{\frac{1}{\mathrm{log}q_k}\underset{l=0}{\overset{\widehat{n}\left(ϵ\right)}{\sum }}\frac{\mathrm{log}q_{l+1}}{q_l}}_{2}+\underbrace{\frac{1}{\mathrm{log}q_k}\underset{l=\widehat{n}\left(ϵ\right)+1}{\overset{k-1}{\sum }}\frac{\mathrm{log}q_{l+1}}{q_l}}_{3}$$
if $`k-1\ge \widehat{n}\left(ϵ\right)+1`$; otherwise the second and the third terms are replaced by $`\frac{1}{\mathrm{log}q_k}\sum _{l=0}^{k-1}\frac{\mathrm{log}q_{l+1}}{q_l}`$. The third term can be bounded from above by:
$$\frac{1}{\mathrm{log}q_k}\underset{l=\widehat{n}\left(ϵ\right)+1}{\overset{k-1}{\sum }}\frac{\mathrm{log}q_{l+1}}{q_l}\le \frac{ϵ}{\mathrm{log}q_k}\underset{l=\widehat{n}\left(ϵ\right)+1}{\overset{k-1}{\sum }}\mathrm{log}q_l\le ϵ\left(k-1-\widehat{n}\left(ϵ\right)\right)\frac{\mathrm{log}q_{k-1}}{\mathrm{log}q_k}.$$
Since $`\mathrm{log}q_j\le \frac{2}{e}q_j^{\frac{1}{2}}`$, from (A.1) and the hypothesis $`\mathrm{log}q_{l+1}\ge aq_l^\alpha `$ we obtain:
$`{\displaystyle \frac{1}{\mathrm{log}q_k}}{\displaystyle \underset{l=\widehat{n}\left(ϵ\right)+1}{\overset{k-1}{\sum }}}{\displaystyle \frac{\mathrm{log}q_{l+1}}{q_l}}`$ $`\le \left(k-1\right){\displaystyle \frac{ϵ}{aq_{k-1}^\alpha }}{\displaystyle \frac{2}{e}}q_{k-1}^{\frac{1}{2}}`$
$`\le {\displaystyle \frac{2ϵ}{ea}}\left(k-1\right)e^{-\left(k-2\right)\left(\alpha -\frac{1}{2}\right)\mathrm{log}G}\le ϵC_1`$
with $`C_1=\frac{2}{ea}\frac{e^{-1+\left(\alpha -\frac{1}{2}\right)\mathrm{log}G}}{\left(\alpha -\frac{1}{2}\right)\mathrm{log}G}`$, $`G=\frac{\sqrt{5}+1}{2}`$.
The second term of (2.15) is bounded by
$$\frac{1}{\mathrm{log}q_k}\underset{l=0}{\overset{\widehat{n}\left(ϵ\right)}{\sum }}\frac{\mathrm{log}q_{l+1}}{q_l}\le \frac{C_2}{\left(k-1\right)\mathrm{log}G-\mathrm{log}2}\le ϵC_2$$
if $`k>k\left(ϵ\right)>\widehat{n}(ϵ)`$, for some positive constant $`C_2`$.
Putting these estimates together we can bound (2.15) with:
$$\frac{1}{\mathrm{log}q_k}\underset{l=0}{\overset{k}{\sum }}\frac{\mathrm{log}q_{l+1}}{q_l}\le ϵ+ϵC_1+ϵC_2$$
for all $`ϵ>0`$ and for all $`k>k\left(ϵ\right)`$, thus $`\sum _{l=0}^k\frac{\mathrm{log}q_{l+1}}{q_l}=o\left(\mathrm{log}q_k\right)`$
### 2.4. Divergence of the modified linearization power series when the arithmetical conditions of Theorem 2.3 are not satisfied
In Theorem 2.3 we proved that if $`F\in z\mathbb{C}\{z\}`$ and $`\omega `$ verifies condition (2.9) then the linearization $`H\in z\mathbb{C}[[z]]_{(M_n)}`$. The power series coefficients $`h_n`$ of $`H`$ are given by (2.1).
Let us define the sequence of strictly positive real numbers $`(\stackrel{~}{h}_n)_{n\ge 0}`$ as follows:
(2.16)
$$\stackrel{~}{h}_0=1,\qquad \stackrel{~}{h}_n=\frac{1}{|\lambda ^n-1|}\underset{m=2}{\overset{n+1}{\sum }}|f_m|\underset{n_1+\cdots +n_m=n+1-m,\,n_i\ge 0}{\sum }\stackrel{~}{h}_{n_1}\cdots \stackrel{~}{h}_{n_m}.$$
Clearly $`|h_n|\le \stackrel{~}{h}_{n-1}`$. Let $`\stackrel{~}{H}`$ denote the formal power series associated to the sequence $`(\stackrel{~}{h}_n)_{n\ge 0}`$
(2.17)
$$\stackrel{~}{H}(z)=\underset{n=1}{\overset{\infty }{\sum }}\stackrel{~}{h}_{n-1}z^n$$
Following closely \[Yo\], Appendice 2, in this section we will prove that if condition (2.9) is violated then $`\stackrel{~}{H}`$ doesn’t belong to $`z\mathbb{C}[[z]]_{(M_n)}`$.
Note that since it is not restrictive to assume that $`|f_2|\ge 1`$ one has
(2.18)
$$\stackrel{~}{h}_n>\underset{k=0}{\overset{n-1}{\sum }}\stackrel{~}{h}_k\stackrel{~}{h}_{n-1-k}\ge \stackrel{~}{h}_{n-1},$$
thus the sequence $`(\stackrel{~}{h}_n)_{n\ge 0}`$ is strictly increasing.
Let $`\omega `$ be an irrational number which violates (2.9) and let $`U=\{q_j:q_{j+1}\ge \left(q_j+1\right)^2\}`$ where $`\left(q_j\right)_{j\ge 1}`$ are the denominators of the convergents of $`\omega `$. Since $`\underset{n}{inf}\frac{1}{n}\mathrm{log}M_n=c>-\infty `$ we have:
$$\underset{q_j\notin U,\,j=0}{\overset{k\left(n\right)}{\sum }}\frac{\mathrm{log}q_{j+1}}{q_j}-\frac{\mathrm{log}M_n}{n}\le \underset{q_j\notin U,\,j=0}{\overset{k\left(n\right)}{\sum }}\frac{2\mathrm{log}\left(q_j+1\right)}{q_j}-c=\stackrel{~}{c}<+\infty $$
where $`k\left(n\right)`$ is defined by: $`q_{k\left(n\right)}\le n<q_{k\left(n\right)+1}`$.
On the other hand $`\underset{n\to \infty }{lim\; sup}\left(\sum _{j=0}^{k\left(n\right)}\frac{\mathrm{log}q_{j+1}}{q_j}-\frac{\mathrm{log}M_n}{n}\right)=+\infty `$ thus
(2.19)
$$\underset{n\to \infty }{lim\; sup}\left(\underset{q_j\in U:\,j=0}{\overset{k\left(n\right)}{\sum }}\frac{\mathrm{log}q_{j+1}}{q_j}-\frac{\mathrm{log}M_n}{n}\right)=+\infty $$
this implies that $`U`$ is not empty. From now on the elements of $`U`$ will be denoted by: $`q_0^{\prime }<q_1^{\prime }<\cdots `$
Let $`n_i=\left\lfloor \frac{q_{i+1}^{\prime }}{q_i^{\prime }+1}\right\rfloor `$.
###### Lemma 2.5.
The subsequence $`\left(\stackrel{~}{h}_{q_i^{\prime }}\right)_{i\ge 0}`$ verifies:
(2.20)
$$\stackrel{~}{h}_{q_{i+1}^{\prime }}\ge \frac{1}{|\lambda ^{q_{i+1}^{\prime }}-1|}\stackrel{~}{h}_{q_i^{\prime }}^{n_i}.$$
###### Proof.
From the definition (2.16) and the assumption $`|f_2|\ge 1`$ it follows that
$$\stackrel{~}{h}_{2s-1}\ge \frac{|f_2|}{|\lambda ^{2s-1}-1|}\stackrel{~}{h}_{s-1}^2\ge \frac{\stackrel{~}{h}_{s-1}^2}{2}$$
thus for all $`i\ge 2`$ and $`s\ge 1`$ one has
(2.21)
$$\stackrel{~}{h}_{is-1}\ge \frac{\stackrel{~}{h}_{s-1}^i}{2}.$$
Choosing $`s=q_i^{\prime }+1`$, $`i=n_i`$ this leads to the desired estimate:
$$\stackrel{~}{h}_{q_{i+1}^{\prime }}\ge \frac{2|f_2|}{|\lambda ^{q_{i+1}^{\prime }}-1|}\stackrel{~}{h}_{q_{i+1}^{\prime }-1}\ge \frac{2|f_2|}{|\lambda ^{q_{i+1}^{\prime }}-1|}\stackrel{~}{h}_{n_i(q_i^{\prime }+1)-1}\ge \frac{\stackrel{~}{h}_{q_i^{\prime }}^{n_i}}{|\lambda ^{q_{i+1}^{\prime }}-1|}.$$
By means of the previous lemma we can now prove that $`\underset{n\to \infty }{lim\; sup}\frac{1}{n}\mathrm{log}\frac{\stackrel{~}{h}_n}{M_n}=+\infty `$.
Let $`\alpha _i=n_i\frac{q_i^{\prime }}{q_{i+1}^{\prime }}`$. Then $`1\ge \alpha _i\ge \left(1-\frac{1}{q_i^{\prime }+1}\right)^2`$, which assures that $`\prod _{i\ge 0}\alpha _i=c`$ for some finite constant $`c`$ (depending on $`\omega `$). Then from (2.20) we get:
$$\frac{1}{q_{i+1}^{\prime }}\mathrm{log}\frac{\stackrel{~}{h}_{q_{i+1}^{\prime }}}{M_{q_{i+1}^{\prime }}}\ge c\left[-\underset{j=1}{\overset{i+1}{\sum }}\frac{\mathrm{log}|\lambda ^{q_j^{\prime }}-1|}{q_j^{\prime }}-\frac{1}{q_{i+1}^{\prime }}\mathrm{log}M_{q_{i+1}^{\prime }}\right]+c_4$$
which diverges as $`i\to \infty `$.
## Appendix A continued fractions and Brjuno’s numbers
Here we summarize briefly some basic notions on continued fraction development and we define the Brjuno numbers.
For a real number $`\omega `$, we note $`\left\lfloor \omega \right\rfloor `$ its integer part and $`\{\omega \}=\omega -\left\lfloor \omega \right\rfloor `$ its fractional part. We define the Gauss’ continued fraction algorithm:
* $`a_0=\left\lfloor \omega \right\rfloor `$ and $`\omega _0=\{\omega \}`$
* for all $`n\ge 1`$: $`a_n=\left\lfloor \frac{1}{\omega _{n-1}}\right\rfloor `$ and $`\omega _n=\{\frac{1}{\omega _{n-1}}\}`$
namely the following representation of $`\omega `$:
$$\omega =a_0+\omega _0=a_0+\frac{1}{a_1+\omega _1}=\mathrm{}$$
For short we use the notation $`\omega =[a_0,a_1,\mathrm{},a_n,\mathrm{}]`$.
It is well known that to every expression $`[a_0,a_1,\mathrm{},a_n,\mathrm{}]`$ there corresponds a unique irrational number. Let us define the sequences $`\left(p_n\right)_n`$ and $`\left(q_n\right)_n`$ as follows:
$`q_{-2}=1\text{, }q_{-1}=0\text{, }q_n=a_nq_{n-1}+q_{n-2}`$
$`p_{-2}=0\text{, }p_{-1}=1\text{, }p_n=a_np_{n-1}+p_{n-2}`$
It is easy to show that: $`\frac{p_n}{q_n}=[a_0,a_1,\mathrm{},a_n]`$.
For any given $`\omega `$ the sequence $`\left(\frac{p_n}{q_n}\right)_n`$ satisfies
(A.1)
$$q_n\ge \left(\frac{\sqrt{5}+1}{2}\right)^{n-1},\;n\ge 1$$
thus
(A.2)
$$\underset{k\ge 0}{\sum }\frac{1}{q_k}\le \frac{\sqrt{5}+5}{2}\text{ and }\underset{k\ge 0}{\sum }\frac{\mathrm{log}q_k}{q_k}\le \frac{1}{e}\frac{2^{\frac{5}{4}}}{2^{\frac{3}{4}}-1},$$
and it has the following important properties:
###### Lemma A.1.
For all $`n\ge 1`$: $`\frac{1}{q_n+q_{n+1}}\le |q_n\omega -p_n|<\frac{1}{q_{n+1}}`$.
###### Lemma A.2.
If for some integers $`r`$ and $`s`$, $`\left|\omega -\frac{r}{s}\right|<\frac{1}{2s^2}`$, then $`\frac{r}{s}=\frac{p_k}{q_k}`$ for some integer $`k`$.
###### Lemma A.3.
The law of best approximation: if $`1\le q\le q_n`$, $`(p,q)\ne (p_n,q_n)`$ and $`n\ge 1`$ then $`|q\omega -p|>|q_n\omega -p_n|`$. Moreover if $`(p,q)\ne (p_{n-1},q_{n-1})`$ then $`|q\omega -p|>|q_{n-1}\omega -p_{n-1}|`$.
For a proof of these standard lemmas we refer to \[HW\].
The growth rate of $`\left(q_n\right)_n`$ describes how rapidly $`\omega `$ can be approximated by rational numbers. For example $`\omega `$ is a diophantine number \[Si\] if and only if there exist two constants $`c>0`$ and $`\tau \ge 1`$ such that $`q_{n+1}\le cq_n^\tau `$ for all $`n\ge 0`$.
To every $`\omega `$ we associate, using its convergents, an arithmetical function:
(A.3)
$$B\left(\omega \right)=\underset{n\ge 0}{\sum }\frac{\mathrm{log}q_{n+1}}{q_n}$$
We say that $`\omega `$ is a Brjuno number or that it satisfies the Brjuno condition if $`B\left(\omega \right)<+\infty `$. The Brjuno condition gives a limitation to the growth rate of $`\left(q_n\right)_n`$. It was originally introduced by A.D.Brjuno \[Br\]. The Brjuno condition is weaker than the Diophantine condition: for example if $`a_{n+1}\le ce^{a_n}`$ for some positive constant $`c`$ and for all $`n\ge 0`$ then $`\omega =[a_0,a_1,\mathrm{},a_n,\mathrm{}]`$ is a Brjuno number but is not a diophantine number.
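The Gauss algorithm, the convergents $`p_n/q_n`$ and a truncated Brjuno sum (A.3) are summarized in the following sketch (floating point arithmetic limits the reliable depth of the expansion; the depth used is an illustrative choice).

```python
import math

# Gauss algorithm, convergents p_n/q_n and a truncated Brjuno sum (A.3).
# Floating point arithmetic makes the partial quotients unreliable at
# large depth; the depth below is an illustrative choice.

def gauss_cf(omega, depth):
    a, x = [], omega
    for _ in range(depth + 1):
        a.append(math.floor(x))
        frac = x - math.floor(x)
        if frac == 0:
            break
        x = 1.0 / frac
    return a

def convergents(a):
    p, q = [0, 1], [1, 0]      # p_{-2}=0, p_{-1}=1, q_{-2}=1, q_{-1}=0
    for an in a:
        p.append(an * p[-1] + p[-2])
        q.append(an * q[-1] + q[-2])
    return p[2:], q[2:]

def brjuno_truncated(omega, depth):
    _, q = convergents(gauss_cf(omega, depth))
    return sum(math.log(q[n + 1]) / q[n] for n in range(len(q) - 1))

golden = (math.sqrt(5) - 1) / 2            # all partial quotients equal 1
print("truncated B(golden) =", brjuno_truncated(golden, 20))
```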
## Appendix B Davie’s lemma
In this appendix we summarize the result of \[Da\] that we use, in particular Lemma B.4. Let $`\omega \in \mathbb{R}\setminus \mathbb{Q}`$ and let $`\left\{q_n\right\}_n`$ be the partial denominators of the continued fraction for $`\omega `$ in the Gauss’ development.
###### Definition B.1.
Let $`A_k=\{n\ge 0\mid \parallel n\omega \parallel \le \frac{1}{8q_k}\}`$, where $`\parallel \cdot \parallel `$ denotes the distance to the nearest integer, $`E_k=\mathrm{max}(q_k,q_{k+1}/4)`$ and $`\eta _k=q_k/E_k`$. Let $`A_k^{\prime }`$ be the set of non negative integers $`j`$ such that either $`j\in A_k`$ or for some $`j_1`$ and $`j_2`$ in $`A_k`$, with $`j_2-j_1<E_k`$, one has $`j_1<j<j_2`$ and $`q_k`$ divides $`j-j_1`$. For any non negative integer $`n`$ define:
$$l\left(n\right)=\mathrm{max}\{\left(1+\eta _k\right)\frac{n}{q_k}-2,\left(m_n\eta _k+n\right)\frac{1}{q_k}-1\}$$
where $`m_n=\mathrm{max}\{j\mid 0\le j\le n,\,j\in A_k^{\prime }\}`$. We then define the function $`h_k\left(n\right)`$
$$h_k\left(n\right)=\{\begin{array}{cc}\frac{m_n+\eta _kn}{q_k}-1\hfill & \text{if }m_n+q_k\in A_k^{\prime }\hfill \\ l\left(n\right)\hfill & \text{if }m_n+q_k\notin A_k^{\prime }\hfill \end{array}$$
The function $`h_k\left(n\right)`$ has some properties collected in the following proposition
###### Proposition B.2.
The function $`h_k\left(n\right)`$ verifies:
1. $`\frac{\left(1+\eta _k\right)n}{q_k}-2\le h_k\left(n\right)\le \frac{\left(1+\eta _k\right)n}{q_k}-1`$ for all $`n`$.
2. If $`n>0`$ and $`n\in A_k^{\prime }`$ then $`h_k\left(n\right)\ge h_k\left(n-1\right)+1`$.
3. $`h_k\left(n\right)\ge h_k\left(n-1\right)`$ for all $`n>0`$.
4. $`h_k\left(n+q_k\right)\ge h_k\left(n\right)+1`$ for all $`n`$.
Now we set $`g_k\left(n\right)=\mathrm{max}(h_k\left(n\right),\frac{n}{q_k})`$ and we state the following proposition
###### Proposition B.3.
The function $`g_k`$ is non negative and verifies:
1. $`g_k\left(0\right)=0`$
2. $`g_k\left(n\right)\le \frac{\left(1+\eta _k\right)n}{q_k}`$ for all $`n`$
3. $`g_k\left(n_1\right)+g_k\left(n_2\right)\le g_k\left(n_1+n_2\right)`$ for all $`n_1`$ and $`n_2`$
4. if $`n\in A_k`$ and $`n>0`$ then $`g_k\left(n\right)\ge g_k\left(n-1\right)+1`$
The proof of these propositions can be found in \[Da\].
Let $`k(n)`$ be defined by the condition $`q_{k(n)}\le n<q_{k(n)+1}`$. Note that $`k`$ is non–decreasing.
###### Lemma B.4.
Davie’s lemma Let
$$K(n)=n\mathrm{log}2+\underset{k=0}{\overset{k(n)}{\sum }}g_k(n)\mathrm{log}(2q_{k+1}).$$
The function $`K\left(n\right)`$ verifies:
1. There exists a universal constant $`\gamma _3>0`$ such that
$$K(n)\le n\left(\underset{k=0}{\overset{k(n)}{\sum }}\frac{\mathrm{log}q_{k+1}}{q_k}+\gamma _3\right);$$
2. $`K(n_1)+K(n_2)\le K(n_1+n_2)`$ for all $`n_1`$ and $`n_2`$;
3. $`-\mathrm{log}|\lambda ^n-1|\le K(n)-K(n-1)`$.
The proof is a straightforward application of Proposition B.3.
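The following sketch is a literal, finite-range transcription of Definition B.1 and of $`K(n)`$ for a given list of denominators $`q_k`$; it is meant as an executable restatement of the definitions, not as an efficient algorithm, and the truncations involved are illustrative assumptions.

```python
import math

# Finite-range transcription of Definition B.1 and of K(n). The sets A_k
# are built only up to n + q_k, so membership tests near the boundary are
# approximate; q must list enough denominators to cover q_{k(n)+1}.

def dist_to_Z(x):
    return abs(x - round(x))

def g_k(omega, q, k, n):
    E_k = max(q[k], q[k + 1] / 4.0)
    eta_k = q[k] / E_k
    A = [j for j in range(n + q[k] + 1)
         if dist_to_Z(j * omega) <= 1.0 / (8.0 * q[k])]
    Ap = set(A)
    for j1 in A:                            # completion A'_k of A_k
        for j2 in A:
            if 0 < j2 - j1 < E_k:
                Ap.update(j for j in range(j1 + 1, j2)
                          if (j - j1) % q[k] == 0)
    m_n = max(j for j in Ap if j <= n)      # 0 is always in A_k
    l_n = max((1 + eta_k) * n / q[k] - 2, (m_n * eta_k + n) / q[k] - 1)
    h = (m_n + eta_k * n) / q[k] - 1 if (m_n + q[k]) in Ap else l_n
    return max(h, n / q[k])

def davie_K(omega, q, n):
    kn = max(k for k in range(len(q) - 1) if q[k] <= n)
    return n * math.log(2.0) + sum(g_k(omega, q, k, n) * math.log(2.0 * q[k + 1])
                                   for k in range(kn + 1))

golden = (math.sqrt(5) - 1) / 2
fib_q = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]   # denominators for the golden mean
for n in (5, 10, 20, 40):
    print(n, davie_K(golden, fib_q, n) / n)      # compare with B(omega) + gamma_3
```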
# Solvable Matrix Models
Vladimir Kazakov kazakov@physique.ens.fr
<sup>1</sup> Laboratoire de Physique Théorique de l’Ecole Normale Supérieure Unité Mixte du Centre National de la Recherche Scientifique et de l’Ecole Normale Supérieure.
75231 Paris CEDEX, France
We review some old and new methods of reduction of the number of degrees of freedom from $`N^2`$ to $`N`$ in the multi-matrix integrals.
A talk delivered at the MSRI Workshop “Matrix Models and Painlevé Equations”, Berkeley (USA), 1999
LPTENS-00/09
February, 2000
1. Introduction
Multi-matrix integrals of various types appear in many mathematical and physical applications, such as combinatorics of graphs, topology, integrable systems, string theory, theory of mesoscopic systems or statistical mechanics on random surfaces.
A general Q-matrix integral of the form
$$Z=\int \underset{q=1}{\overset{Q}{\prod }}d^{N^2}M_q\mathrm{exp}S(M_1,\mathrm{},M_Q)$$
usually goes over the $`N\times N`$ hermitian, real symmetric or symplectic matrices $`M_q`$ with the action $`S`$ and the measure symmetric under the simultaneous group rotation: $`M_q\to \mathrm{\Omega }^+M_q\mathrm{\Omega }`$. Some other multi-matrix integrals, such as these with complex matrices or with general real matrices, can be reduced to those three basic cases.
We will consider here only the case of hermitean matrices for which $`\mathrm{\Omega }`$ belongs to the $`U(N)`$-group.
In many applications ”to solve” the corresponding matrix model usually means to reduce the number of variables by explicit integrations over most of the variables in such a way that instead of $`QN^2`$ original integrations (matrix elements) one would be left in the large $`N`$ limit only with $`N`$ integration variables. In this case the integration over the rest of the variables can be performed, at least in the widely used large N limit, by means of the saddle point approximation. A more sophisticated double scaling limit is also possible (if possible at all) only after such a reduction. The key of success is in the fact that after reduction the effective action at the saddle point is still of the order $`N^2`$ whereas the corrections given by the logarithm of determinant of the second variation of the action cannot be bigger than $`N`$ (the ”entropy” of the remaining variables). The problem is thus reduced to the solution of the ”classical” saddle point equations, instead of the ”quantum” problem of functional (in the large N limit) integration over the original matrix variables.
Such an explicit reduction of the number of ”degrees of freedom” is in general possible only for a few rather restricted, though physically and mathematically interesting, classes of multi-matrix integrals. The purpose of our present notes is to review the basic old and new methods of such a reduction. Before going to the particular cases let us stress the importance of the search for new methods of such a reduction: any nontrivial finding on this way leads immediately to numerous fruitful applications.
2. Some old examples
The best known example of such a reduction of the number of degrees of freedom is the one matrix integral:
$$Z=\int d^{N^2}M\mathrm{exp}N\mathrm{Tr}S(M)$$
where $`S(M)`$ is an arbitrary function of one variable. Let us use the decomposition:
$$M=\mathrm{\Omega }^+x\mathrm{\Omega }$$
where $`x=diag(x_1,\mathrm{},x_N)`$ is a diagonal matrix of the eigenvalues and $`\mathrm{\Omega }`$ is the $`U(N)`$ group variable. The corresponding (Dyson) measure can be written as:
$$d^{N^2}M=d[\mathrm{\Omega }]_{U(N)}\mathrm{\Delta }^2(x)\underset{k=1}{\overset{N}{\prod }}dx_k$$
where $`\mathrm{\Delta }(x)=\prod _{i>j}(x_i-x_j)`$ is the Van-der-Monde determinant. The integrand as an invariant function does not depend at all on $`\mathrm{\Omega }`$ (the integration over it produces just a group volume factor which we will always omit). The remaining integral over the eigenvalues reads:
$$Z=\int \underset{k=1}{\overset{N}{\prod }}dx_k\mathrm{exp}[NS(x_k)]\mathrm{\Delta }^2(x)$$
In the large N limit the corresponding saddle point equation takes the form
$$\frac{1}{N}\frac{\partial S}{\partial x_k}=S^{\prime }(x_k)+\frac{1}{N}\underset{j\ne k}{\sum }\frac{1}{x_k-x_j}=0$$
These arguments were successfully used for an interesting combinatorial problem: enumeration of graphs of fixed two dimensional topologies , . There exist powerful methods to analyze this equation but it is not our present goal to review them here.
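The saddle point equation above is easy to explore numerically. The following sketch relaxes $`N`$ eigenvalues to the stationary configuration for the Gaussian action $`S(x)=-x^2/2`$ (an illustrative choice), where the equilibrium density is Wigner’s semicircle with support $`[-\sqrt{2},\sqrt{2}]`$ in this normalization.

```python
import numpy as np

# Relaxation to the saddle point for the illustrative Gaussian action
# S(x) = -x^2/2: eigenvalues evolve along the gradient of the effective
# action until S'(x_k) + (1/N) sum_{j != k} 1/(x_k - x_j) = 0. The
# stationary density is Wigner's semicircle with edges at +-sqrt(2).

N, dt, steps = 100, 0.005, 10000
rng = np.random.default_rng(0)
x = np.sort(rng.normal(size=N))

def force(x):
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, np.inf)          # exclude the j = k term
    return -x + (1.0 / N) * np.sum(1.0 / diff, axis=1)

for _ in range(steps):
    x = x + dt * force(x)

print("support approx [%.3f, %.3f]" % (x.min(), x.max()))   # ~ [-1.414, 1.414]
```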
The next fruitful example is the so called two matrix model:
$$Z=\int d^{N^2}Ad^{N^2}B\mathrm{exp}N\mathrm{Tr}\left(-A^2-B^2+cAB+U(A)+V(B)\right)$$
where $`U`$ and $`V`$ are some arbitrary functions of one variable. After the decomposition $`A=\mathrm{\Omega }_1^+x\mathrm{\Omega }_1`$, $`B=\mathrm{\Omega }_2^+y\mathrm{\Omega }_2`$ we are left, due to the term $`\mathrm{Tr}(AB)`$ in the action, with one nontrivial unitary integral over the variable $`\mathrm{\Omega }=\mathrm{\Omega }_1\mathrm{\Omega }_2^+`$. Fortunately, this integral was explicitly calculated by Harish-Chandra, Itzykson and Zuber , :
$$\int d[\mathrm{\Omega }]_{U(N)}\mathrm{exp}\mathrm{Tr}(\mathrm{\Omega }^+x\mathrm{\Omega }y)=\underset{k=1}{\overset{N-1}{\prod }}k!\frac{\underset{ij}{det}e^{x_iy_j}}{\mathrm{\Delta }(x)\mathrm{\Delta }(y)}$$
Substituting (2.1) and the Dyson measure (2.1) into (2.1) we are left again with only $`2N`$ variables $`x_k`$ and $`y_k`$ and we can write again the saddle point equations in the large N limit. They are more complicated than in the one matrix integral but can be nevertheless solved quite explicitly. The first solution of that kind was found in in an indirect way, using the method of orthogonal polynomials, but the direct solution is also possible (see ).
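The Itzykson–Zuber formula can be checked directly for small $`N`$; the sketch below compares a Monte Carlo average over Haar-random $`U(2)`$ matrices with the right-hand side of the formula (for $`N=2`$ the prefactor $`\prod _{k=1}^{N-1}k!`$ equals 1). The test matrices are arbitrary diagonal choices.

```python
import numpy as np

# Monte Carlo check of the Itzykson-Zuber formula for N = 2. Haar-random
# U(2) matrices come from the QR decomposition of complex Gaussians with
# the standard phase fix; x and y are arbitrary diagonal test matrices.

rng = np.random.default_rng(1)

def haar_u(n):
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

x, y = np.diag([0.3, -0.7]), np.diag([1.1, 0.2])
mc = np.mean([np.exp(np.trace(u.conj().T @ x @ u @ y)).real
              for u in (haar_u(2) for _ in range(100_000))])

(x1, x2), (y1, y2) = np.diag(x), np.diag(y)
exact = (np.exp(x1 * y1 + x2 * y2) - np.exp(x1 * y2 + x2 * y1)) \
        / ((x1 - x2) * (y1 - y2))
print(mc, exact)     # the two numbers should agree to MC accuracy
```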
This model was used in to solve exactly the first example of new statistical mechanical models of interacting spins on random planar graphs: in this case it was a model of Ising spins on random planar graphs.
An obvious generalization of the two matrix model is the matrix chain model:
$$Z=\int \underset{q=1}{\overset{Q}{\prod }}d^{N^2}M_q\mathrm{exp}\mathrm{Tr}\left(\underset{q=1}{\overset{Q}{\sum }}V_q(M_q)+\underset{p=2}{\overset{Q}{\sum }}M_{p-1}M_p\right)$$
One easily notices that the same unitary decomposition $`M_q=\mathrm{\Omega }_q^+x_q\mathrm{\Omega }_q`$ leads to $`Q-1`$ independent integrals over the variables $`U_q=\mathrm{\Omega }_{q-1}^+\mathrm{\Omega }_q`$ of the type (2.1). We are left again with only $`QN`$ eigenvalues instead of $`QN^2`$ matrix elements and are ready to apply the saddle point approximation to this integral. This model was first analyzed by the method of orthogonal polynomials by . It was shown in that by special choices of the potential $`V`$ the model can be described by the KP integrable flow with respect to the coupling constant of the potential.
Note that if we imposed the periodicity condition $`M_{Q+1}=M_1`$ on this matrix chain and add the term $`M_QM_1`$ to the action the problem would become much more complicated (and actually not solved so far), since this would give an extra condition $`\prod _qU_q=I`$ making the variables $`U_q`$ not independent.
Another solvable matrix chain describing the statistical RSOS models on random planar graphs was proposed and solved in . Similar models were considered in .
Some multi-matrix models can be reduced to the solvable ones by means of simple matrix integral transformations. The first example of such transformation was described in the paper for the matrix integral describing the Q-state Potts model on random dynamical planar graphs. Its partition function is
$$Z=\int \underset{q=1}{\overset{Q}{\prod }}d^{N^2}M_q\mathrm{exp}\mathrm{Tr}\left(\underset{q=1}{\overset{Q}{\sum }}V_q(M_q)+\underset{p,q=1}{\overset{Q}{\sum }}M_pM_q\right)$$
One can represent the last factor under the integral as
$$\int d^{N^2}X\mathrm{exp}\mathrm{Tr}\left(-\frac{1}{2}X^2+X\underset{q=1}{\overset{Q}{\sum }}M_q\right).$$
Let us consider the case $`V_1=\cdots =V_Q=V`$. Then the whole integral can be expressed as
$$Z=\int d^{N^2}X\mathrm{exp}(-\frac{1}{2}\mathrm{Tr}X^2)\left[\int d^{N^2}M\mathrm{exp}\mathrm{Tr}\left(XM+V(M)\right)\right]^Q$$
The integrals in (2.1) can be reduced to the eigenvalues: in the integral under the power the only nontrivial “angular” integration over the relative $`U(N)`$-”angle” can be done by means of the formula (2.1) and the external one will also depend only on the eigenvalues of $`X`$. The solution of the corresponding saddle point equations was found in and analyzed in and .
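The Gaussian trick used above rests on the identity $`\int d^{N^2}X\mathrm{exp}\mathrm{Tr}(-\frac{1}{2}X^2+XA)=\mathrm{const}\times \mathrm{exp}\mathrm{Tr}(\frac{1}{2}A^2)`$; the following sketch verifies it by Monte Carlo for $`2\times 2`$ hermitian matrices (the test matrix $`A`$ is an arbitrary hermitian choice).

```python
import numpy as np

# Monte Carlo check, for 2x2 hermitian X, that averaging exp Tr(XA)
# against the weight exp(-Tr X^2/2) gives exp Tr(A^2/2). With this
# weight the diagonal entries are N(0,1) and the real/imaginary parts
# of the off-diagonal entry are N(0,1/2). A is an arbitrary test matrix.

rng = np.random.default_rng(2)
n = 400_000
x11, x22 = rng.normal(size=n), rng.normal(size=n)
re = rng.normal(scale=np.sqrt(0.5), size=n)
im = rng.normal(scale=np.sqrt(0.5), size=n)

A = np.array([[0.4, 0.3 - 0.2j], [0.3 + 0.2j, -0.1]])
# Tr(XA) = x11*a11 + x22*a22 + 2*Re(x12 * conj(a12))
tr_XA = (x11 * A[0, 0].real + x22 * A[1, 1].real
         + 2 * (re * A[0, 1].real + im * A[0, 1].imag))

print(np.mean(np.exp(tr_XA)), np.exp(np.trace(A @ A).real / 2))
```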
Combining these methods in the obvious ways one can generalize the large $`N`$ solvability to a certain larger class of multi-matrix models.
3. Matrix Quantum Mechanics
In the limit when $`Q\to \infty `$ and with the special scaling of coupling constants the matrix chain (2.1) becomes matrix quantum mechanics. It is defined by the Hamiltonian
$$\widehat{H}_M=-\mathrm{\Delta }_M+\mathrm{Tr}V(M)$$
where $`\mathrm{\Delta }_M`$ is the usual $`U(N)`$ invariant Laplacian on the homogeneous space of hermitian matrices and the potential $`V(M)`$ can actually explicitly depend on time $`t`$.
The Schroedinger equation can be written in the form of a minimization principle:
$$\underset{\mathrm{\Psi }}{min}\int d^{N^2}M\mathrm{Tr}\left(\frac{1}{2}|\partial _M\mathrm{\Psi }(M)|^2+V(M)|\mathrm{\Psi }(M)|^2\right)$$
To reduce this problem to the eigenvalues we use the $`U(N)`$ symmetry of our model and look for a wave function $`\mathrm{\Psi }(M)`$ transforming according to a certain irreducible representation $`R`$ of $`U(N)`$:
$$\mathrm{\Psi }_R^I(\mathrm{\Omega }^+M\mathrm{\Omega })=\underset{J}{\sum }\mathrm{\Omega }_R^{IJ}\mathrm{\Psi }_R^J(M)$$
where $`\mathrm{\Omega }_R`$ is a group element $`\mathrm{\Omega }`$ in representation $`R`$ and $`I,J`$ are the indices of the representation. Such a function may be decomposed as
$$\mathrm{\Psi }_R^I(M)=\underset{J}{\sum }\mathrm{\Omega }_R^{IJ}\psi _R^J(x).$$
Here $`\psi _R^I(x_1,\mathrm{},x_N)`$ is a vector in the representation $`R`$.
Near the unity element on the group space $`\mathrm{\Omega }\simeq I+\omega `$ we have $`\mathrm{\Omega }_R\simeq P_R+\underset{ij}{\sum }\omega _{ij}T_{ij}^R`$ where $`P_R`$ is a projector (unity element) in the $`R`$ space, $`\omega `$ is a small deviation from it and $`T_{ij}^R`$ are the $`u(N)`$ algebra generators. This gives:
$$\frac{\partial }{\partial M_{kj}}=\delta _{kj}\frac{\partial }{\partial x_k}+\underset{m(\ne k)}{\sum }\frac{1}{x_k-x_m}\frac{\partial }{\partial \omega _{mj}}$$
and we finally obtain from (3.1) the following variational principle:
$$\underset{\psi _R}{\mathrm{min}}\int \underset{k}{\prod }dx_k\mathrm{\Delta }^2(x)\mathrm{Tr}_R\left(\frac{1}{2}\underset{j}{\sum }|\frac{\partial }{\partial x_j}\psi _R(x)|^2+\frac{1}{2}\underset{i\ne j}{\sum }\frac{|T_{ij}^R\psi _R|^2}{(x_i-x_j)^2}+\underset{m}{\sum }V(x_m)|\psi _R|^2\right)$$
where all the quantities and operators with the subscript $`R`$ are subjected to the corresponding matrix operations in the matrix space of representation.
The Schroedinger equation now reads:
$$-\underset{k}{\sum }\mathrm{\Delta }^{-2}(x)\frac{\partial }{\partial x_k}\mathrm{\Delta }^2(x)\frac{\partial }{\partial x_k}\psi _R(x)+\underset{i\ne j}{\sum }\frac{T_{ij}^RT_{ji}^R}{(x_i-x_j)^2}\psi _R(x)=\left(E-\underset{k}{\sum }V(x_k)\right)\psi _R(x)$$
It is useful to introduce a new function $`\varphi _R(x)=\mathrm{\Delta }(x)\psi _R(x)`$ obeying the equation
$$-\underset{k}{\sum }\left(\frac{\partial }{\partial x_k}\right)^2\varphi _R(x)+\underset{i\ne j}{\sum }\frac{T_{ij}^RT_{ji}^R}{(x_i-x_j)^2}\varphi _R(x)=\left(E-\underset{i}{\sum }V(x_i)\right)\varphi _R(x)$$
Note that any translation $`\omega _{ij}\to \omega _{ij}+\delta _{ij}ϵ`$ does not change the wave function $`\mathrm{\Psi }_R`$. That means that we are looking only for the states on which the condition is imposed
$$T_{kk}^R\psi _R=0,\;k=1,\dots ,N$$
At first sight, we fulfilled our main task for the matrix quantum mechanics: we reduced it to an eigenvalue problem and are now dealing with only $`N`$ variables. But the Schroedinger equation (3.1) contains the Hamiltonian which is a matrix in the representation space acting on the wave function which is a vector in this space. For small representations whose Young tableaux contain $`\ll N^2`$ boxes the problem is still solvable in the large N limit (as we will demonstrate below). For a very interesting case of big representations ($`\sim N^2`$ boxes in the Young tableaux) the problem remains a serious challenge.
In the simplest case of singlet representation (solved long ago in ) the wave function is a scalar and the last term in the l.h.s. of the Schroedinger equation (3.1) drops out. The problem appears to be equivalent to the quantum mechanical system of $`N`$ non-interacting fermions (due to the antisymmetry of $`\varphi (x)`$) in a potential $`V(x)`$. It was used in many applications, including the solution of the non-critical string theory in 1+1 dimensions .
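In the free-fermion description the singlet ground-state energy is just the sum of the $`N`$ lowest single-particle levels of $`-d^2/dx^2+V(x)`$. The sketch below computes it by a finite-difference discretization for the stable harmonic potential $`V(x)=x^2`$ (an illustrative choice — the inverted oscillator discussed below would additionally require the cut-off wall).

```python
import numpy as np

# Singlet sector as N free fermions: ground-state energy = sum of the N
# lowest eigenvalues of -d^2/dx^2 + V(x), discretized on a grid. For
# V(x) = x^2 the exact levels are 2k+1, so 10 fermions give 100.

def fermion_ground_energy(V, n_fermions, L=10.0, n_grid=1000):
    x = np.linspace(-L, L, n_grid)
    dx = x[1] - x[0]
    H = (np.diag(2.0 / dx**2 + V(x))
         + np.diag(-np.ones(n_grid - 1) / dx**2, 1)
         + np.diag(-np.ones(n_grid - 1) / dx**2, -1))
    return np.sort(np.linalg.eigvalsh(H))[:n_fermions].sum()

print(fermion_ground_energy(lambda t: t**2, n_fermions=10))  # ~ 100
```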
The next smallest representation is the adjoint. The adjoint wave function satisfying the relation (3.1) should be a function of the type
$$\mathrm{\Psi }(M;x)=\sum _{a=0}^{N-1}C_a(x)M^a$$
where the coefficients $`C_a`$ possibly depend on the invariants (eigenvalues). If we denote $`\varphi _{adj}(x_i;x)\equiv \varphi _i(x)`$ (depending of course on all $`N`$ $`x_i`$) we can write the Schroedinger equation for the adjoint wave function in the form:
$$\sum _i\left(-\left(\frac{\partial }{\partial x_i}\right)^2+V(x_i)\right)\varphi _k(x)-\frac{1}{N^2}\sum _{i(\ne k)}\frac{\varphi _i(x)-\varphi _k(x)}{(x_i-x_k)^2}=E\varphi _k(x)$$
One can see that the last term on the l.h.s. of this equation is a factor $`N^2`$ smaller than the other terms and can be regarded as a small perturbation on the background of the free-fermion solution of the singlet sector.
For one of the physically most interesting applications, the 1+1 dimensional string theory, we need to solve the model in the inverted oscillator potential $`V(M)=-M^2`$. The model is unstable and one needs to specify the boundary conditions for big $`M`$’s. Usually one imposes the boundary condition that the absolute value of any eigenvalue of $`M`$ cannot exceed some maximum value $`\mathrm{\Lambda }`$ (a cut-off wall). In the large $`N`$ limit one takes $`\mathrm{\Lambda }\sim N`$ and it happens that the spectral density of the model depends in a very universal (logarithmic) way on $`\mathrm{\Lambda }`$. In the singlet sector the spectrum is that of $`N`$ independent fermions (eigenvalues) in the same potential and the eigenfunctions are the Slater determinants of the parabolic cylinder functions (see the review in ). In the non-singlet sectors the eigenvalues start interacting and obey a more complicated statistics corresponding to the symmetry of the Young tableau of the representation (see the review for the details). Although the problem is clearly integrable, the spectrum of the non-singlet sectors of the inverted matrix quantum oscillator is still unknown (for the large N estimates of the mass gap of the adjoint representation see , and ).
It was conjectured in and shown in that the adjoint representation describes the vortex-antivortex sector in the 1+1 dimensional string theory with one compact dimension. Higher representations describe higher numbers of vortex-antivortex pairs (corresponding to the number of boxes in the Young tableau of the representation).
4. Character expansion and new solvable (multi) matrix models
The group character expansion demonstrated its power in lattice gauge theory a long time ago, starting from the work of A. Migdal .
The character expansion method proposed in the papers - and inspired by the result of paper is the most general approach for reducing the number of degrees of freedom from $`N^2`$ to $`N`$ in a large new class of (multi) matrix integrals. The matrix integral considered in these papers looks as follows:
$$Z=\int d^{N^2}M\mathrm{exp}[-\mathrm{Tr}M^2+\mathrm{Tr}V(AM)]$$
where $`V(y)=\sum _{k>2}t_ky^k`$ is an arbitrary potential and A is an arbitrary hermitian matrix (which can be taken diagonal without loss of generality). We again diagonalize the matrix $`M`$ as
$$M=\mathrm{\Omega }^+X\mathrm{\Omega }$$
The integral over the $`U(N)`$ variable $`\mathrm{\Omega }`$ looks difficult to do directly since the Itzykson-Zuber formula (2.1) seems to be of little use here. Instead, let us expand $`\mathrm{exp}[\mathrm{Tr}V(AM)]`$ as an invariant function of the variable $`AM`$ in terms of the characters $`\chi _R(AM)`$ of irreducible representations $`R`$ of the $`GL(N)`$ group:
$$\mathrm{exp}[\mathrm{Tr}V(AM)]=\sum _Rf_R\chi _R(AM)$$
where the coefficients $`f_R`$ are functions of the $`N`$ highest weight components of a representation
$$R=\{0\le m_N\le m_{N-1}\le \mathrm{\dots }\le m_1<\mathrm{\infty }\}.$$
The sum $`\sum _R`$ is nothing but a sum over $`N`$ ordered integers. The coefficients $`f_R`$ can be calculated, thanks to the orthogonality of characters, as the following unitary integrals:
$$f_R=\int [d\mathrm{\Omega }]_{U(N)}\mathrm{exp}[\mathrm{Tr}V(\mathrm{\Omega })]\chi _R(\mathrm{\Omega }^+)$$
Note that this integral can be represented as an explicit integral over the Cartan subgroup $`\mathrm{\Omega }=\{e^{i\theta _1},\mathrm{\dots },e^{i\theta _N}\}`$ only, and thus contains only $`N`$ integration variables. We have $`\mathrm{Tr}V(\mathrm{\Omega })=\sum _kV(e^{i\theta _k})`$ and $`[d\mathrm{\Omega }]_{U(N)}\sim \prod _kd\theta _k\prod _{i>j}\mathrm{sin}^2\frac{\theta _i-\theta _j}{2}`$. Now if we plug (4.1) into (4.1) we realize that the decomposition (4.1) is actually useful and we can integrate over $`\mathrm{\Omega }`$ using the following orthogonality relation between matrix elements of representation $`R`$:
$$\int [d\mathrm{\Omega }]_{U(N)}\chi _R(A\mathrm{\Omega }^+X\mathrm{\Omega })=\frac{1}{\mathrm{𝚍𝚒𝚖}_R}\chi _R(A)\chi _R(X)$$
where $`\mathrm{𝚍𝚒𝚖}_R`$ is the dimension of a representation R. We see that we achieved our main goal: due to the formulas (4.1), (4.1) and (4.1) we reduced the original matrix integral (4.1) to an integral over only $`N`$ eigenvalues $`x_1,\mathrm{\dots },x_N`$ of the matrix $`M`$ and the sum over $`N`$ highest weight components $`m_1,\mathrm{\dots },m_N`$. In the large N limit, if we scale appropriately the constants in the potential $`V(M)`$, the sums over $`m`$’s can be replaced by integrals and we can again apply the saddle point approximation in all $`2N`$ integration variables. To get explicitly the right large $`N`$ scaling of the couplings one usually changes $`e^V\to e^{NV}`$. Then the effective action at the saddle point is always of order $`N^2`$ and the new couplings of the potential $`V`$ can be kept finite in this limit.
As was shown in (see also ), the integral over $`x_1,\mathrm{\dots },x_N`$ can be calculated exactly and the remaining sum over strictly ordered nonnegative integers $`h_i=m_i+N-i`$ (shifted highest weights) reads:
$$Z=\sum _{h_1<h_2<\mathrm{\dots }<h_N}\frac{\prod (h^e-1)!!\prod h^o!!}{\prod (h^e-h^o)}\chi _R(A)\chi _R(t)$$
where $`\{h^e\}`$ and $`\{h^o\}`$ are the collections of even and odd integers $`h_k`$ (their numbers are equal). Only the representations with equal numbers of even and odd $`h`$’s contribute to (4.1). The products in the numerator go over all even and all odd $`h`$’s and the product in the denominator goes over all pairs $`h^e,h^o`$. $`\chi _R(t)`$ is a character of the coupling constants $`t_k`$ written in the Schur form:
$$\chi _R=det_{ij}P_{h_i-j}(t)$$
and the Schur polynomials $`P_k(t)`$ are defined as usual: $`\sum _nP_n(t)z^n=e^{\sum _kt_kz^k}`$.
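For concreteness, the Schur polynomials can be generated directly from this definition. The following sketch is our own illustration (variable names are ours); it uses the recursion $`nP_n=\sum _kkt_kP_{n-k}`$, obtained by differentiating the generating function with respect to $`z`$.

```python
# Illustrative sketch: Schur polynomials P_n(t) from the generating function
#   sum_n P_n(t) z^n = exp(sum_k t_k z^k),
# via the recursion n*P_n = sum_k k*t_k*P_{n-k} (differentiate in z).
def schur_polynomials(t, nmax):
    """t[k-1] holds the coupling t_k; returns the list P_0 .. P_nmax."""
    P = [1.0] + [0.0] * nmax
    for n in range(1, nmax + 1):
        P[n] = sum(k * t[k - 1] * P[n - k]
                   for k in range(1, min(n, len(t)) + 1)) / n
    return P

# e.g. t_1 = 1, t_2 = 1/2 gives [1, 1, 1, 2/3, 5/12]
print(schur_polynomials([1.0, 0.5], 4))
```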
So in the large $`N`$ limit we have to do the saddle point calculation only with respect to $`N`$ summation variables $`h_1,\mathrm{},h_N`$.
The details of these formulas can be found in , -. One can also find in these papers the geometrical interpretation of the integral (4.1) in terms of the so-called dually weighted planar graphs. It gives the generating function of planar graphs where both vertices and faces are weighted by generating parameters depending on their orders. In - one can find the solutions of some combinatorial problems related to the enumeration of planar graphs which became possible only due to the power of the character expansion method. The particular solutions of the saddle point equations can be very tricky, but this is already a “classical” problem of solving various integral equations rather than a “quantum” problem of functional integration over infinite matrices. In that sense the model is solvable.
It is obvious that there exist many ways to generalize the model (4.1) to other matrix integrals. An immediate generalization is to replace the $`\mathrm{Tr}M^2`$ term in (4.1) by an arbitrary function $`W(M)`$. In that case we cannot calculate the coefficients $`f_R`$ explicitly (except when $`W`$ is a monomial: $`W(M)=M^k`$), but we still get an explicit integral over $`3N`$ variables $`x_i`$, $`\theta _i`$ and $`m_i`$. So the model is again solvable.
Another solvable matrix model of this kind involving general complex matrices was proposed and investigated in , . Its free energy gives a generating functional counting branched coverings of two dimensional surfaces.
The most general solvable two matrix model reads
$$Z=\int d^{N^2}A\int d^{N^2}B\mathrm{exp}N\mathrm{Tr}\left(U(AB)+V(A)+W(B)\right)$$
where $`U`$,$`V`$ and $`W`$ are arbitrary functions. The way to reduce it to $`N`$ degrees of freedom is again to expand in characters
$$\mathrm{exp}[\mathrm{Tr}U(AB)]=\sum _Ru_R\chi _R(AB)$$
diagonalize the matrices $`A`$ and $`B`$ and integrate over the $`U(N)`$ variable between them by means of (4.1). In the particular case
$$Z=\int d^{N^2}A\int d^{N^2}B\mathrm{exp}N\mathrm{Tr}\left(-\frac{1}{2}(A^2+B^2)+\frac{\alpha }{4}(A^4+B^4)+\frac{\beta }{2}(AB)^2\right)$$
the model describes a special trajectory of the 8-vertex model on random graphs. It was completely solved in . Again it was possible, using character orthogonality relations, to integrate over the relative angle between $`A`$ and $`B`$; this leads to separation into one-matrix integrals:
$$\mathrm{Z}(\alpha ,\beta )\propto \sum _{\{h\}}(N\beta /2)^{\mathrm{\#}h/2}c_{\{h\}}[S_{\{h\}}(\alpha )]^2$$
where $`c_{\{h\}}`$ is a coefficient:
$$c_{\{h\}}=\frac{1}{\prod _i(h_i/2)!\prod _{i,j}(h_i^{\mathrm{even}}-h_j^{\mathrm{odd}})}$$
and $`S_{\{h\}}(\alpha )`$ is a one-matrix integral
$$S_{\{h\}}(\alpha )=\int d^{N^2}M\chi _{\{h\}}(M)\mathrm{exp}N\left[-\frac{1}{2}\mathrm{tr}M^2+\frac{\alpha }{4}\mathrm{tr}M^4\right]$$
which appears squared in (4.1) because the contributions from the two matrices $`A`$ and $`B`$ are identical.
Now we can reduce the calculation of the one-matrix integral $`S_{\{h\}}`$ to eigenvalue integrations:
$$S_{\{h\}}(\alpha )=\int \prod _kd\lambda _k\mathrm{\Delta }(\lambda )det\left(\lambda _k^{h_j}\right)\mathrm{exp}N\left[-\frac{1}{2}\sum _k\lambda _k^2+\frac{\alpha }{4}\sum _k\lambda _k^4\right]$$
where $`\mathrm{\Delta }(\lambda )=det\left(\lambda _k^{N-j}\right)=\prod _{j<k}(\lambda _j-\lambda _k)`$.
Now we are left with only $`N`$ degrees of freedom and an action of order $`N^2`$, so the integration reduces to a saddle point calculation with respect to the eigenvalues $`\lambda _k`$ (see for the details).
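The Vandermonde factor $`\mathrm{\Delta }(\lambda )`$ in this measure is what makes the eigenvalues behave like fermions, repelling each other at short distances. A quick numerical illustration (our own sketch; it samples a 2x2 Gaussian Hermitian ensemble purely for speed) shows the quadratic suppression of small spacings implied by $`\mathrm{\Delta }^2`$:

```python
import numpy as np

# Illustrative sketch: Delta(lambda)^2 in the eigenvalue measure implies
# level repulsion. Assumption: 2x2 Gaussian Hermitian matrices; any N
# shows the same effect near coinciding eigenvalues.
rng = np.random.default_rng(0)
spacings = []
for _ in range(50_000):
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    H = (A + A.conj().T) / 2.0          # Hermitian matrix
    lam = np.linalg.eigvalsh(H)
    spacings.append(lam[1] - lam[0])
s = np.array(spacings) / np.mean(spacings)
hist, _ = np.histogram(s, bins=[0.0, 0.1, 0.2, 0.4], density=True)
print(hist)   # density near s = 0 is suppressed roughly like s^2
```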
We can immediately propose a solvable generalization of the general two matrix model (4.1) to a multi-matrix chain:
$$Z=\int \prod _{q=1}^Qd^{N^2}M_q\mathrm{exp}\mathrm{Tr}\left(\sum _{q=1}^QV_q(M_q)+\sum _{p=1}^{Q-1}W(M_pM_{p+1})\right)$$
where $`V`$ and $`W`$ are arbitrary functions.
Another interesting model solvable by the character expansion method can be written in the following general form:
$$Z=\int \prod _{q=1}^Qd^{N^2}M_q\mathrm{exp}\left(\sum _{q=1}^Q\mathrm{Tr}V_q(M_q)+\mathrm{Tr}V(\prod _{p=1}^QM_p)\right)$$
It can be solved by character expansion with respect to the last factor by means of the formulas (4.1),(4.1) and the multiple application of the formula (4.1) by induction.
The result, in terms of the sum over $`N`$ highest weight components of the representations $`R`$, reads:
$$Z=\sum _R\frac{f_R}{[\mathrm{𝚍𝚒𝚖}_R]^{Q-1}}\prod _{q=1}^QS_{\{h\}}^{(q)}$$
where
$$S_{\{h\}}^{(q)}=\int d^{N^2}M\chi _{\{h\}}(M)\mathrm{exp}\mathrm{Tr}V_q(M)$$
The last integral can be immediately reduced to integrations of the type (4.1) over the eigenvalues of the matrix $`M`$.
5. Comments and unsolved problems
A few comments are in order:
1. The non-singlet sectors in the matrix quantum mechanics (3.1) can be effectively studied for the oscillator potential $`V(M)=M^2`$. In this case the Hamiltonian is a collection of $`N^2`$ independent oscillators represented by the matrix elements of $`M`$. The spectrum of the Hamiltonian of this model in a given irreducible representation of $`U(N)`$ is encoded in the partition functions $`Z_R(q)`$ for finite inverse temperature $`\beta `$ (where $`q=e^{-\beta }`$) in a given representation $`R`$. An effective way to study $`Z_R(q)`$ can be found in the papers or .
2. The character expansion is nothing but the Fourier expansion on a group manifold. As trivial as it looks to us now, the Fourier transform has always been a powerful method of solving problems by using their symmetries. Many of the matrix models presented in the previous section and solved by this method seemed hopeless just a few years ago.
3. One of the interesting and not yet well-studied questions is how to classify all the matrix integrals which can be reduced from $`N^2`$ to $`N`$ integrals or sums by use of the character expansion.
4. In many physically interesting cases we do not need the general form of the potentials $`V(M)`$ or $`W(M)`$ mentioned throughout this paper. For example, as mentioned above, to study the universal behavior of the large $`N`$ matrix quantum mechanics near the instability point we need to know only the solution in the vicinity of a quadratic top of the potential, $`V(M)\simeq -\mathrm{Tr}(M-M_0)^2`$. The rest of the potential $`V(M)`$ has little influence on the behavior of the eigenvalues and serves only as a $`U(N)`$ invariant cutoff wall. This greatly simplifies the problem. For instance, all the applications in string theory, two-dimensional quantum gravity and most of the statistical-mechanical applications need only the analysis of the vicinity of such critical points. The lesson to draw from this is that in some of the physically most interesting regimes the seemingly hopeless matrix integrals become not so hopeless and look “almost Gaussian”. Maybe a general method for the investigation of these instability points can be worked out.
5. Another question is related to the integrability properties of the sums and integrals after such a reduction. The partition functions of some of them (such as the old one matrix and two matrix models) are known to be $`\tau `$-functions of integrable hierarchies of classical differential or difference equations, like the Toda hierarchy (4.1) or the KP hierarchy (see for a good introduction). But many others, like the model (4.1), cannot be represented by free fermions. On the other hand, the Itzykson-DiFrancesco formula (4.1) suggests that there might exist some interacting fermion representation of the partition function of the model of dually weighted graphs (4.1).
The method of character expansion, as well as all the other methods of calculation of large $`N`$ matrix integrals presented here, represents just another refinement and generalization of the usual method of reduction to the matrix eigenvalues invented long ago by Dyson. Its range of applicability is quite limited, although it includes quite a few important matrix integrals known from physics and mathematics. Many more interesting matrix integrals do not look hopeless for investigation in the large $`N`$ limit. The search for new tricks of integration over matrices is a fascinating and potentially extremely rewarding research direction.
6. Acknowledgments
I am grateful to the organizers of the MSRI Workshop “Matrix models and Painlevé equations”, P. Bleher and A. Its, for the kind hospitality and fruitful discussions during the Workshop.
I would also like to thank I. K. Kostov for useful comments and M. Fukuma for the illuminating discussions concerning the matrix quantum mechanics in the adjoint representation.
References
E. Brézin, V. A. Kazakov, Exactly solvable field theories of closed strings, Phys. Lett. B236 (1990) 144; M. R. Douglas, S. Shenker, Strings in less than one dimension, Nucl. Phys. B335 (1990) 635; D. Gross, A. Migdal, Non-perturbative two-dimensional quantum gravity, Phys. Rev. Lett. 64 (1990) 127.
E. Brezin, C. Itzykson, G. Parisi and J.-B. Zuber, Planar approximation, Comm. Math. Phys. 59 (1978) 35.
C. Itzykson and J.-B. Zuber, Planar approximation II, J. Math. Phys. 21 (1980) 411.
Harish-Chandra, Amer. J. Math. 79 (1957) 87.
M. L. Mehta, A method of integration over matrix variables, Comm. Math. Phys. 79 (1981) 327.
V. A. Kazakov, D-dimensional induced gauge theory as a solvable matrix model, Proc. Intern. Symp. “Lattice ’92” on lattice gauge theory, Amsterdam, Sept. 1992, Nucl. Phys. B (Proc. Suppl.) 30 (1993) 149.
V. A. Kazakov, Ising model on dynamical planar random lattice: exact solution, Phys. Lett. 119A (1986) 140; D. V. Boulatov and V. A. Kazakov, The Ising model on a random planar lattice: the structure of phase transition and the exact critical exponents, Phys. Lett. B186 (1987) 379.
S. Chadha, G. Mahoux and M. L. Mehta, A method of integration over matrix variables 2, J. Phys. A: Math. Gen. 14 (1981) 579.
M. R. Douglas, Strings in less than one dimension and generalized KdV hierarchies, Phys. Lett. 238B (1990) 176.
I. K. Kostov, Gauge invariant matrix model for the ADE closed strings, hep-th/9208053, Phys. Lett. B297 (1992) 74-81.
S. Kharchev, A. Marshakov, A. Mironov, A. Morozov, S. Pakuliak, Conformal matrix models as an alternative to conventional multi-matrix models, hep-th/9208044, Nucl. Phys. B404 (1993) 717.
V. A. Kazakov, Exactly solvable Potts models, bond and tree-like percolation on dynamical (random) planar lattice, Nucl. Phys. B 4 (Proc. Suppl.) (1988) 93.
V. A. Kazakov and I. K. Kostov, published in the review of I. K. Kostov, Random surfaces, solvable matrix models and discrete quantum gravity in two dimensions, lecture given at the GIFT Int. Seminar on Nonperturbative Aspects of the Standard Model, Jaca, Spain, Jun 6-11, 1988; GIFT Seminar 0295 (1988) 322.
J.-M. Daul, Q-states Potts model on a random planar lattice, hep-th/9502014.
B. Eynard, G. Bonnet, The Potts-q random matrix model: loop equations, critical exponents, and rational case, Phys. Lett. B463 (1999) 273-279.
V. Kazakov and A. Migdal, Recent progress in the theory of non-critical strings, Nucl. Phys. B311 (1988) 171.
G. Marchesini and E. Onofri, Planar limit for $`SU(N)`$ symmetric quantum dynamical systems, J. Math. Phys. 21 (1980) 1103.
V. A. Kazakov, Bosonic strings and string field theories in one-dimensional target space, in “Random surfaces and quantum gravity”, Cargèse 1990, O. Alvarez, E. Marinari, P. Windey eds. (1991).
A. P. Polychronakos, Generalized statistics in one dimension, hep-th/9902157, Les Houches 1998 Lectures.
D. Gross and I. Klebanov, Vortices and the non-singlet sector of the c=1 matrix model, Nucl. Phys. B354 (1990) 459.
D. Boulatov and V. Kazakov, One dimensional string theory with vortices as an upside down matrix oscillator, Int. J. Mod. Phys. A8 (1993) 809.
A. A. Migdal, Recursion equations in lattice gauge theories, Sov. Phys. JETP 42 (1975) 413; Zh. Eksp. Teor. Fiz. 69 (1975) 810-822.
V. A. Kazakov, M. Staudacher and T. Wynter, Exact solution of discrete two-dimensional $`R^2`$ gravity, hep-th/9601069, Nucl. Phys. B471 (1996) 309-333.
V. A. Kazakov, M. Staudacher, T. Wynter, Almost flat planar graphs, hep-th/9506174, Comm. Math. Phys. 179 (1996) 235-256.
Ph. DiFrancesco and C. Itzykson, Fat graphs, Ann. Inst. Henri Poincaré 59(2) (1993) 117.
I. K. Kostov and M. Staudacher, Two-dimensional chiral matrix models and string theories, hep-th/9611011, Phys. Lett. B394 (1997) 75-81.
I. K. Kostov, M. Staudacher, T. Wynter, Complex matrix models and statistics of branched coverings of 2D surfaces, hep-th/9703189, Comm. Math. Phys. 191 (1998) 283-298.
V. A. Kazakov, P. Zinn-Justin, Two-matrix model with ABAB interaction, hep-th/9808043, Nucl. Phys. B546 (1999) 647-668.
J. Hoppe, V. A. Kazakov, I. K. Kostov, Dimensionally reduced $`SYM_4`$ as solvable matrix quantum mechanics, hep-th/9907058, Nucl. Phys. B (to be published).
V. A. Kazakov, I. K. Kostov, N. Nekrasov, D-particles, matrix integrals and KP hierarchy, hep-th/9810035, Nucl. Phys. B557 (1999) 413-442.
I. K. Kostov, Bilinear functional equations in 2D quantum gravity, hep-th/9602117, talk delivered at the Workshop “New Trends in Quantum Field Theory”, 28 August - 1 September 1995, Razlog, Bulgaria.
no-problem/0003/astro-ph0003101.html
ar5iv
text
# Combination Frequencies in the Fourier Spectra of White Dwarfs
## 1 Introduction
Long sequences of almost uninterrupted light curves, obtained during the Whole Earth Telescope (WET) campaign on the helium variable white dwarf GD358 (Winget et al. 1994, for a more recent analysis, see Vuille et al. 2000), disclosed not only the presence of a large number of stellar pulsational modes in the Fourier spectra, but also the presence of ‘combination frequencies’, signals that lie at the sum or difference frequencies of the stellar eigen-modes (see Fig. 7 in the first paper).
Combination frequencies have been observed in other pulsating white dwarfs with either hydrogen or helium atmospheres, e.g., ZZ Psc (aka G29-38, McGraw 1976; Kleinman et al. 1998), GD154, BPM31594 (McGraw 1976; O’Donoghue, Warner & Cropper 1992), G117-B15A and GD165. Indeed, every variable hydrogen white dwarf (class name ZZ Ceti, or DAV) that has been observed with sufficiently high signal-to-noise ratio exhibits combination frequencies (Brassard, Fontaine & Wesemael 1995). The same likely holds for helium variables (DBV).
Combination frequencies are thought to result from nonlinear mixing of sinusoidal signals that are associated with the eigenmodes (named the ‘principal modes’ in this article). This conclusion is based on the following arguments: combination frequencies are too numerous to be eigenmodes themselves (Winget et al. 1994); amplitudes of the combination frequencies have been shown to correlate with those of their principal modes (for an early review, see McGraw 1978); combination frequencies tend to have more complicated fine structure than their principal modes, which can be explained naturally by a linear superposition of the principal modes’ rotationally split multiplets .
Brickhill showed that nonlinear mixing arises naturally in the context of his theory of convective driving (Brickhill 1983, 1990, 1991a, 1991b). He realized that the convective turn-over time scale in DA and DB variable white dwarfs is much shorter than the pulsation period. Thus one can safely assume that the surface convective region adjusts instantaneously during pulsation. Brickhill found that under this assumption, the photospheric flux variation is delayed and reduced relative to that entering the bottom of the convection zone, by an amount depending on the depth of the convection zone. Instantaneous adjustment of the convection zone also implies that the extent of the convection zone varies during the pulsation cycle, thus leading to variations in the reduction and delay of the flux variation. This distorts the shape of the light curve at the photosphere, and brings about the combination frequencies in the Fourier power spectrum. Using a numerical analysis, Brickhill found that for reasonable amplitudes of the principal modes, he could reproduce the observed amplitudes of the combination frequencies. Note that in this theory, the combination frequencies reflect distortion of the light curve by the nonlinear medium; they are not associated with physical displacements and velocities.<sup>1</sup><sup>1</sup>1Fast convection enforces uniform movement throughout the convective region . There is no distortion to the velocity signal. This is indeed confirmed observationally (van Kerkwijk, Clemens & Wu 1999).
In this paper, we use a perturbative analysis to derive analytical formulae for the strength and phase of the combination frequencies. The advantage of this analysis over Brickhill’s numerical approach is that the dependence on stellar properties becomes explicit. We find that two parameters, namely, the depth of the surface convection zone when the star is at rest, and the sensitivity of this depth towards changes in stellar effective temperature, determine the efficiency of the mixing process. We also show that two geometric factors, the spherical degrees of the principal modes, and the inclination angle of the stellar pulsation axis, enter the analytical expressions. We compare our formulae with data on GD358 (a DBV) and G29-38 (a DAV), adopting appropriate values for the above stellar parameters. Despite imperfect agreement, we show that it is possible to infer the spherical degree for the principal modes. In §4.2, we briefly discuss the prospects of explaining the combination frequencies in other types of variable stars.
## 2 Perturbation Analysis
### 2.1 Origin
In this section, we demonstrate that the surface convection zone can nonlinearly mix sinusoidal flux variations to produce combination frequencies.
We adopt the following three simplifying assumptions. Firstly, we assume neighbouring angular directions do not affect each other. Observed pulsations in white dwarfs are associated with eigenmodes of low spherical degree ($`\mathrm{}=1`$ or $`2`$). Horizontal variations in these modes occur over a scale of order the stellar radius. One can safely ignore the effect of horizontal heat transfer. This assumption allows us to study one angular direction and later generalize the results to other directions without modification.
Secondly, we restrict our analysis to the surface convective region and assume that the radiative interior causes little nonlinearity in the pulsation signals. In the deep radiative interior where the pulsation is adiabatic, the relative importance of the nonlinearity is measured by the local pressure perturbation, $`(\delta p/p)`$, which is smaller than a few percent for all observed modes. In the radiative region immediately below the convection zone, the situation is less clear. However, it seems reasonable to assume that the nonlinearity in this region is weak compared to that arising from the convective region as the reaction time of the former region (local thermal time) is much longer than that in the latter (see later). Further study is necessary to confirm or to refute this assumption. This assumption allows us to take the flux perturbation incident upon the bottom of the convection zone, $`(\delta F/F)_b`$, to be sinusoidal. This also allows us to ignore entropy variations in the radiative region (unless the material becomes convective).
Thirdly, we adopt equilibrium models that are adjacent in effective temperature to quantify the time-dependent nature of the convective region. The three models in Figure 1 resemble a radial column of gas at flux maximum, passing through the rest point, and at flux minimum, respectively. This simplification is possible because the thin convection zone in a pulsating white dwarf has eddy turn-over times much shorter than the pulsation period and can react instantaneously to pulsation. The entropy of the convection zone varies in phase with the surface flux perturbation (this is an important feature for driving the gravity-modes, see Brickhill 1983, 1990; Goldreich & Wu 1999, hereafter Paper I), the size of the convection zone adjusts accordingly. In this analysis, we consider changes in the depth of the convection zone caused by entropy variations only. Other effects, e.g., convective overshoot and shear turbulence, are not taken into account.
Under these assumptions, we derive the flux perturbation emerging at the photosphere, $`(\delta F/F)_{\mathrm{ph}}`$, for given $`(\delta F/F)_b`$. These two flux perturbations are related through energy conservation,
$$\left(\frac{\delta F}{F}\right)_{\mathrm{ph}}=\left(\frac{\delta F}{F}\right)_b-\frac{1}{F}\frac{d\mathrm{\Delta }Q}{dt},$$
(1)
where $`\mathrm{\Delta }Q`$ is the amount of excess heat (compared to that at equilibrium) stored in the convective region, and $`F`$ is the stellar flux at equilibrium.
Denoting the depth of the convection zone at time $`t`$ as $`z_b(t)`$, we find
$`\mathrm{\Delta }Q`$ $`=`$ $`{\displaystyle \int _0^{z_b(t)}}𝑑z\rho T{\displaystyle \frac{k_B}{m_p}}(s-s_0)`$ (2)
$`\approx `$ $`F\tau _{b_0}\mathrm{\Delta }s+{\displaystyle \frac{k_B}{m_p}}{\displaystyle \int _{z_{b_0}}^{z_b(t)}}𝑑z\rho T(s-s_0),`$
where the second term on the right hand side is small compared to the first term. Here, all quantities marked with subscript $`0`$ are equilibrium quantities, and the thermal time constant $`\tau _b`$ at depth $`z_b`$ is defined as (see Paper I),
$$\tau _b\equiv \frac{1}{F}\frac{k_B}{m_p}\int _0^{z_b}𝑑z\rho T.$$
(3)
This time constant is closely related to the conventional thermal relaxation time, the latter being defined as $`\tau _{\mathrm{th}}\equiv (1/F)(k_B/m_p)\int _0^{z_b}𝑑z\rho Tc_p`$; in an isentropic, ionised hydrogen plasma, $`\tau _b\approx (2/5)\tau _{\mathrm{th}}`$.
The first term on the right-hand side of equation (2) quantifies the heat absorbed by the convective region above $`z_{b_0}`$ when its entropy rises uniformly by $`\mathrm{\Delta }s`$. This entropy variation is constant throughout most of the region because convection carries most of the stellar flux and because the reaction of the convection towards pulsation is fast (Paper I, also see Fig. 1). The second term in equation (2) represents the heat required in expanding or evaporating the convection zone. This term is essential for introducing nonlinearity into the flux variation.
Discarding terms higher than second order in $`\mathrm{\Delta }s`$, we convert equation (1) into,
$`\left({\displaystyle \frac{\delta F}{F}}\right)_{\mathrm{ph}}`$ $`\approx `$ $`\left({\displaystyle \frac{\delta F}{F}}\right)_b-\tau _{b_0}{\displaystyle \frac{d\mathrm{\Delta }s}{dt}}-(\tau _b(t)-\tau _{b_0}){\displaystyle \frac{d\mathrm{\Delta }s}{dt}}`$ (4)
$`-{\displaystyle \frac{dz_b(t)}{dt}}\left[\rho T{\displaystyle \frac{k_B}{m_p}}(s-s_0)\right]_{z_b(t)}`$
$`\approx `$ $`\left({\displaystyle \frac{\delta F}{F}}\right)_\mathrm{b}-\tau _b(t){\displaystyle \frac{d\mathrm{\Delta }s}{dt}}.`$
We use $`s-s_0=0`$ at $`z_b(t)`$ as the entropy perturbation in the region $`z>z_b(t)`$ is assumed to be zero. Equation (4) is similar to equations (40) and (42) in Paper I, except that in the present case $`\tau _b`$ is time-dependent.
The entropy variation, $`\mathrm{\Delta }s`$, is intimately related to the photospheric flux perturbation (eqs. - in Paper I) as,
$$\mathrm{\Delta }s=(B+C)\left(\frac{\delta F}{F}\right)_{\mathrm{ph}},$$
(5)
where the dimensionless numbers $`B`$ and $`C`$ quantify respectively the response of the photosphere and the superadiabatic region towards gravity-mode pulsation (eqs. & in Paper I). In ZZ Ceti stars, $`B`$ and $`C`$ are both of order $`8`$. Again, we take $`(B+C)`$ to be $`(B+C)(t)`$.
We define the following dimensionless derivatives,
$`\beta `$ $`\equiv `$ $`{\displaystyle \frac{\partial \mathrm{ln}(B+C)}{\partial \mathrm{ln}F}}={\displaystyle \frac{1}{4}}{\displaystyle \frac{\partial \mathrm{ln}(B+C)}{\partial \mathrm{ln}T_{\mathrm{eff}}}},`$
$`\gamma `$ $`\equiv `$ $`{\displaystyle \frac{\partial \mathrm{ln}\tau _b}{\partial \mathrm{ln}F}}={\displaystyle \frac{1}{4}}{\displaystyle \frac{\partial \mathrm{ln}\tau _b}{\partial \mathrm{ln}T_{\mathrm{eff}}}},`$ (6)
and combine equations (4), (5) and (6) to produce
$$\left(\frac{\delta F}{F}\right)_\mathrm{b}=X+\tau _{c_0}[1+(2\beta +\gamma )X]\frac{dX}{dt}.$$
(7)
Here, $`\tau _c\equiv (B+C)\tau _b`$, and $`X`$ is shorthand for $`(\delta F/F)_{\mathrm{ph}}`$. In the temperature range of ZZ Ceti variables, our mixing length models yield $`\beta \approx 1.2`$ and $`\gamma \approx -15`$ (obtained using Fig. 1 of Wu & Goldreich ). Note that $`2\beta +\gamma <0`$; the thermal content of the convection zone increases with decreasing effective temperature. The magnitude of the nonlinear term $`|(2\beta +\gamma )X|`$ falls not far below unity when $`X`$ is of order a few percent. Mathematically, equation (7) also describes the motion of a pendulum with a velocity-dependent mass under the influences of a periodic external force and viscosity.
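Equation (7) is an ordinary differential equation that can be integrated directly. The sketch below is our own illustration; the parameter values ($`\tau _{c_0}=300`$ s, $`2\beta +\gamma =-13`$, a 5% driving amplitude at a 500 s period) are assumptions chosen for demonstration, not fits to any star. It drives the convection zone with a single sinusoid and exhibits the harmonic at $`2\omega `$ in the Fourier spectrum of the emergent flux.

```python
import numpy as np

# Illustrative sketch: integrate eq. (7) for one sinusoidal input flux and
# look for harmonics in the emergent light curve. All parameters assumed.
tau, nl = 300.0, -13.0                 # tau_c0 [s], 2*beta + gamma
A, omega = 0.05, 2 * np.pi / 500.0     # driving amplitude and frequency
dt, n = 0.05, 2**19
t = np.arange(n) * dt
X = np.zeros(n)
for i in range(n - 1):
    drive = A * np.cos(omega * t[i])
    # eq. (7) solved for dX/dt; the X-dependent factor is the nonlinearity
    dXdt = (drive - X[i]) / (tau * (1.0 + nl * X[i]))
    X[i + 1] = X[i] + dt * dXdt
tail = X[n // 2:] - X[n // 2:].mean()  # drop transient and DC offset
spec = np.abs(np.fft.rfft(tail))
freqs = np.fft.rfftfreq(tail.size, dt) * 2 * np.pi
for k in (1, 2):                       # fundamental and first harmonic
    j = np.argmin(np.abs(freqs - k * omega))
    print(k, spec[j] / spec.max())
```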
### 2.2 Solutions
We solve for $`(\delta F/F)_{\mathrm{ph}}`$ ($`X`$) using equation (7), first for the case when the input signal is a single mode and then for the case of two modes.
#### 2.2.1 Solution for a Single Mode
At any point on the stellar surface, for a single sinusoidal input of the form
$$\left(\frac{\delta F}{F}\right)_\mathrm{b}=A_i\mathrm{cos}(\omega _it+\mathrm{\Psi }_i),$$
(8)
the solution for $`(\delta F/F)_{\mathrm{ph}}`$ can be expanded into
$$\left(\frac{\delta F}{F}\right)_{\mathrm{ph}}=a_i\mathrm{cos}(\omega _it+\psi _i)+a_{2i}\mathrm{cos}(2\omega _it+\psi _{2i})+\mathrm{\dots },$$
(9)
with the amplitudes and phases obtained from equation (7) as,
$`a_i`$ $`=`$ $`{\displaystyle \frac{A_i}{\sqrt{1+(\omega _i\tau _{c_0})^2}}},`$
$`a_{2i}`$ $`=`$ $`{\displaystyle \frac{a_i^2}{4}}{\displaystyle \frac{|2\beta +\gamma |(2\omega _i\tau _{c_0})}{\sqrt{1+(2\omega _i\tau _{c_0})^2}}},`$ (10)
and
$`\psi _i`$ $`=`$ $`\mathrm{\Psi }_i-\mathrm{arctan}(\omega _i\tau _{c_0}),`$
$`\psi _{2i}`$ $`=`$ $`2\psi _i+\mathrm{arctan}\left({\displaystyle \frac{1}{2\omega _i\tau _{c_0}}}\right).`$ (11)
The expressions for $`a_i`$ and $`\psi _i`$ are identical to equation (66) in Paper I; the surface flux perturbation is reduced in magnitude and delayed in phase relative to that entering the convection zone, as a result of the heat retention (or release) of the convection zone during pulsation. Typically in pulsating white dwarfs, $`a_i`$ ranges from a few tenths of a percent to a few percent. The fact that the harmonic frequency of such a weak mode is actually observable is largely because $`|2\beta +\gamma |\gg 1`$. The amplitude of the second harmonic ($`a_{3i}`$), not considered here, is of order $`|\beta \gamma |a_i^3`$, and should be detectable when $`a_i`$ is large.
The above solution is best understood in terms of light curve distortion. The phase lag and the reduction factor between $`(\delta F/F)_{\mathrm{ph}}`$ and $`(\delta F/F)_b`$ scale with the thickness of the convection zone. Since the convection zone is at its thinnest when $`(\delta F/F)_{\mathrm{ph}}`$ approaches its maximum, one expects the phase lag and the reduction factor to be smaller at the maximum of $`(\delta F/F)_{\mathrm{ph}}`$ than at the minimum. This leads to peaked light curves with sharp ascent and shallow descent. The Fourier transform of such a light curve exhibits harmonics that lead the fundamentals in time, $`\psi _{2i}-2\psi _i>0`$.
#### 2.2.2 Solution for Two Modes
When $`(\delta F/F)_b`$ is comprised of two sinusoidal signals with radian frequencies $`\omega _i`$ and $`\omega _j`$,
$$\left(\frac{\delta F}{F}\right)_\mathrm{b}=A_i\mathrm{cos}(\omega _it+\mathrm{\Psi }_i)+A_j\mathrm{cos}(\omega _jt+\mathrm{\Psi }_j),$$
(12)
the following form of solution is adopted,
$`\left({\displaystyle \frac{\delta F}{F}}\right)_{\mathrm{ph}}`$ $`=`$ $`a_i\mathrm{cos}(\omega _it+\psi _i)+a_{2i}\mathrm{cos}(2\omega _it+\psi _{2i})`$ (13)
$`+a_j\mathrm{cos}(\omega _jt+\psi _j)+a_{2j}\mathrm{cos}(2\omega _jt+\psi _{2j})`$
$`+a_{i-j}\mathrm{cos}((\omega _i-\omega _j)t+\psi _{i-j})`$
$`+a_{i+j}\mathrm{cos}((\omega _i+\omega _j)t+\psi _{i+j})+\mathrm{\dots }.`$
Subscripts for the different coefficients are chosen to represent their corresponding frequencies. We obtain the following generalized expressions for the amplitudes and phases of the combination frequencies (including both harmonics and mixed combinations),
$`a_{i\pm j}`$ $`=`$ $`{\displaystyle \frac{n_{ij}}{2}}{\displaystyle \frac{a_ia_j}{2}}{\displaystyle \frac{|2\beta +\gamma |(\omega _i\pm \omega _j)\tau _{c_0}}{\sqrt{1+((\omega _i\pm \omega _j)\tau _{c_0})^2}}},`$ (14)
$`\psi _{i\pm j}`$ $`=`$ $`(\psi _i\pm \psi _j)+\mathrm{arctan}\left({\displaystyle \frac{1}{(\omega _i\pm \omega _j)\tau _{c_0}}}\right).`$ (15)
where $`n_{ij}`$ counts the number of possible permutations given $`i`$ and $`j`$: $`n_{ij}=2`$ if $`i\ne j`$, and $`1`$ otherwise.
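A minimal numerical evaluation of eqs. (14)-(15) makes the frequency dependence explicit: the sum combination, lying at higher frequency, is stronger than the difference combination once $`(\omega _i\pm \omega _j)\tau _{c_0}`$ is of order unity or larger. The sketch below is our own; the mode periods, amplitudes and stellar parameters are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: amplitudes and phase offsets of the sum and
# difference combinations from eqs. (14)-(15). All values assumed.
tau, nl = 300.0, 13.0                  # tau_c0 [s] and |2*beta + gamma|
w_i, w_j = 2 * np.pi / 500.0, 2 * np.pi / 700.0
a_i, a_j, n_ij = 0.010, 0.008, 2       # n_ij = 2 since i != j

for sign, name in ((+1, "sum"), (-1, "difference")):
    w = w_i + sign * w_j
    amp = (n_ij / 2) * (a_i * a_j / 2) * nl * w * tau / np.sqrt(1 + (w * tau) ** 2)
    phase = np.arctan(1.0 / (w * tau))  # offset relative to psi_i +/- psi_j
    print(name, round(amp, 5), round(phase, 3))
```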
### 2.3 Angular Integration and Bolometric Corrections
Equations (14)-(15) quantify the amplitudes and the phases for a combination frequency at every point on the stellar surface. In this section, we relate them to the observable quantities.
Defining $`\mathrm{\Theta }`$ and $`\mathrm{\Phi }`$ to be the spherical coordinates in the stellar rotating frame, we adopt the following form for the flux perturbation incident upon the convective bottom,
$$\left(\frac{\delta F}{F}\right)_b=\sum _iA_iP_{\ell _i}^{m_i}(\mathrm{\Theta },\mathrm{\Phi })\mathrm{cos}(\omega _it+m_i\mathrm{\Phi }+\mathrm{\Psi }_{i_0}).$$
(16)
Here, $`P_{\ell }^me^{im\mathrm{\Phi }}=Y_{\ell }^m`$, the latter being the spherical harmonic function normalized to unity over the sphere. The $`\mathrm{\Psi }_i`$ appearing in equation (8) is now $`m_i\mathrm{\Phi }+\mathrm{\Psi }_{i_0}`$, while $`\mathrm{\Psi }_{i_0}`$ is a constant over angle and time. Similarly, the linear part of the photospheric flux variation can be written as
$$\left(\frac{\delta F}{F}\right)_{\mathrm{ph}}=\sum _ia_iP_{\ell _i}^{m_i}(\mathrm{\Theta },\mathrm{\Phi })\mathrm{cos}(\omega _it+m_i\mathrm{\Phi }+\psi _{i_0}),$$
(17)
with $`a_i`$ and $`\psi _{i_0}`$ related to $`A_i`$ and $`\mathrm{\Psi }_{i_0}`$ as in equations (10) - (11).
An observer with a line-of-sight inclined relative to the rotation axis by an angle $`\mathrm{\Theta }_0`$<sup>2</sup><sup>2</sup>2Degeneracy in $`\mathrm{\Phi }`$ implies that we can take $`\mathrm{\Phi }_0=0`$. would detect the following bolometric flux variation
$$\left(\frac{\delta f}{f}\right)_{\mathrm{bol}}=\sum _ia_ig_{\ell _i}^{m_i}(\mathrm{\Theta }_0)\mathrm{cos}(\omega _it+\psi _{i_0}),$$
(18)
where the factor $`g_{\ell }^m`$ includes effects such as geometric projection and limb-darkening. In Table 2, we present the expression for $`g`$ as well as its values for $`\ell \le 2`$.
The angular dependence of a combination frequency is described by the product of the angular dependences of its principal modes. This arises because equation (14) is valid for every point on the stellar surface. We express the corresponding disc-integrated flux variations as
$$\left(\frac{\delta f}{f}\right)_C=\sum _{i,j}a_{i\pm j}G_{\ell _i\ell _j}^{m_i\pm m_j}(\mathrm{\Theta }_0)\mathrm{cos}((\omega _i\pm \omega _j)t+\psi _{i_0\pm j_0}).$$
(19)
The expression for the $`G`$ function is presented in the appendix, together with some useful values of $`G`$.
Broad-band photometric observations produce amplitude variations that are related to the bolometric variations as $`(\delta f/f)=\alpha _\lambda (\delta f/f)_{\mathrm{bol}}`$. For ZZ Ceti stars observed in V-band, $`\alpha _V\approx 0.4`$.
The theoretical expectation value for $`R_c`$, the ratio between the observed amplitude of a combination and the observed amplitudes of its principal modes (van Kerkwijk et al. 1999), is therefore,
$`R_c`$ $`\equiv `$ $`{\displaystyle \frac{\left(\frac{\delta f}{f}\right)_{i\pm j}}{n_{ij}\left(\frac{\delta f}{f}\right)_i\left(\frac{\delta f}{f}\right)_\mathrm{j}}}`$ (20)
$`=`$ $`{\displaystyle \frac{|2\beta +\gamma |(\omega _i\pm \omega _j)\tau _{c_0}}{4\alpha _V\sqrt{1+((\omega _i\pm \omega _j)\tau _{c_0})^2}}}{\displaystyle \frac{G_{\ell _i\ell _j}^{m_i\pm m_j}}{g_{\ell _i}^{m_i}g_{\ell _j}^{m_j}}}.`$
## 3 What can be Learned?
In this section, we discuss the potential of extracting information from measurements of the combination frequencies. We first describe what information may be available, and then compare the observations of two variable white dwarfs (a DA and a DB) with our analytical results.
### 3.1 Prelude
A major difficulty of white dwarf asteroseismology lies in our inability to securely identify the spherical degree ($`\ell `$) for the pulsation modes. In the cases where combination frequencies are detected, one could use the observed values of $`R_c`$ (eq. ) to determine the $`\ell `$ value for the principal modes. To illustrate this possibility, we consider the harmonic of a $`m=0`$ principal mode. The ratio $`G_{\ell \ell }^{00}/(g_{\ell }^0g_{\ell }^0)`$ (and consequently the value of $`R_c`$) is significantly higher for $`\ell =2`$ than for $`\ell =1`$, except when $`\mathrm{\Theta }_0`$ approaches $`90^{\circ }`$ (see Fig. 2 and eq. ). This arises as the apparent amplitudes of higher $`\ell `$ modes suffer stronger cancellation when integrated over the stellar disc while the harmonics of these modes do not. Note that this is purely a geometric argument and a similar method of $`\ell `$ identification would work not only for the sum or difference combinations of two principal modes in white dwarfs, but also for other variable stars that exhibit combination frequencies and where the amplitude of a combination frequency satisfies $`a_{i\pm j}\propto a_ia_j`$ (as in eq. ) for every point on the stellar surface.
Inside the pulsational instability strip, the thermal time constant of the convection zone ($`\tau _{c_0}`$) varies monotonically and sensitively with the stellar effective temperature. The value of $`\tau _{c_0}`$ discloses the relative location of a variable in the strip. In addition, we can study convection under the white dwarf environment if we can empirically determine the $`T_{\mathrm{eff}}`$-$`\tau _{c_0}`$ relation. Time-resolved spectroscopy provides one way to measure $`\tau _{c_0}`$ . However, this technique requires large telescopes and works only for relatively bright white dwarfs. What about using the combination frequencies?
The relative phase between a combination frequency and its principal modes ($`\psi _{i_0\pm j_0}-(\psi _{i_0}\pm \psi _{j_0})`$) yields $`\tau _{c_0}`$ straightforwardly (eq. ). This phase depends on $`\tau _{c_0}`$ more sensitively at low frequency. However, care needs to be taken to avoid systematic effects that affect phase measurements adversely, such as the presence of small neighbouring periodicities that are not accounted for.
Another way to measure $`\tau _{c_0}`$ is to take the ratio between the amplitudes of the sum and the difference combinations from the same pair of principal modes (eq. ); the geometric factor cancels when one or both of $`m_i`$, $`m_j`$ is $`0`$.
$$\frac{\left(\frac{\delta f}{f}\right)_{i+j}}{\left(\frac{\delta f}{f}\right)_{i-j}}=\frac{(\omega _i+\omega _j)}{(\omega _i-\omega _j)}\frac{\sqrt{1+(\omega _i-\omega _j)^2\tau _{c_0}^2}}{\sqrt{1+(\omega _i+\omega _j)^2\tau _{c_0}^2}}\frac{G_{\ell _i\ell _j}^{m_i+m_j}}{G_{\ell _i\ell _j}^{m_i-m_j}}.$$
(21)
Note that amplitudes measured in the lower frequency region are generally less accurate due to higher noise levels.
The two dimensionless numbers, $`\beta `$ and $`\gamma `$, quantify the deepening of the convection zone when a white dwarf cools. They are therefore related to the width of the white dwarf instability strip. Let us associate the blue edge of the ZZ Ceti instability strip ($`T_{\mathrm{eff}}\approx 12,000\mathrm{K}`$) with $`\tau _{c_0}=20\mathrm{s}`$ (when the lowest order $`\ell =1`$ gravity-mode satisfies $`\omega \tau _{c_0}=1`$, see Paper I), and the red edge of the strip with $`\tau _{c_0}=1300\mathrm{s}`$ (when the $`1000\mathrm{s}`$ period mode becomes invisible at the surface, $`\omega \tau _{c_0}\approx 10`$, see Paper I). We find the width of the instability strip to be $`\approx 1000\mathrm{K}`$ when we adopt $`\beta +\gamma \approx -14`$ as in §2.1. A larger value of $`|\beta +\gamma |`$ would correspond to a narrower instability strip. These numbers can be obtained from combination frequency measurements together with other unknown quantities.
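The arithmetic behind this width estimate can be spelled out. Since $`\tau _c\propto F^{\beta +\gamma }\propto T_{\mathrm{eff}}^{4(\beta +\gamma )}`$, the fractional temperature interval follows from the ratio of the edge values of $`\tau _{c_0}`$; a one-line check (our own, using the numbers quoted above):

```python
import numpy as np

# One-line check of the strip-width estimate (our arithmetic, using the
# numbers quoted in the text): tau_c ~ T_eff^(4*(beta+gamma)), so running
# tau_c0 from 20 s to 1300 s fixes the fractional width of the strip.
beta_plus_gamma = -14.0
dlnT = np.log(1300.0 / 20.0) / (4.0 * abs(beta_plus_gamma))
print(dlnT * 12000.0)   # ~900 K, i.e. a width of order 1000 K
```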
A number of practical difficulties may arise in the actual analysis. For instance, different $`m`$ components of a gravity-mode are closely spaced in frequency and may not be resolved by observations of short duration, whereas in observations of sufficiently long duration temporal changes in the amplitudes of pulsation may occur. In the following sections, we apply our results ignoring these difficulties.
### 3.2 The DB variable GD358
For our analysis of the DB variable GD358, we use the Whole Earth Telescope (WET) data in which different $`m`$ components of the principal modes are well resolved. We assume that mode amplitudes do not change appreciably during the entire observation.
#### 3.2.1 Fine Structure in the Combination Frequencies
The sum (or difference) combination of two $`\ell =1`$ principal modes (each split into three $`m`$ components) can contain up to nine fine-splitting components that are different in frequency. The relative strengths among these components reflect their respective projection in the direction of our line-of-sight. In Table 1, we compare the observed and the theoretically expected values of these relative strengths for the sum combination of two principal modes, $`k=13`$ and $`k=15`$ (see Fig. 8 of Winget et al. 1994). We find overall agreement when the inclination angle $`\mathrm{\Theta }_0`$ falls between $`40^{\circ }`$ and $`50^{\circ }`$. Some mismatches exist. They may be caused by the fact that some combination components are too close in frequency to be resolved by the WET run. In addition, the amplitudes of the principal modes vary during the run and this may affect the comparison. We notice that the WET run lasted $`11`$ days, which is of order the natural growth time for the $`k=13`$ and $`k=15`$ modes.
#### 3.2.2 Combination Frequencies at Large
Winget et al. (1994, Table 3) determined pulsation power for the strongest component in each combination. We compare their data with theoretical estimates in Figure 3. For our analysis, we assume that all principal modes have spherical degree $`\ell =1`$ and that the strongest component in each combination is produced by the $`m=0`$ components in the principal modes. The second assumption is tested in Figure 3. As in §3.2.1, we adopt $`\mathrm{\Theta }_0=45^{\circ }`$. We further take $`|2\beta +\gamma |/\alpha _V=25`$ and $`\tau _{c_0}=300\mathrm{s}`$ to produce theoretical estimates. Such a choice of $`\tau _{c_0}`$ ensures that all gravity-modes observed in GD358 satisfy the necessary condition for overstability, $`\omega \tau _{c_0}>1`$. Varying $`\tau _{c_0}`$ from $`10\mathrm{s}`$ to $`1000\mathrm{s}`$ only changes the estimates by $`\sim 40\%`$.
Theoretical estimates based on the above choices of parameters largely reproduce the observed values. Figure 3 shows that most ($`\sim 90\%`$) of the combinations that are expected to have amplitudes above the $`1`$ mma detection limit are indeed observed. However, a few significant discrepancies are present and they merit some discussion.
The power at $`\nu =2660.84\mu `$Hz (labelled with ‘A’ in the upper panel of Fig. 3) is attributed to the $`(k=18)+(k=15)`$ combination by Winget et al. (1994). The observed amplitude is roughly twice the theoretical one. Interestingly enough, the $`(k=16)+(k=17)`$ combination lies at $`\nu =2659.43\mu `$ Hz and is expected to reach approximately the same amplitude as the former combination. This may explain the mismatch seen at ‘A’ if the two combinations are not well resolved from each other.
The combination at $`2848.28\mu `$Hz (‘B’) lies $`6\mu `$Hz away from the harmonic of the $`k=15,m=0`$ mode and is possibly the sum combination between this mode and the $`k=15,m=2`$ mode. As the $`k=15,m=0`$ mode has the greatest amplitude in the Fourier spectrum, it is surprising that we do not observe its harmonic.<sup>3</sup><sup>3</sup>3Even though $`k=15`$ is the strongest mode, its harmonic is expected to have lower amplitude than, for instance, the combination $`(k=15)+(k=17)`$ due to the factor $`n_{ij}`$ in equation (14). This problem is not unique to GD 358, as our investigation of ZZ Psc (§3.3) finds. Full numerical simulations of the convective response may provide a solution to this problem (Ising & Koester 1999).
We cannot explain the disagreement seen at $`2946.65\mu `$Hz (case ‘C’) either.
Combinations at low frequencies are potentially most rewarding for estimating $`\tau _{c_0}`$. However, the signal-to-noise ratio is lower at these frequencies.
### 3.3 The DA variable ZZ Psc
ZZ Psc (aka G29-38) is a relatively cool DA variable exhibiting large pulsation amplitudes that vary in time . We investigate the combination frequencies in this star using a set of time-resolved spectroscopy data taken with the Keck II telescope . The observation lasted five hours and could not resolve the rotational splitting. However, the signal-to-noise ratio is high and phases of pulsation are measured from the data. These can be compared with equation (15).
Among the handful of eigenmodes seen in the spectra, one (marked as ‘F4’) stands out as an $`\mathrm{}=2`$ mode (Clemens et al. 1999), while all others have $`\mathrm{}=1`$. This result is supported by the fact that ‘F4’ is associated with a relatively large line-of-sight velocity (van Kerkwijk et al. 1999). In this section, we show that the combination frequencies also provide confirmation for this $`\mathrm{}`$-identification.
We first assume all observed principal modes are $`\ell =1`$, $`m=0`$. Taking $`\tau _{c_0}=250\mathrm{s}`$,<sup>4</sup><sup>4</sup>4This choice of $`\tau _{c_0}`$ ensures that all observed g-modes satisfy $`\omega \tau _{c_0}>1`$, and it puts ZZ Psc close to the red edge of the DA instability strip. $`|2\beta +\gamma |=10`$ and $`\mathrm{\Theta }_0=30^{\circ }`$,<sup>5</sup><sup>5</sup>5In this case, we cannot constrain $`|2\beta +\gamma |`$ and $`\mathrm{\Theta }_0`$ separately. we produce a theoretical amplitude spectrum for the combination frequencies (Fig. 4), onto which we plot the observed amplitudes as well as their error bars. The comparison is satisfactory; all combinations that are estimated to lie above the noise level are indeed detected. However, it is surprising that the three lowest-frequency combinations have much higher amplitudes than expected. As signals at very low frequencies may suffer from larger noise, this discrepancy needs to be confirmed by future observations.
In Figure 4, the combinations noted by ‘A’ and ‘D’ are respectively the first harmonics of the second largest and the largest g-modes. Equation (14) under-predicts the amplitude for ‘A’, and over-predicts it for ‘D’. The latter problem, interestingly enough, appears in GD358 as well (see Fig. 3). This may indicate that our perturbation analysis fails at large mode amplitudes. The combination at ‘C’ (sum of the strongest and the second strongest modes) falling much below the theoretical estimate jibes with this suggestion.
Clemens et al. (1999) and van Kerkwijk et al. (1999) argued that mode ‘F4’ has spherical degree $`\ell =2`$. If this is true, at $`\mathrm{\Theta }_0=30^{\circ }`$, its harmonic would have an amplitude $`\sim 6`$ times larger than if it were $`\ell =1`$ (see §3.1 and Fig. 2). This could explain the disagreement at ‘B’. In the lower panel of Figure 5, we show theoretical $`R_c`$ values for combinations of $`\ell =1`$ and $`2`$ principal modes. The observed $`R_c`$ value for ‘B’ is consistent with it being the harmonic of an $`\ell =2`$ mode. This is a piece of independent evidence supporting the $`\ell =2`$ identification for mode ‘F4’. At $`\mathrm{\Theta }_0=30^{\circ }`$, $`R_c`$ values for combinations that involve ‘F4’ and another mode do not differ appreciably from other combinations (see Fig. 2).
In the upper panel of Figure 5, we study the phase difference between a combination and its principal modes. From equation (15), one expects most of these phase differences to be positive and to lie close to $`0`$. Indeed, the observed phase differences cluster closely around $`0`$. However, in detail the fit is not good. This may be related to the unaccounted-for periodicities which influence the phase determinations (van Kerkwijk et al. 1999).
Employing data from WET and other observations (see Kleinman et al. 1998), Vuille (2000b) analyzed the relative phases of combination frequencies in ZZ Psc. He found $`\psi _{i\pm j}-(\psi _i\pm \psi _j)`$ to be small but predominantly positive. This reflects the fact that pulsation light-curves have a sharper ascent than descent. Vuille concluded that these relative phases remain fairly constant over the years, a finding consistent with equation (15).
## 4 Conclusions
### 4.1 DA & DB Variables
Our analysis leads to two key formulae (eqs. and ) that describe the strength and the phase of the combination frequencies relative to their principal modes.
A few stellar parameters enter these formulae. They are the thermal constant of the stellar convection zone at equilibrium ($`\tau _{c_0}`$), the rate of deepening of the convection zone with cooling of the star (quantified by $`|2\beta +\gamma |`$), and the inclination angle between the observer’s line of sight and the stellar pulsation axis ($`\mathrm{\Theta }_0`$). It is becoming possible to use the combination frequencies to constrain these stellar parameters. We find that for both GD358 and ZZ Psc, the observed amplitude spectra can be roughly reproduced using reasonable choices of the above parameters. The same choices can also explain the values for the dimensionless numbers ($`a,b`$ and $`c`$) Brickhill (1992) summarized from his numerical study. The $`\ell `$ and $`m`$ values of the eigenmodes also enter into these formulae. This presents the potential of determining the $`\ell `$ values of the principal modes using the combination frequencies. An $`\ell =2`$ mode is expected to have a stronger harmonic than an $`\ell =1`$ mode, and this is indeed observed in ZZ Psc.
When analyzing observed combination frequencies, we have ignored amplitude variability during a long observing run, or in the case of a short run, have assumed that modes are axisymmetric with respect to the pulsation axis ($`m=0`$). The failure of these assumptions may account for some of the discrepancies between observation and theory and may prevent us from accurately determining stellar parameters. More suitable data sets might yield more conclusive information.
We find that theory over-predicts the amplitude in the harmonic of the strongest pulsation mode in the two stars we considered. We suspect that it results from the stronger nonlinearity associated with the largest mode.
Combination frequencies are produced by the surface convection zone in a pulsating white dwarf. The photosphere in these stars is not thermally capable of distorting the light curve. We therefore expect equations (20) and (15) to hold for all wavelengths.
### 4.2 Other Types of Variables
Combination frequencies have also been reported in two PG1159 variables (PG1707+427, Fontaine et al. 1991; HS2324+3944, Silvotti et al. 1999). Presumably, these hot white dwarfs do not have surface convection zones. What could be distorting the light curves?
Could a radiative, partially ionising layer produce the distortions? Such a layer is believed to exist in the upper atmosphere of PG1159 variables and is believed to be responsible for driving the observed pulsations. It is similar to the surface convection zones in DA and DB variables in that it retains heat when warmer, and releases heat when cooler. However, unlike in the case of the convection zones, the amount of heat retained (or released) by the partially ionising region cannot be significantly modulated throughout the pulsation cycle by the presence of other pulsation modes. This is because the reaction time of the ionising region is roughly the local thermal relaxation time, which is of the same order as the periods of overstable modes. Thus, we cannot explain the combination frequencies in PG 1159 stars.
We note that in other types of small amplitude pulsators, e.g., $`\delta `$-Scuti stars, sdB variables, and $`\gamma `$-Doradus stars, combination frequencies have also been reported. We conjecture that a thin surface convection zone is present in these variables and is capable of exciting pulsation modes, as well as distorting the light curves.
The author would like to acknowledge the many beneficial comments and suggestions by Drs. Peter Goldreich, Joerg Ising, Marten van Kerkwijk, Scot Kleinman and Francois Vuille.
## Appendix A Angular Integration
Let $`(\mathrm{\Theta },\mathrm{\Phi })`$ be the spherical coordinate system defined by the stellar rotation axis, and $`(\theta ,\varphi )`$ be that defined by the observer’s line-of-sight. Let $`\mathrm{\Theta }_0`$ be the angle between this line-of-sight and the rotation axis. The two coordinate systems are related by
$`\mathrm{cos}\mathrm{\Theta }`$ $`=`$ $`\mathrm{sin}\mathrm{\Theta }_0\mathrm{sin}\theta \mathrm{cos}\varphi +\mathrm{cos}\mathrm{\Theta }_0\mathrm{cos}\theta ,`$
$`\mathrm{sin}\mathrm{\Theta }\mathrm{cos}\mathrm{\Phi }`$ $`=`$ $`-\mathrm{cos}\mathrm{\Theta }_0\mathrm{sin}\theta \mathrm{cos}\varphi +\mathrm{sin}\mathrm{\Theta }_0\mathrm{cos}\theta ,`$
$`\mathrm{sin}\mathrm{\Theta }\mathrm{sin}\mathrm{\Phi }`$ $`=`$ $`\mathrm{sin}\theta \mathrm{sin}\varphi .`$ (22)
To obtain the photometric pulsation amplitude, we integrate the photospheric flux variation over the visible disc,
$$\left(\frac{\delta f}{f}\right)=\frac{1}{2\pi }\int _0^{2\pi }𝑑\varphi \int _0^1h(\mu )\mu 𝑑\mu \left(\frac{\delta F}{F}\right)(\mathrm{\Theta },\mathrm{\Phi },t),$$
(23)
where $`\mu =\mathrm{cos}\theta `$, and $`h(\mu )`$ is the limb-darkening function normalized by $`\int _0^1h(\mu )\mu d\mu =1`$. For our exercise, we adopt the Eddington limb-darkening law $`h(\mu )=1+\frac{3}{2}\mu `$ (see, e.g., Shu 1991), as is appropriate for a grey atmosphere.
It is convenient to define
$$g_{\ell }^m(\mathrm{\Theta }_0)\equiv \frac{1}{2\pi }\int _0^{2\pi }d\varphi \int _0^1h(\mu )\mu \,d\mu \,\mathrm{Re}[Y_{\ell }^m(\mathrm{\Theta },\mathrm{\Phi })].$$
(24)
We list values of $`g`$ in Table 2 for $`\ell \le 2`$.
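For readers who wish to reproduce Table 2, the integral (24) is straightforward to evaluate by numerical quadrature. The Python sketch below is ours, not part of the original analysis; it assumes SciPy's argument convention for $`Y_{\ell }^m`$ and the rotation convention of equation (22).

```python
import numpy as np
from scipy.special import sph_harm
from scipy.integrate import dblquad

def g(ell, m, Theta0):
    """Disc-averaging factor g_ell^m(Theta0) of eq. (24), Eddington limb darkening."""
    def integrand(mu, phi):
        st = np.sqrt(1.0 - mu**2)                     # sin(theta), with mu = cos(theta)
        # coordinate rotation of eq. (22), from line-of-sight to rotation axis
        cosT = np.sin(Theta0)*st*np.cos(phi) + np.cos(Theta0)*mu
        sTcP = np.cos(Theta0)*st*np.cos(phi) - np.sin(Theta0)*mu
        sTsP = st*np.sin(phi)
        Theta = np.arccos(np.clip(cosT, -1.0, 1.0))
        Phi = np.arctan2(sTsP, sTcP)
        h = 1.0 + 1.5*mu                              # Eddington limb-darkening law
        # NB: scipy.special.sph_harm takes (m, ell, azimuthal angle, polar angle)
        return h * mu * np.real(sph_harm(m, ell, Phi, Theta))
    val, _ = dblquad(integrand, 0.0, 2.0*np.pi, 0.0, 1.0)
    return val / (2.0 * np.pi)
```

A quick sanity check: for $`\ell =m=0`$ the routine must return $`1/\sqrt{4\pi }`$ independently of inclination, since $`Y_0^0`$ is constant and $`h(\mu )`$ is normalized.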
The angular dependence of a combination frequency is described by the product of the angular dependences of its principal modes. Its photometric amplitude is related to its photospheric amplitude by the following function,
$`G_{\ell _i\ell _j}^{m_i\pm m_j}`$ $`\equiv `$ $`{\displaystyle \frac{1}{2\pi }}{\displaystyle \int _0^{2\pi }}d\varphi {\displaystyle \int _0^1}h(\mu )\mu \,d\mu `$ (25)
$`\times P_{\ell _i}^{m_i}(\mathrm{cos}\mathrm{\Theta })P_{\ell _j}^{m_j}(\mathrm{cos}\mathrm{\Theta })\mathrm{cos}((m_i\pm m_j)\mathrm{\Phi }).`$
It is easiest to evaluate the $`G`$ function by reducing the above product of the Legendre functions into a linear sum of such functions. We present values of the ratio $`G_{\ell _i\ell _j}^{m_i+m_j}/(g_{\ell _i}^{m_i}g_{\ell _j}^{m_j})`$ in Table 3 for $`\ell _i=\ell _j=1`$. Values of $`G_{\ell _i\ell _j}^{m_i-m_j}/(g_{\ell _i}^{m_i}g_{\ell _j}^{m_j})`$ can be trivially obtained from the same table by changing the sign of $`m_j`$. For our discussion in §3.1, we also need
$`{\displaystyle \frac{G_{\mathrm{1\hspace{0.17em}\hspace{0.17em}2}}^{0\pm 0}}{g_1^0g_2^0}}`$ $`=`$ $`{\displaystyle \frac{1.97+0.82\mathrm{cos}^2\mathrm{\Theta }_0}{3\mathrm{cos}^2\mathrm{\Theta }_0-1}},`$
$`{\displaystyle \frac{G_{\mathrm{2\hspace{0.17em}\hspace{0.17em}2}}^{0\pm 0}}{g_2^0g_2^0}}`$ $`=`$ $`{\displaystyle \frac{8.39+2.51\mathrm{cos}(2\mathrm{\Theta }_0)+0.22\mathrm{cos}(4\mathrm{\Theta }_0)}{(3\mathrm{cos}^2\mathrm{\Theta }_0-1)^2}}.`$ (26)
# Evidence for a continuum limit in causal set dynamics
## 1 Introduction
In an earlier paper we investigated a type of causal set dynamics that can be described as a (classically) stochastic process of growth or “accretion”. In a language natural to that dynamics, the passage of time consists in the continual birth of new elements of the causal set and the history of a sequence of such births can be represented as an upward path through a poset of all finite causal sets. We called such a stochastic process a sequential growth dynamics because the elements arise singly, rather than in pairs or larger multiplets.
A sequential description of this sort is advantageous in representing the future as developing out of the past, but on the other hand it could seem to rely on an external parameter time (the “time” in which the growth occurs), thereby violating the principle that physical time is encoded in the intrinsic order-relation of the causal set and nothing else. If physically real, such a parameter time would yield a distinguished labeling of the elements and thereby a notion of “absolute simultaneity”, in contradiction to the lessons of both special and general relativity. To avoid such a consequence, we postulated a principle of discrete general covariance, according to which no probability of the theory can depend on — and no physically meaningful question can refer to — the imputed order of births, except insofar as that order reflects the intrinsic precedence relation of the causal set itself.
To discrete general covariance, we added two other principles that we called Bell causality and internal temporality. The first is a discrete analog of the condition that no influence can propagate faster than light, and the second simply requires that no element be born to the past of any existing element.<sup>1</sup><sup>1</sup>1This last condition guarantees that the “parameter time” of our stochastic process is compatible with physical temporality, as recorded in the order relation $`\prec `$ that gives the causal set its structure. In a broader sense, general covariance itself is also an aspect of internal temporality, since it guarantees that the parameter time adds nothing to the relation $`\prec `$. These principles led us almost uniquely to a family of dynamical laws (stochastic processes) parameterized by a countable sequence of coupling constants $`q_n`$. In addition to this generic family, there are some exceptional families of solutions, but we conjecture that they are all singular limits of the generic family. We have checked in particular that “originary percolation” (see section 2) is such a limit.<sup>2</sup><sup>2</sup>2In the notation of , it is the $`A\to \infty `$ limit of the dynamics given by $`t_0=1`$, $`t_n=At^n`$, $`n=1,2,3,\dots `$.
Now among these dynamical laws, the one resulting from the choice $`q_n=q^n`$ is one of the easiest to work with, both conceptually and for purposes of computer simulation. Defined by a single real parameter $`q\in [0,1]`$, it is described in more detail in Section 2 below. In , we referred to it as transitive percolation because it can be interpreted in terms of a random “turning on” of nonlocal bonds (with probability $`p=1-q`$) in a one-dimensional lattice. Another thing making it an attractive special case to work with is the availability in the mathematics literature of a number of results governing the asymptotic behavior of posets generated in this manner .
Aside from its convenience, this percolation dynamics, as we will call it, possesses other distinguishing features, including an underlying time-reversal invariance and a special relevance to causal set cosmology, as we describe briefly below. In this paper, we search for evidence of a continuum limit of percolation dynamics.
One might question whether a continuum limit is even desirable in a fundamentally discrete theory, but a continuum approximation in a suitable regime is certainly necessary if the theory is to reproduce known physics. Given this, it seems only a small step to a rigorous continuum limit, and conversely, the existence of such a limit would encourage the belief that the theory is capable of yielding continuum physics with sufficient accuracy.
Perhaps an analogy with kinetic theory can guide us here. In quantum gravity, the discreteness scale is set, presumably, by the Planck length $`l=(\kappa \hbar )^{1/2}`$ (where $`\kappa =8\pi G`$), whose vanishing therefore signals a continuum limit. In kinetic theory, the discreteness scales are set by the mean free path $`\lambda `$ and the mean free time $`\tau `$, both of which must go to zero for a description by partial differential equations to become exact. Corresponding to these two independent length and time scales are two “coupling constants”: the diffusion constant $`D`$ and the speed of sound $`c_{\mathrm{sound}}`$. Just as the value of the gravitational coupling constant $`G\hbar `$ reflects (presumably) the magnitude of the fundamental spacetime discreteness scale, so the values of $`D`$ and $`c_{\mathrm{sound}}`$ reflect the magnitudes of the microscopic parameters $`\lambda `$ and $`\tau `$ according to the relations
$$D\sim \frac{\lambda ^2}{\tau },c_{\mathrm{sound}}\sim \frac{\lambda }{\tau }$$
or conversely
$$\lambda \sim \frac{D}{c_{\mathrm{sound}}},\tau \sim \frac{D}{c_{\mathrm{sound}}^2}.$$
In a continuum limit of kinetic theory, therefore, we must have either $`D\to 0`$ or $`c_{\mathrm{sound}}\to \infty `$. In the former case, we can hold $`c_{\mathrm{sound}}`$ fixed, but we get a purely mechanical macroscopic world, without diffusion or viscosity. In the latter case, we can hold $`D`$ fixed, but we get a “purely diffusive” world with mechanical forces propagating at infinite speed. In each case we get a well defined — but defective — continuum physics, lacking some features of the true, atomistic world.
If we can trust this analogy, then something very similar must hold in quantum gravity. To send $`l`$ to zero, we must make either $`G`$ or $`\hbar `$ vanish. In the former case, we would expect to obtain a quantum world with the metric decoupled from non-gravitational matter; that is, we would expect to get quantum field theory in a purely classical background spacetime solving the source-free Einstein equations. In the latter case, we would expect to obtain classical general relativity. Thus, there might be two distinct continuum limits of quantum gravity, each physically defective in its own way, but nonetheless well defined.
For our purposes in this paper, the important point is that, although we would not expect quantum gravity to exist as a continuum theory, it could have limits which do, and one of these limits might be classical general relativity. It is thus sensible to inquire whether one of the classical causal set dynamics we have defined describes classical spacetimes. In the following, we make a beginning on this question by asking whether the special case of “percolated causal sets”, as we will call them, admits a continuum limit at all.
Of course, the physical content of any continuum limit we might find will depend on what we hold fixed in passing to the limit, and this in turn is intimately linked to how we choose the coarse-graining procedure that defines the effective macroscopic theory whose existence the continuum limit signifies. Obviously, we will want to send $`N\to \infty `$ for any continuum limit, but it is less evident how we should coarse-grain and what coarse grained parameters we want to hold fixed in taking the limit. Indeed, the appropriate choices will depend on whether the macroscopic spacetime region we have in mind is, to take some naturally arising examples, ($`i`$) a fixed bounded portion of Minkowski space of some dimension, or ($`ii`$) an entire cycle of a Friedmann universe from initial expansion to final recollapse, or ($`iii`$) an $`N`$-dependent portion of an unbounded spacetime $`M`$ that expands to encompass all of $`M`$ as $`N\to \infty `$. In the sequel, we will have in mind primarily the first of the three examples just listed. Without attempting a definitive analysis of the coarse-graining question, we will simply adopt the simplest definitions that seem to us to be suited to this example. More specifically, we will coarse-grain by randomly selecting a sub-causal-set of a fixed number of elements, and we will choose to hold fixed some convenient invariants of that sub-causal-set, one of which can be interpreted<sup>3</sup><sup>3</sup>3This interpretation is strictly correct only if the causal set forms an interval or “Alexandrov neighborhood” within the spacetime. as the dimension of the spacetime region it constitutes. As we will see, the resulting scheme has much in common with the kind of coarse-graining that goes into the definition of renormalizability in quantum field theory. For this reason, we believe it can serve also as an instructive “laboratory” in which this concept, and related concepts like “running coupling constant” and “non-trivial fixed point”, can be considered from a fresh perspective.
In the remaining sections of this paper we: define transitive percolation dynamics more precisely; specify the coarse-graining procedure we have used; report on the simulations we have run looking for a continuum limit in the sense thereby defined; and offer some concluding comments.
### 1.1 Definitions used in the sequel
Causal set theory postulates that spacetime, at its most fundamental level, is discrete, and that its macroscopic geometrical properties reflect a deep structure which is purely order theoretic in nature. This deep structure is taken to be a partial order and called a causal set (or “causet” for short). For an introduction to causal set theory, see . In this section, we merely recall some definitions which we will be using in the sequel.
A (partial) order or poset is a set $`S`$ endowed with a relation $`\prec `$ which is:
transitive $`x,y,z\in S:\ x\prec y\text{ and }y\prec z\Rightarrow x\prec z`$
acyclic $`x,y\in S:\ x\prec y\Rightarrow y\nprec x`$
irreflexive $`x\in S:\ x\nprec x`$
(Irreflexivity is merely a convention; with it, acyclicity is actually redundant.) For example, the events of Minkowski space (in any dimension) form a poset whose order relation is the usual causal order. In an order $`S`$, the *interval* $`\mathrm{int}(x,y)`$ is defined to be
$$\mathrm{int}(x,y)=\{z\in S\,|\,x\prec z\prec y\}.$$
An order is said to be *locally finite* if all its intervals are finite (have finite cardinality). A *causal set* is a locally finite order.
It will be helpful to have names for some small causal sets. Figure 1 provides such names for the causal sets with three or fewer elements.
## 2 The dynamics of transitive percolation
Regarded as a sequential growth dynamics of the sort derived in , transitive percolation is described by one free parameter $`q`$ such that $`q_n=q^n`$. This is equivalent (at stage $`N`$ of the growth process) to using the following “percolation” algorithm to generate a random causet.
1. Start with $`N`$ elements labeled $`0,1,2,\dots ,N-1`$.
2. With a fixed probability $`p`$ $`(=1-q)`$, introduce a relation $`i\prec j`$ between every pair of elements labeled $`i`$ and $`j`$, where $`i\in \{0,\dots ,N-2\}`$ and $`j\in \{i+1,\dots ,N-1\}`$.
3. Form the transitive closure of these relations (e.g. if $`2\prec 5`$ and $`5\prec 8`$ then enforce that $`2\prec 8`$.)
Given the simplicity of this dynamical model, both conceptually and from an algorithmic standpoint, it offers a “stepping stone” allowing us to look into some general features of causal set dynamics. (The name “percolation” comes from thinking of a relation $`i\prec j`$ as a “bond” or “channel” between $`i`$ and $`j`$.)
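To make the algorithm completely concrete, here is a minimal sketch of our own in Python (not part of the original text); the causet is stored as a boolean matrix $`C`$ with $`C[i,j]`$ true iff $`i\prec j`$.

```python
import numpy as np

def percolate(N, p, rng=None):
    """Generate a random N-element causet by transitive percolation."""
    rng = rng or np.random.default_rng()
    # step 2: switch on each bond i -> j (i < j) independently with probability p
    C = np.triu(rng.random((N, N)) < p, k=1)
    # step 3: transitive closure (Warshall's algorithm)
    for k in range(N):
        C |= np.outer(C[:, k], C[k, :])
    return C
```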
There exists another model which is very similar to transitive percolation, called “originary transitive percolation”. The rule for randomly generating a causet is the same as for transitive percolation, except that each new element is required to be related to at least one existing element. Algorithmically, we generate potential elements one by one, exactly as for plain percolation, but discard any such element which would be unrelated to all previous elements. Causets formed with this dynamics always have a single minimal element, an “origin”.
Recent work by Dou suggests that originary percolation might have an important role to play in cosmology. Notice first that, if a given cosmological “cycle” ends with the causet collapsing down to a single element, then the ensuing re-expansion is necessarily given by an originary causet. Now, in the limited context of percolation dynamics, Alon et al. have proved rigorously that such cosmological “bounces” (which they call posts) occur with probability 1 (if $`p>0`$), from which it follows that there are infinitely many cosmological cycles, each cycle but the first having the dynamics of originary percolation. For more general choices of the dynamical parameters $`q_n`$ of , posts can again occur, but now the $`q_n`$ take on new effective values in each cycle, related to the old ones by the action of a sort of “cosmological renormalization group”; and Dou has found evidence that originary percolation is a “stable fixed point” of this action, meaning that the universe would tend to evolve toward this behavior, no matter what dynamics it began with.
It would thus be of interest to investigate the continuum limit of originary percolation as well as plain percolation. In the present paper, however, we limit ourselves to the latter type, which we believe is more appropriate (albeit not fully appropriate for reasons discussed in the conclusion) in the context of spacetime regions of sub-cosmological scale.
## 3 The critical point at $`p=0`$, $`N=\infty `$
In the previous section we have introduced a model of random causets, which depends on two parameters, $`p\in [0,1]`$ and $`N\in \text{ℕ}`$. For a given $`p`$, the model defines a probability distribution on the set of $`N`$-element causets.<sup>4</sup><sup>4</sup>4Strictly speaking this distribution has gauge-invariant meaning only in the limit $`N\to \infty `$ ($`p`$ fixed); for it is only insofar as the growth process “runs to completion” that generally covariant questions can be asked. Notice that this limit is inherent in causal set dynamics itself, and has nothing to do with the continuum limit we are concerned with herein, which sends $`p`$ to zero as $`N\to \infty `$. For $`p=0`$, the only causet with nonzero probability, obviously, is the $`N`$-antichain. Now let $`p>0`$. With a little thought, one can convince oneself that for $`N\to \infty `$, the causet will look very much like a chain. Indeed it has been proved (see also ) that, as $`N\to \infty `$ with $`p`$ fixed at some (arbitrarily small) positive number, $`r\to 1`$ in probability, where
$$r\equiv \frac{R}{N(N-1)/2}=\frac{R}{\left(\genfrac{}{}{0pt}{}{N}{2}\right)},$$
$`R`$ being the number of relations in the causet, i.e. the number of pairs of causet elements $`x`$, $`y`$ such that $`x\prec y`$ or $`y\prec x`$. Note that the $`N`$-chain has the greatest possible number $`\left(\genfrac{}{}{0pt}{}{N}{2}\right)`$ of relations, so $`r\to 1`$ gives a precise meaning to “looking like a chain”. We call $`r`$ the *ordering fraction* of the causal set, following .
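In the matrix representation of the sketch above, the ordering fraction is a one-line computation:

```python
def ordering_fraction(C):
    """r = R / (N choose 2): the fraction of pairs that are causally related."""
    N = C.shape[0]
    return 2.0 * int(C.sum()) / (N * (N - 1))
```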
We see that for $`N\to \infty `$, there is a change in the qualitative nature of the causet as $`p`$ varies away from zero, and the point $`p=0,N=\infty `$ (or $`p=1/N=0`$) is in this sense a critical point of the model. It is the behavior of the model near this critical point which will concern us in this paper.
## 4 Coarse graining
An advantageous feature of causal sets is that there exists for them a simple yet precise notion of coarse graining. A coarse grained approximation to a causet $`C`$ can be formed by selecting a sub-causet $`C^{}`$ at random, with equal selection probability for each element, and with the causal order of $`C^{}`$ inherited directly from that of $`C`$ (i.e. $`x\prec y`$ in $`C^{}`$ if and only if $`x\prec y`$ in $`C`$.)
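Because coarse graining is nothing but uniform random selection, it too is essentially a one-liner in the same matrix representation (our sketch):

```python
def coarse_grain(C, m, rng=None):
    """Select a random m-element sub-causet of C, with the induced order."""
    rng = rng or np.random.default_rng()
    keep = np.sort(rng.choice(C.shape[0], size=m, replace=False))
    return C[np.ix_(keep, keep)]
```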
For example, let us start with the $`20`$ element causet $`C`$ shown in Figure 2 (which was percolated using $`p=0.25`$), and successively coarse grain it down to causets of 10, 5 and 3 elements.
We see that, at the largest scale shown (i.e. the smallest number of remaining elements), $`C`$ has coarse-grained in this instance to the 3-element “V” causet. Of course, coarse graining itself is a random process, so from a single causet of $`N`$ elements, it gives us in general, not another single causet, but a probability distribution on the causets of $`m<N`$ elements.
A noteworthy feature of this definition of coarse graining, which in some ways is similar to what is often called “decimation” in the context of spin systems, is the random selection of a subset. In the absence of any background lattice structure to refer to, no other possibility for selecting a sub-causet is evident. Random selection is also recommended strongly by considerations of Lorentz invariance . The fact that a coarse grained causet is automatically another causet will make it easy for us to formulate precise notions of continuum limit, running of the coupling constant $`p`$, etc. In this respect, we believe that this model combines precision with novelty in such a manner as to furnish an instructive illustration of concepts related to renormalizability, independently of its application to quantum gravity. We remark in this connection, that transitive percolation is readily embedded in a “two-temperature” statistical mechanics model, and as such, happens also to be exactly soluble in the sense that the partition function can be computed exactly .
## 5 The large scale effective theory
In section 2 we described a “microscopic” dynamics for causal sets (that of transitive percolation) and in section 4 we defined a precise notion of coarse graining (that of random selection of a sub-causal-set). On this basis, we can produce an effective “macroscopic” dynamics by imagining that a causet $`C`$ is first percolated with $`N`$ elements and then coarse-grained down to $`m<N`$ elements. This two-step process constitutes an effective random procedure for generating $`m`$ element causets depending (in addition to $`m`$) on the parameters $`N`$ and $`p`$. In causal set theory, number of elements corresponds to spacetime volume, so we can interpret $`N/m`$ as the factor by which the “observation scale” has been increased by the coarse graining. If, then, $`V_0`$ is the macroscopic volume of the spacetime region constituted by our causet, and if we take $`V_0`$ to be fixed as $`N\to \infty `$, then our procedure for generating causets of $`m`$ elements provides the effective dynamics at volume-scale $`V_0/m`$ (i.e. length scale $`(V_0/m)^{1/d}`$ for a spacetime of dimension $`d`$).
What does it mean for our effective theory to have a continuum limit in this context? Our stochastic microscopic dynamics gives, for each choice of $`p`$, a probability distribution on the set of causal sets $`C`$ with $`N`$ elements, and by choosing $`m`$, we determine at which scale we wish to examine the corresponding effective theory. This effective theory is itself just a probability distribution $`f_m`$ on the set of $`m`$-element causets, and so our dynamics will have a well defined continuum limit if there exists, as $`N\to \infty `$, a trajectory $`p=p(N)`$ along which the corresponding probability distributions $`f_m`$ on coarse grained causets approach fixed limiting distributions $`f_m^{\infty }`$ for all $`m`$. The limiting theory in this sense is then a sequence of effective theories, one for each $`m`$, all fitting together consistently. (Thanks to the associative (semi-group) character of our coarse-graining procedure, the existence of a limiting distribution for any given $`m`$ implies its existence for all smaller $`m`$. Thus it suffices that a limiting distribution $`f_m`$ exist for $`m`$ arbitrarily large.) In general there will exist not just a single such trajectory $`p=p(N)`$, but a one-parameter family of them (corresponding to the one real parameter $`p`$ that characterizes the microscopic dynamics at any fixed $`N`$), and one may wonder whether all the trajectories will take on the same asymptotic form as they approach the critical point $`p=1/N=0`$.
Consider first the simplest nontrivial case, $`m=2`$. Since there are only two causal sets of size two, the 2-chain and the 2-antichain, the distribution $`f_2`$ that gives the “large scale physics” in this case is described by a single number which we can take to be $`f_2(\text{2-chain})`$, the probability of obtaining a 2-chain rather than a 2-antichain. (The other probability, $`f_2(\text{2-antichain})`$, is of course not independent, since classical probabilities must add up to unity.)
Interestingly enough, the number $`f_2(\text{2-chain})`$ has a direct physical interpretation in terms of the Myrheim-Meyer dimension of the fine-grained causet $`C`$. Indeed, it is easy to see that $`f_2(\text{2-chain})`$ is nothing but the expectation value of what we called above the “ordering fraction” of $`C`$. But the ordering fraction, in turn, determines the Myrheim-Meyer dimension $`d`$ that indicates the dimension of the Minkowski spacetime $`\text{𝕄}^d`$ (if any) in which $`C`$ would embed faithfully as an interval . Thus, by coarse graining down to two elements, we are effectively measuring a certain kind of spacetime dimensionality of $`C`$. In practice, we would not expect $`C`$ to embed faithfully without some degree of coarse-graining, but the original $`r`$ would still provide a good dimension estimate since it is, on average, coarse-graining invariant.
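For reference, the expected ordering fraction of an interval in $`\text{𝕄}^d`$ is $`<r>=\mathrm{\Gamma }(d+1)\mathrm{\Gamma }(d/2)/(2\mathrm{\Gamma }(3d/2))`$, which gives $`1/2`$ at $`d=2`$ and $`1/10`$ at $`d=4`$, the two values that recur below. Inverting this relation numerically yields the Myrheim-Meyer dimension estimate; the following root-finding sketch is ours:

```python
from math import gamma
from scipy.optimize import brentq

def myrheim_meyer_dimension(r):
    """Invert f(d) = Gamma(d+1)*Gamma(d/2) / (2*Gamma(3*d/2)) = <r> for d."""
    f = lambda d: gamma(d + 1.0) * gamma(d / 2.0) / (2.0 * gamma(1.5 * d)) - r
    return brentq(f, 0.1, 20.0)   # f decreases monotonically over this bracket

# myrheim_meyer_dimension(0.5) -> 2.0,  myrheim_meyer_dimension(0.1) -> 4.0
```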
As we begin to consider coarse-graining to sizes $`m>2`$, the degree of complication grows rapidly, simply because the number of partial orders defined on $`m`$ elements grows rapidly with $`m`$. For $`m=3`$ there are five possible causal sets: the 3-chain, the “V”, the “$`\mathrm{\Lambda }`$”, the “L” (a 2-chain together with an unrelated element), and the 3-antichain. Thus the effective dynamics at this “scale” is given by five probabilities (so four free parameters). For $`m=4`$ there are sixteen probabilities, for $`m=5`$ there are sixty three, and for $`m=6`$, 7 and 8, the number of probabilities is respectively 318, 2045 and 16999.
## 6 Evidence from simulations
In this section, we report on some computer simulations that address directly the question whether transitive percolation possesses a continuum limit in the sense defined above. In a subsequent paper, we will report on simulations addressing the subsidiary question of a possible scaling behavior in the continuum limit.
In order that a continuum limit exist, it must be possible to choose a trajectory for $`p`$ as a function of $`N`$ so that the resulting coarse-grained probability distributions, $`f_1`$, $`f_2`$, $`f_3`$, …, have well defined limits as $`N\to \infty `$. To study this question numerically, one can simulate transitive percolation using the algorithm described in Section 2, while choosing $`p`$ so as to hold constant (say) the $`m=2`$ distribution $`f_2`$ ($`f_1`$ being trivial). Because of the way transitive percolation is defined, it is intuitively obvious that $`p`$ can be chosen to achieve this, and that in doing so, one leaves $`p`$ with no further freedom. The decisive question then is whether, along the trajectory thereby defined, the higher distribution functions, $`f_3`$, $`f_4`$, etc. all approach nontrivial limits.
As we have already mentioned, holding $`f_2`$ fixed is the same thing as holding fixed the expectation value $`<r>`$ of the ordering fraction $`r=R/\left(\genfrac{}{}{0pt}{}{N}{2}\right)`$. To see in more detail why this is so, consider the coarse-graining that takes us from the original causet $`C_N`$ of $`N`$ elements to a causet $`C_2`$ of two elements. Since coarse-graining is just random selection, the probability $`f_2(\text{2-chain})`$ that $`C_2`$ turns out to be a 2-chain is just the probability that two elements of $`C_N`$ selected at random form a 2-chain rather than a 2-antichain. In other words, it is just the probability that two elements of $`C_N`$ selected at random are causally related. Plainly, this is the same as the fraction of pairs of elements of $`C_N`$ such that the two members of the pair form a relation $`x\prec y`$ or $`y\prec x`$. Therefore, the ordering fraction $`r`$ equals the probability of getting a 2-chain when coarse graining $`C_N`$ down to two elements; and $`f_2(\text{2-chain})=<r>`$, as claimed.
This reasoning illustrates, in fact, how one can in principle determine any one of the distributions $`f_m`$ by answering the question, “What is the probability of getting this particular $`m`$-element causet from this particular $`N`$-element causet if you coarse grain down to $`m`$ elements?” To compute the answer to such a question starting with any given causet $`C_N`$, one examines every possible combination of $`m`$ elements, counts the number of times that the combination forms the particular causet being looked for, and divides the total by $`\left(\genfrac{}{}{0pt}{}{N}{m}\right)`$. The ensemble mean of the resulting abundance, as we will refer to it, is then $`f_m(\xi )`$, where $`\xi `$ is the causet being looked for. In practice, of course, we would normally use a more efficient counting algorithm than simply examining individually all $`\left(\genfrac{}{}{0pt}{}{N}{m}\right)`$ subsets of $`C_N`$.
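In practice we sample rather than enumerate. A minimal Monte Carlo version of this counting, specialized to chains (where the recognition test is trivial), might look as follows; the sketch is ours and reuses the matrix representation introduced earlier, in which the percolation labels respect the order.

```python
def chain_abundance(C, k, n_samples, rng=None):
    """Estimate the k-chain abundance: the probability that k randomly
    chosen elements of C are totally ordered."""
    rng = rng or np.random.default_rng()
    N, hits = C.shape[0], 0
    for _ in range(n_samples):
        s = np.sort(rng.choice(N, size=k, replace=False))
        # labels respect the order, so a k-chain requires every earlier
        # element of the sample to precede every later one
        hits += all(C[s[i], s[j]] for i in range(k) for j in range(i + 1, k))
    return hits / n_samples
```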
### 6.1 Histograms of 2-chain and 4-chain abundances
As explained in the previous subsection, the main computational problem, once the random causet has been generated, is determining the number of subcausets of different sizes and types. To get a feel for how some of the resulting “abundances” are distributed, we start by presenting a couple of histograms. Figure 3 shows the number $`R`$ of relations obtained from a simulation in which 15,260 causal sets were generated by transitive percolation with $`p=0.01155`$, $`N=4096`$. Visually, the distribution is Gaussian, in agreement with the fact that its “kurtosis”
$$\overline{\left(x-\overline{x}\right)^4}/\overline{\left(x-\overline{x}\right)^2}^2$$
of 2.993 is very nearly equal to its Gaussian value of 3 (the over-bar denotes sample mean). In these simulations, $`p`$ was chosen so that the number of 3-chains was equal on average to half the total number possible, i.e. the “abundance of 3-chains”, $`\text{(number of 3-chains)}/\left(\genfrac{}{}{0pt}{}{N}{3}\right)`$, was equal to $`1/2`$ on average. The picture is qualitatively identical if one counts 4-chains rather than 2-chains, as exhibited in Fig. 4.
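For completeness, the kurtosis quoted above is the ordinary sample statistic; in code (our sketch):

```python
def kurtosis(x):
    """Sample kurtosis of a data set; equals 3 for Gaussian data."""
    d = np.asarray(x, dtype=float)
    d = d - d.mean()
    return (d**4).mean() / ((d**2).mean())**2
```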
(One may wonder whether it was to be expected that these distributions would appear to be so normal. If the variable in question, here the number of 2-chains $`R`$ or the number of 4-chains ($`C_4`$, say), can be expressed as a sum of independent random variables, then the central limit theorem provides an explanation. So consider the variables $`x_{ij}`$ which are 1 if $`i\prec j`$ and zero otherwise. Then $`R`$ is easily expressed as a sum of these variables:
$$R=\underset{i<j}{\sum }x_{ij}$$
However, the $`x_{ij}`$ are not independent, due to transitivity. Apparently, this dependence is not large enough to interfere much with the normality of their sum. The number of 4-chains $`C_4`$ can be expressed in a similar manner
$$C_4=\underset{i<j<k<l}{\sum }x_{ij}x_{jk}x_{kl}.$$
and similar remarks apply.)
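Incidentally, these sums have a compact matrix expression: because the relation matrix of a causet is transitively closed, the number of $`k`$-chains is exactly the sum of the entries of the $`(k-1)`$-th power of its 0/1 version. This gives an exact counting routine (our sketch) that complements the sampling estimates used elsewhere in this section.

```python
def n_chains(C, k):
    """Exact number of k-chains in a transitively closed causet."""
    A = C.astype(np.int64)
    # entry (i, j) of A^(k-1) counts the k-element chains running from i to j
    return int(np.linalg.matrix_power(A, k - 1).sum())
```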
Let us mention that for values of $`p`$ sufficiently close to 0 or 1, these distributions will appear skew. This occurs simply because the numbers under consideration (e.g. the number of $`m`$-chains) are bounded between zero and $`\left(\genfrac{}{}{0pt}{}{N}{m}\right)`$ and must deviate from normality if their mean gets too close to a boundary relative to the size of their standard deviation. Whenever we draw an error bar in the following, we will ignore any deviation from normality in the corresponding distribution.
Notice incidentally that the total number of 4-chains possible is $`\left(\genfrac{}{}{0pt}{}{4096}{4}\right)=11,710,951,848,960`$. Consequently, the mean 4-chain abundance<sup>5</sup><sup>5</sup>5From this point on we will usually write simply “abundance”, in place of “mean abundance”, assuming the average is obvious from context. in our simulation is only $`\frac{2,745,459,887,579}{11,710,951,848,960}=0.234`$, a considerably smaller value than the 2-chain abundance of $`r=\frac{6,722,782}{\left(\genfrac{}{}{0pt}{}{4096}{2}\right)}=0.802`$. This was to be expected, considering that the 2-chain is one of only two possible causets of its size, while the 4-chain is one out 16 possibilities. (Notice also that 4-chains are necessarily less probable than 2-chains, because every coarse-graining of a 4-chain is a 2-chain, whereas the 2-chain can come from every 4-element causet save the 4-antichain.)
### 6.2 Trajectories of $`p`$ versus $`N`$
The question we are exploring is whether there exist, for $`N\to \infty `$, trajectories $`p=p(N)`$ along which the mean abundances of all finite causets tend to definite limits. To seek such trajectories numerically, we will select some finite “reference causet” and determine, for a range of $`N`$, those values of $`p`$ which maintain its abundance at some target value. If a continuum limit does exist, then it should not matter in the end which causet we select as our reference, since any other choice (together with a matching choice of target abundance) should produce the same trajectory asymptotically. We would also anticipate that all the trajectories would behave similarly for large $`N`$, and that, in particular, either all would lead to continuum limits or all would not. In principle it could happen that only a certain subset led to continuum limits, but we know of no reason to expect such an eventuality. In the simulations reported here, we have chosen as our reference causets the 2-, 3- and 5-chains. We have computed six trajectories, holding the 2-chain abundance fixed at 1/2, 1/3, and 1/10, the 3-chain abundance fixed at 1/2 and .0814837, and the 5-chain abundance fixed at 1/2. For $`N`$, we have used as large a range as our computers would allow.
Before discussing the trajectories as such, let us have a look at how the mean 2-chain abundance $`<r>`$ (i.e. the mean ordering fraction) varies with $`p`$ for a fixed $`N`$ of 2048, as exhibited in Figure 5. (Vertical error bars are displayed in the figure but are so small that they just look like horizontal lines. The plotted points were obtained from an exact expression for the ensemble average $`<r>`$, so the errors come only from floating point roundoff. The fitting function used in Figure 5 will be discussed in a subsequent paper , where we examine scaling behavior; see also .) As one can see, $`<r>`$ starts at 0 for $`p=0`$, rises rapidly to near 1 and then asymptotes to 1 at $`p=1`$ (not shown). Of course, it was evident a priori that $`<r>`$ would increase monotonically from 0 to 1 as $`p`$ varied between these same two values, but it is perhaps noteworthy that its graph betrays no sign of discontinuity or non-analyticity (no sign of a “phase transition”). To this extent, it strengthens the expectation that the trajectories we find will all share the same qualitative behavior as $`N\mathrm{}`$.
The six trajectories we have simulated are depicted in Fig. 6.<sup>6</sup><sup>6</sup>6Notice that the error bars are shown rotated in the legend. This will be the case for all subsequent legends as well. A higher abundance of $`m`$-chains for fixed $`m`$ leads to a trajectory with higher $`p`$. Also note that, as observed above, the longer chains require larger values of $`p`$ to attain the same mean abundance, hence a choice of mean abundance = 1/2 corresponds in each case to a different trajectory. The trajectories with $`<r>`$ held to lower values are “higher dimensional” in the sense that $`<r>=1/2`$ corresponds to a Myrheim-Meyer dimension of 2, while $`<r>=1/10`$ corresponds to a Myrheim-Meyer dimension of 4. Observe that the plots give the impression of becoming straight lines with a common slope at large $`N`$. This tends to corroborate the expectation that they will exhibit some form of scaling with a common exponent, a behavior reminiscent of that found with continuum limits in many other contexts. This is further suggested by the fact that two distinct trajectories ($`f_2(\text{2-chain})=1/2`$ and $`f_3(\text{3-chain})=.0814837`$), obtained by holding different abundances fixed, seem to converge for large $`N`$.
By taking the abscissa to be $`1/N`$ rather than $`\mathrm{log}_2N`$, we can bring the critical point to the origin, as in Fig. 7. The lines which pass through the data points there are just splines drawn to aid the eye in following the trajectories. Note that the curves tend to asymptote to the $`p`$-axis, suggesting that $`p`$ falls off more slowly than $`1/N`$. This suggestion is corroborated by more detailed analysis of the scaling behavior of these trajectories, as will be discussed in .
### 6.3 Flow of the coarse-grained theory along a trajectory
We come finally to a direct test of whether the coarse-grained theory converges to a limit as $`N\to \infty `$. Independently of scaling or any other indicator, this is by definition the criterion for a continuum limit to exist. We have examined this question by means of simulations conducted for five of the six trajectories mentioned above. In each simulation we proceeded as follows. For each chosen $`N`$, we experimentally found a $`p`$ sufficiently close to the desired trajectory. Having determined $`p`$, we then generated a large number of causets by the percolation algorithm described in Section 2. (The number generated varied from 64 to 40,000.) For each such random causet, we computed the abundances of the different $`m`$-element (sub)causets under consideration (2-chain, 3-chain, 3-antichain, etc), and we combined the results to obtain the mean abundances we have plotted here, together with their standard errors. (The errors shown do not include any contribution from the slight inaccuracy in the value of $`p`$ used. Except for the 3- and 5-chain trajectories these errors are negligibly small.)
To compute the abundances of the 2-, 3-, and 4-orders for a given causet, we randomly sampled its four-element subcausets, counting the number of times each of the sixteen possible 4-orders arose, and dividing each of these counts by the number of samples taken to get the corresponding abundance. As an aid in identifying to which 4-order a sampled subcauset belonged we used the following invariant, which distinguishes all of the sixteen 4-orders, save two pairs.
$$I(S)=\underset{x\in S}{\prod }\left(2+|\mathrm{past}(x)|\right)$$
Here, $`\mathrm{past}(x)=\{y\in S\,|\,y\prec x\}`$ is the exclusive past of the element $`x`$ and $`|\mathrm{past}(x)|`$ is its cardinality. Thus, we associate to each element of the causet, a number which is two more than the cardinality of its exclusive past, and we form the product of these numbers (four, in this case) to get our invariant. (For example, this invariant is 90 for the “diamond” poset.)
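In the matrix representation used earlier, the invariant takes one line, since the column sums of the relation matrix are precisely the past cardinalities (our sketch):

```python
def order_invariant(C):
    """I(S) = product over x of (2 + |past(x)|); returns 90 for the diamond."""
    past_sizes = C.sum(axis=0).astype(np.int64)   # column j counts elements below j
    return int(np.prod(2 + past_sizes))
```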
The number of samples taken from an $`N`$ element causet was chosen to be $`\sqrt{2\left(\genfrac{}{}{0pt}{}{N}{4}\right)}`$, on the grounds that the probability to get the same four element subset twice becomes appreciable with more than this many samples. Numerical tests confirmed that this rule of thumb tends to minimize the sampling error, as seen in Figure 8.
Once one has the abundances of all the 4-orders, the abundances of the smaller causets can be found by further coarse graining. By explicitly carrying out this coarse graining, one easily deduces the following relationships:
$`f_3(\text{})`$ $`=`$ $`f_4(\text{})+{\displaystyle \frac{1}{2}}\left(f_4(\text{})+f_4(\text{})\right)+{\displaystyle \frac{1}{4}}f_4(\text{})+{\displaystyle \frac{1}{4}}\left(f_4(\text{})+f_4(\text{})\right)+{\displaystyle \frac{1}{2}}f_4(\text{})`$
$`f_3(\text{})`$ $`=`$ $`{\displaystyle \frac{1}{2}}f_4(\text{})+{\displaystyle \frac{1}{2}}f_4(\text{})+{\displaystyle \frac{1}{4}}f_4(\text{})+{\displaystyle \frac{3}{4}}f_4(\text{})+{\displaystyle \frac{1}{4}}f_4(\text{})+{\displaystyle \frac{1}{4}}f_4(\text{})+{\displaystyle \frac{1}{2}}f_4(\text{})`$
$`f_3(\text{})`$ $`=`$ $`{\displaystyle \frac{3}{4}}f_4(\text{})+{\displaystyle \frac{1}{4}}\left(f_4(\text{})+f_4(\text{})\right)+{\displaystyle \frac{1}{2}}\left(f_4(\text{})+f_4(\text{})\right)+f_4(\text{})+{\displaystyle \frac{1}{2}}f_4(\text{})+{\displaystyle \frac{1}{2}}f_4(\text{})`$
$`f_3(\text{})`$ $`=`$ $`{\displaystyle \frac{1}{2}}f_4(\text{})+{\displaystyle \frac{1}{2}}f_4(\text{})+{\displaystyle \frac{1}{4}}f_4(\text{})+{\displaystyle \frac{3}{4}}f_4(\text{})+{\displaystyle \frac{1}{4}}f_4(\text{})+{\displaystyle \frac{1}{4}}f_4(\text{})+{\displaystyle \frac{1}{2}}f_4(\text{})`$
$`f_3(\text{})`$ $`=`$ $`{\displaystyle \frac{1}{4}}\left(f_4(\text{})+f_4(\text{})\right)+{\displaystyle \frac{1}{4}}\left(f_4(\text{})+f_4(\text{})\right)+{\displaystyle \frac{1}{2}}f_4(\text{})+f_4(\text{})`$
$`f_2(\text{2-chain})`$ $`=`$ $`f_3(\text{3-chain})+{\displaystyle \frac{2}{3}}\left(f_3(\text{V})+f_3(\mathrm{\Lambda })\right)+{\displaystyle \frac{1}{3}}f_3(\text{L})`$
$`f_2(\text{2-antichain})`$ $`=`$ $`1-f_2(\text{2-chain})`$
In the first six equations, the coefficient before each term on the right is the fraction of coarse-grainings of that causet which yield the causet on the left.
In Figures 9, 10, and 11, we exhibit how the coarse-grained probabilities of all possible 2, 3, and 4 element causets vary as we follow the trajectory along which the coarse-grained 2-chain probability $`f_2(\text{2-chain})=r`$ is held at $`1/2`$. By design, the coarse-grained probability for the 2-chain remains flat at 50%, so Figure 9 simply shows the accuracy with which this was achieved. (Observe the scale on the vertical axis.) Notice that, since $`f_2(\text{2-chain})`$ and $`f_2(\text{2-antichain})`$ must sum to 1, their error bars are necessarily equal. (The standard deviation in the abundances decreases with increasing $`N`$. The “blip” around $`\mathrm{log}_2N=9`$ occurs simply because we generated fewer causets at that and larger values of $`N`$ to reduce computational costs.)
The crucial question is whether the probabilities for the three and four element causets tend to definite limits as $`N`$ tends to infinity. Several features of the diagrams indicate that this is indeed occurring. Most obviously, all the curves, except possibly a couple in Figure 11, appear to be leveling off at large $`N`$. But we can bolster this conclusion by observing in which direction the curves are moving, and considering their interrelationships.
For the moment let us focus our attention on figure 10. A priori there are five coarse-grained probabilities to be followed. That they must add up to unity reduces the degrees of freedom to four. This is reduced further to three by the observation that, due to the time-reversal symmetry of the percolation dynamics, we must have $`f_3(\text{V})=f_3(\mathrm{\Lambda })`$, as duly manifested in their graphs. Moreover, all five of the curves appear to be monotonic, with three of them rising and two falling. If we accept this indication of monotonicity from the diagram, then first of all, every probability $`f_3(\xi )`$ must converge to some limiting value, because monotonic bounded functions always do; and some of these limits must be nonzero, because the probabilities must add up to 1. Indeed, since $`f_3(\text{V})`$ and $`f_3(\mathrm{\Lambda })`$ are rising, they must converge to some nonzero value, and this value must lie below 1/2 in order that the total probability not exceed unity. In consequence, the third rising curve must also converge to a nontrivial probability (one which is neither 0 nor 1). Taken all in all, then, it looks very much like the $`m=3`$ coarse-grained theory has a nontrivial $`N\to \infty `$ limit, with at least three out of its five probabilities converging to nontrivial values.
Although the “rearrangement” of the coarse-grained probabilities appears much more dramatic in Figure 11, similar arguments can be made. Excepting initial “transients”, it seems reasonable to conclude from the data that monotonicity will be maintained. From this, it would follow that the probabilities for the time-reversal-conjugate pair of causets (which must be equal by symmetry) and the other rising probabilities all approach nontrivial limits. The coarse-graining to 4 elements, therefore, would also admit a continuum limit with a minimum of 4 out of the 11 independent probabilities being nontrivial.
To the extent that the $`m=2`$ and $`m=3`$ cases are indicative, then, it is reasonable to conclude that percolation dynamics admits a continuum limit which is non-trivial at all “scales” $`m`$.
The question suggests itself, whether the flow of the coarse-grained probabilities would differ qualitatively if we held fixed some abundance other than that of the 2-chain. In Figures 12, 13, and 14, we display results obtained by fixing the 3-chain abundance (its value having been chosen to make the abundance of 2-chains be 1/2 when $`N=2^{16}`$). Notice in Figure 12 that the abundance of 2-chains varies considerably along this trajectory, whilst that of the 3-chain (in figure 13) of course remains constant. Once again, the figures suggest strongly that the trajectory is approaching a continuum limit with nontrivial values for the coarse-grained probabilities of at least the 3-chain, the “V” and the “$`\mathrm{\Lambda }`$” (and in consequence of the 2-chain and 2-antichain).
All the trajectories discussed so far produce causets with an ordering fraction $`r`$ close to 1/2 for large $`N`$. As mentioned earlier, $`r=1/2`$ corresponds to a Myrheim-Meyer dimension of two. Figures 15 and 16 show the results of a simulation along the “four dimensional” trajectory defined by $`r=1/10`$. (The value $`r=1/10`$ corresponds to a Myrheim-Meyer dimension of 4.) Here the appearance of the flow is much less elaborate, with the curves arrayed simply in order of the ordering fraction of the corresponding causet, the antichains being at the top and the most chain-like causets (imperceptibly) at the bottom. As before, all the curves are monotone as far as can be seen. Aside from the intrinsic interest of the case $`d=4`$, these results indicate that our conclusions drawn for $`d`$ near 2 will hold good for all larger $`d`$ as well.
Figure 17 displays the flow of the coarse-grained probabilities from a simulation in the opposite situation where the ordering fraction is much greater than 1/2 (the Myrheim-Meyer dimension is down near 1.) Shown are the results of coarse-graining to three element causets along the trajectory which holds the 3-chain probability to 1/2. Also shown is the 2-chain probability. The behavior is similar to that of Figure 15, except that here the coarse-grained probability rises with the ordering fraction instead of falling. This occurs because constraining $`f_3(\text{3-chain})`$ to be 1/2 generates rather chain-like causets whose Myrheim-Meyer dimension is in the neighborhood of 1.34, as follows from the approximate limiting value $`f_2(\text{2-chain})\approx 0.8`$. The slow, monotonic, variation of the probabilities at large $`N`$, along with the appearance of convergence to non-zero values in each case, suggests the presence of a nontrivial continuum limit for $`r`$ near unity as well.
Figures 18 and 19 present the results of a final set of simulations, the only ones we have carried out which examined the abundances of causets containing more than four elements. In these simulations, the mean 5-chain abundance $`f_5(\text{5-chain})`$ was held at 1/2, producing causets that were even more chain-like than before (Myrheim-Meyer dimension $`\approx 1.1`$). In Figure 18, we track the resulting abundances of all $`k`$-chains for $`k`$ between 2 and 7, inclusive. (We limited ourselves to chains, because their abundances are relatively easy to determine computationally.) As in Figure 17, all the coarse-grained probabilities appear to be tending monotonically to limits at large $`N`$. In fact, they look amazingly constant over the whole range of $`N`$, from 5 to $`2^{15}`$. One may also observe that the coarse-grained probability of a chain decreases markedly (and almost linearly over the range examined!) with its length, as one might expect. It appears also that the $`k`$-chain curves for $`k\ne 5`$ are “expanding away” from the 5-chain curve, but only very slightly. Figure 19 displays the flow of the probabilities for coarse-grainings to four elements. It is qualitatively similar to Figures 15–17, with very flat probability curves, and here with a strong preference for causets having many relations over those having few. Comparing Figures 19 and 16 with Figures 14 and 11, we observe that trajectories which generate causets that are rather chain-like or antichain-like seem to produce distributions that converge more rapidly than those along which the ordering fraction takes values close to 1/2.
In the way of further simulations, it would be extremely interesting to look for continuum limits of some of the more general dynamical laws discussed in §4.5 of Reference . In doing so, however, one would no longer have available (as one does have for transitive percolation) a very fast (yet easily coded) algorithm that generates causets randomly in accord with the underlying dynamical law. Since the sequential growth dynamics of is produced by a stochastic process defined recursively on the causal set, it is easily mimicked algorithmically; but the most obvious algorithms that do so are too slow to generate efficiently causets of the size we have discussed in this paper. Hence, one would either have to devise better algorithms for generating causets “one off”, or one would have to use an entirely different method to obtain the mean abundances, like Monte Carlo simulation of the random causet.
## 7 Concluding Comments
Transitive percolation is a discrete dynamical theory characterized by a single parameter $`p`$ lying between $`0`$ and $`1`$. Regarded as a stochastic process, it describes the steady growth of a causal set by the continual birth or “accretion” of new elements. If we limit ourselves to that portion of the causet comprising the elements born between step $`N_0`$ and step $`N_1`$ of the stochastic process, we obtain a model of random posets containing $`N=N_1-N_0`$ elements. This is the model we have studied in this paper.
Because the underlying process is homogeneous, this model does not depend on $`N_0`$ or $`N_1`$ separately, but only on their difference. It is therefore characterized by just two parameters $`p`$ and $`N`$. One should be aware that this truncation to a finite model is not consistent with discrete general covariance, because it is the subset of elements with certain labels that has been selected out of the larger causet, rather than a subset characterized by any directly physical condition. Thus, we have introduced an “element of gauge” and we hope that we are justified in having neglected it. That is, we hope that the random causets produced by the model we have actually studied are representative of the type of suborder that one would obtain by percolating a much larger (eventually infinite) causet and then using a label-invariant criterion to select a subset of $`N`$ elements.
Leaving this question aside for now, let us imagine that our model represents an interval (say) in a causet $`C`$ underlying some macroscopic spacetime manifold. With this image in mind, it is natural to interpret a continuum limit as one in which $`N\to \infty `$ while the coarse-grained features of the interval in question remain constant. We have made this notion precise by defining coarse-graining as random selection of a suborder whose cardinality $`m`$ measures the “coarseness” of our approximation. A continuum limit then is defined to be one in which $`N`$ tends to $`\infty `$ such that, for each finite $`m`$, the induced probability distribution $`f_m`$ on the set of $`m`$-element posets converges to a definite limit, the physical meaning being that the dynamics at the corresponding length-scale is well defined. Now, how could our model fail to admit such a limit?
In a field-theoretic setting, failure of a continuum limit to exist typically means that the coarse-grained theory loses parameters as the cutoff length goes to zero. For example, $`\lambda \varphi ^4`$ scalar field theory in 4 dimensions depends on two parameters, the mass $`\mu `$ and the coupling constant $`\lambda `$. In the continuum limit, $`\lambda `$ is lost, although one can arrange for $`\mu `$ to survive. (At least this is what most workers believe occurs.) Strictly speaking, one should not say that a continuum limit fails to exist altogether, but only that the limiting theory is poorer in coupling constants than it was before the limit was taken. Now in our case, we have only one parameter to start with, and we have seen that it does survive as $`N\to \infty `$ since we can, for example, choose freely the $`m=2`$ coarse-grained probability distribution $`f_2`$. Hence, we need not fear such a loss of parameters in our case.
What about the opposite possibility? Could the coarse-grained theory gain parameters in the $`N\to \infty `$ limit, as might occur if the distributions $`f_m`$ were sensitive to the fine details of the trajectory along which $`N`$ and $`p`$ approached the “critical point” $`p=0`$, $`N=\infty `$?<sup>7</sup><sup>7</sup>7Such an increase of the parameter set through a limiting process seems logically possible, although we know of no example of it from field theory or statistical mechanics, unless one counts the extra global parameters that come in with “spontaneous symmetry breaking”. Our simulations showed no sign of such sensitivity, although we did not look for it specifically. (Compare, for example, Figure 10 with Figure 13 and 11 with 14.)
A third way the continuum limit could fail might perhaps be viewed as an extreme form of the second. It might happen that, no matter how one chose the trajectory $`p=p(N)`$, some of the coarse-grained probabilities $`f_m(\xi )`$ oscillated indefinitely as $`N\to \infty `$, without ever settling down to fixed values. Our simulations leave little room for this kind of breakdown, since they manifest the exact opposite kind of behavior, namely monotone variation of all the coarse-grained probabilities we “measured”.
Finally, a continuum limit could exist in the technical sense, but it still could be effectively trivial (once again reminiscent of the $`\lambda \varphi ^4`$ case — if you care to regard a free field theory as trivial.) Here triviality would mean that all — or almost all — of the coarse-grained probabilities $`f_m(\xi )`$ converged either to 0 or to 1. Plainly, we can avoid this for at least some of the $`f_m(\xi )`$. For example, we could choose an $`m`$ and hold either $`f_m(m\text{-chain})`$ or $`f_m(m\text{-antichain})`$ fixed at any desired value. (Proof: as $`p\to 1`$, $`f_m(m\text{-chain})\to 1`$ and $`f_m(m\text{-antichain})\to 0`$; as $`p\to 0`$, the opposite occurs.) However, in principle, it could still happen that all the other $`f_m`$ besides these two went to 0 in the limit. (Clearly, they could not go to 1, the other trivial value.) Once again, our simulations show the opposite behavior. For example, we saw that $`f_3(\text{V})`$ increased monotonically along the trajectory of Figure 10.
Moreover, even without reference to the simulations, we can make this hypothetical “chain-antichain degeneracy” appear very implausible by considering a “typical” causet $`C`$ generated by percolation for $`N\gg 1`$ with $`p`$ on the trajectory that, for some chosen $`m`$, holds $`f_m(m\text{-chain})`$ fixed at a value $`a`$ strictly between 0 and 1. Then our degeneracy would insist that $`f_m(m\text{-antichain})=1-a`$ and $`f_m(\chi )=0`$ for all other $`\chi `$. But this would mean that, in a manner of speaking, “every” coarse-graining of $`C`$ to $`m`$ elements would be either a chain or an antichain. In particular, every 3-element sub-causet of $`C`$ would have to be a chain or an antichain (being a further coarse-graining of an $`m`$-element chain or antichain); whence, since any causet containing both a related pair and an unrelated pair contains a 3-element sub-causet which is neither, $`C`$ itself would have to be either an antichain or a chain. But it is absurd that percolation for any parameter value $`p`$ other than 0 and 1 would produce a “bimodal” distribution such that $`C`$ would have to be either a chain or an antichain, but nothing in between. (It seems likely that similar arguments could be devised against the possibility of similar, but slightly less trivial continuum limits, for example a limit in which $`f_m(\chi )`$ would vanish unless $`\chi `$ were a disjoint union of chains and antichains.)
Putting all this together, we have persuasive evidence that the percolation model does admit a continuum limit, with the limiting model being nontrivial and described by a single “renormalized” parameter or “coupling constant”. Furthermore, the associated scaling behavior one might anticipate in such a case is also present, as we will discuss further in .
But is the word “continuum” here just a metaphor, or can it be taken more literally? This depends, of course, on the extent to which the causets yielded by percolation dynamics resemble genuine spacetimes. Based on the meager evidence available at the present time, we can only answer “it is possible”. On one hand, we know that any spacetime produced by percolation would have to be homogeneous, like de Sitter space or Minkowski space. We also know, from simulations in progress, that two very different dimension estimators seem to agree on percolated causets, which one might not expect, were there no actual dimensions for them to be estimating. Certain other indicators tend to behave poorly, on the other hand, but they are just the ones that are not invariant under coarse-graining (they are not “RG invariants”), so their poor behavior is consistent with the expectation that the causal set will not be manifold-like at the smallest scales (“foam”), but only after some degree of coarse-graining.
Finally, there is the ubiquitous issue of “fine tuning” or “large numbers”. In any continuum situation, a large number is being manifested (an actual infinity in the case of a true continuum) and one may wonder where it came from. In our case, the large numbers were $`p^{-1}`$ and $`N`$. For $`N`$, there is no mystery: unless the birth process ceases, $`N`$ is guaranteed to grow as large as desired. But why should $`p`$ be so small? Here, perhaps, we can appeal to the preliminary results of Dou mentioned in the introduction. If — cosmologically considered — the causet that is our universe has cycled through one or more phases of expansion and recollapse, then its dynamics will have been filtered through a kind of “temporal coarse-graining” or “RG transformation” that tends to drive it toward transitive percolation. But what we didn’t mention earlier was that the parameter $`p`$ of this effective dynamics scales like $`N_0^{-1/2}`$, where $`N_0`$ is the number of elements of the causet preceding the most recent “bounce”. Since this is sure to be an enormous number if one waits long enough, $`p`$ is sure to become arbitrarily small if sufficiently many cycles occur. The reason for the near flatness of spacetime — or if you like for the large diameter of the contemporary universe — would then be just that the underlying causal set is very old — old enough to have accumulated, let us say, $`10^{480}`$ elements in earlier cycles of expansion, contraction and re-expansion.
It is a pleasure to thank Alan Daughton, Chris Stephens, Henri Waelbroeck and Denjoe ÓConnor for extensive discussions on the subject of this paper. The research reported here was supported in part by NSF grants PHY-9600620 and INT-9908763 and by a grant from the Office of Research and Computing of Syracuse University.
# Phase shift operator and cyclic evolution in finite dimensional Hilbert space
## Abstract
We address the problem of the phase shift operator acting as the time evolution operator in the Pegg-Barnett formalism. It is argued that the standard shift operator is inconsistent with the behaviour of the state vector under cyclic evolution. We consider a generally deformed oscillator algebra at $`q`$ a root of unity, as it yields the same Pegg-Barnett phase operator, and show that the shift operator within this algebra meets our requirement.
03.65.-w
In recent years, the Pegg-Barnett (PB) formalism has attracted wide attention as a theory for quantum phase . Alongside, the subject of quantum algebras and their realizations in terms of $`q`$-deformed oscillators has also been studied with great interest . The problem of quantum phase has also been pursued from a $`q`$-deformation theoretic point of view. There are certain justifications for this approach. A feature of a $`q`$-deformed theory or framework is that one can identify an inherent scale in it, of magnitude $`|1-q|`$, where $`q`$ is called the deformation parameter. As $`q\rightarrow 1`$, one retrieves the undeformed or ”classical” theory. Now the PB formalism can also be looked upon as inherently $`q`$-deformed in the above sense, and in this case $`q=\mathrm{exp}(-i2\pi /(s+1))`$, where $`(s+1)`$ is the dimension of the Hilbert space. Moreover, the phase observable, which is a hermitian phase operator in PB theory, can be consistently defined only in a finite dimensional Hilbert space (FDHS). These two features form the motivation to study phase using a $`q`$-oscillator with $`q=\mathrm{exp}(-i2\pi /(s+1))`$ . Firstly, $`q`$ being a root of unity naturally truncates the $`q`$-oscillator to a FDHS. Secondly, the infinite dimensional limit ($`s\rightarrow \mathrm{\infty }`$) also corresponds to the deformation free ($`q\rightarrow 1`$) limit. However, the problem of negative norm in this representation was recognised later, and there now exist representations of the $`q`$-oscillator or generally deformed oscillator with positive norm for $`q`$ a root of unity, for which the Pegg-Barnett phase operator can be consistently defined.
Let us first recapitulate the relevant key points of the PB formalism. Here the phase operator $`\mathrm{\Phi }`$ and the number operator $`N`$ are not canonically conjugate, but satisfy the complicated commutation relation
$$[\mathrm{\Phi },N]=\frac{2\pi \hbar }{s+1}\sum _{n,n^{\prime }=0;\,n\neq n^{\prime }}^{s}\frac{(n^{\prime }-n)\,|n^{\prime }\rangle \langle n|}{\mathrm{exp}[2\pi i(n-n^{\prime })/(s+1)]-1}.$$
(1)
The eigenstates of $`\mathrm{\Phi }`$, which form an orthonormal set of phase states, are related to the number states by the Fourier transform
$$|\theta _m\rangle =\frac{1}{\sqrt{s+1}}\sum _{n=0}^{s}\mathrm{exp}(in\theta _m)|n\rangle ,$$
(2)
where $`\theta _m=\theta _0+\frac{2\pi m}{s+1}`$, $`m=0,1,2,\mathrm{\dots },s`$. Here $`\theta _0`$ is an arbitrary reference phase which defines the $`2\pi `$-wide phase window, $`\theta _0\le \theta _m<\theta _0+2\pi `$. Apart from the hermitian phase operator $`\mathrm{\Phi }`$, the unitary phase operator $`e^{i\mathrm{\Phi }}`$ is also of significance in PB theory. It acts as a shift operator on the number states
$`e^{i\mathrm{\Phi }}|n\rangle `$ $`=`$ $`|n-1\rangle ,n\neq 0`$ (3)
$`e^{i\mathrm{\Phi }}|0\rangle `$ $`=`$ $`e^{i(s+1)\theta _0}|s\rangle .`$ (4)
Thus the action of $`e^{i\mathrm{\Phi }}`$ is cyclic and it steps the number states down by unity. Its adjoint acts as a step-up operator. Thus one can write a realization of the unitary phase operator as
$$e^{i\mathrm{\Phi }}=|0\rangle \langle 1|+|1\rangle \langle 2|+\mathrm{\cdots }+|s-1\rangle \langle s|+e^{i(s+1)\theta _0}|s\rangle \langle 0|.$$
(5)
The operator dual to $`e^{i\mathrm{\Phi }}`$ is the operator $`q^N`$, which acts as a shift operator on the phase states
$`q^N|\theta _m\rangle `$ $`=`$ $`|\theta _{m-1}\rangle ,m\neq 0`$ (6)
$`q^N|\theta _0\rangle `$ $`=`$ $`|\theta _s\rangle .`$ (7)
Note that the apparent duality between the two kinds of shift operators seems incomplete due to the extra phase factor in Eq. (4) and the lack of a corresponding factor in Eq. (7). This is due to the arbitrariness in the choice of the phase window in the PB formalism, while there is no such choice in the ground state eigenvalue of the number operator, which is necessarily zero. Thus the realization of $`q^N`$ in terms of phase states is
$$q^N=|\theta _0\rangle \langle \theta _1|+|\theta _1\rangle \langle \theta _2|+\mathrm{\cdots }+|\theta _{s-1}\rangle \langle \theta _s|+|\theta _s\rangle \langle \theta _0|.$$
(8)
Now the unitary phase shift operator $`q^N`$ can be thought of as a time evolution operator which, operated once on a phase state, shifts the phase by $`2\pi /(s+1)`$. Thus if we operate it $`(s+1)`$ times on a phase state, we complete one cycle and return to the same phase state. On the other hand, we have the results of Ref. , where it was shown that for cyclic evolution of a harmonic oscillator in a general state $`\sum _nc_n|n\rangle `$ in FDHS, the state vector can change sign, depending on the dimensionality of the space: if $`(s+1)`$ is even the sign changes, otherwise not. However, if we take $`q^N`$ as equivalent to the time evolution operator, we note that according to the realization of Eq. (8), the state vector always returns exactly to the initial state, irrespective of the dimensionality of the space.
The purpose of this paper is to make the action of the phase shift operator consistent with that of the time evolution operator in the context of cyclic evolution in FDHS. We take as our model the recently proposed generally deformed oscillator , which has certain advantages over other approaches from an algebraic point of view, namely: i) the creation and annihilation operators in PB theory do not form a closed algebra by themselves, and they do not go over to the corresponding relations in the $`s\rightarrow \mathrm{\infty }`$ limit; ii) we can algebraically define the PB phase operator in the approach of ; iii) for $`q`$ a root of unity, positive norm is also assured.
Briefly, in the approach of , new creation and annihilation operators are defined
$$A^{\dagger }=\sqrt{F(q^𝒩)}\,e^{-i\mathrm{\Phi }},A=e^{i\mathrm{\Phi }}\sqrt{F(q^𝒩)},q^𝒩=q^{N+\eta }.$$
(9)
The action of these operators on generalized number states is
$`A^{\dagger }|n+\eta \rangle `$ $`=`$ $`\sqrt{F(q^{n+\eta +1})}|n+\eta +1\rangle ,n\neq s`$ (10)
$`A^{\dagger }|s+\eta \rangle `$ $`=`$ $`e^{-i(s+1)\theta _0}\sqrt{F(q^\eta )}|\eta \rangle `$ (11)
$`A|n+\eta \rangle `$ $`=`$ $`\sqrt{F(q^{n+\eta })}|n+\eta -1\rangle ,n\neq 0`$ (12)
$`A|\eta \rangle `$ $`=`$ $`\sqrt{F(q^\eta )}e^{i(s+1)\theta _0}|s+\eta \rangle `$ (13)
$`q^𝒩|n+\eta \rangle `$ $`=`$ $`q^{n+\eta }|n+\eta \rangle .`$ (14)
The parameter $`\eta `$ is chosen such that i) the above defines a cyclic representation, ii) the function $`F`$ is hermitian and non-negative, iii) in the $`s\rightarrow \mathrm{\infty }`$ limit, $`A^{\dagger }`$ and $`A`$ go over to the creation and annihilation operators of the ordinary oscillator. The condition for a cyclic representation ($`F(q^\eta )\neq 0`$ in Eqs. (11) and (13)) also ensures that one can consistently define the unitary phase operator by inverting $`A^{\dagger }`$ and $`A`$ in Eq. (9). Note that this approach exactly recovers the PB phase operator.
However, a significant fact that was missed in is that in the above representation, $`q^{\pm 𝒩}`$ can also act as a phase shift operator on the phase states. As one can easily see, its action gives $`q^𝒩|\theta _m\rangle =q^\eta |\theta _{m-1}\rangle `$ and $`q^𝒩|\theta _0\rangle =q^\eta |\theta _s\rangle `$, which is just the same as Eqs. (6) and (7) up to the overall factor $`q^\eta `$. However, as we show below, the significance of this operator lies in its being consistent with the results of cyclic evolution in FDHS . As a solution for restoring the duality between $`e^{i\mathrm{\Phi }}`$ and $`q^𝒩`$, we propose to modify Eq. (2) as follows:
$$|\theta _m\rangle =\frac{1}{\sqrt{s+1}}\sum _{n=0}^{s}\mathrm{exp}(i(n+\eta )\theta _m)|n+\eta \rangle ,$$
(15)
so that now we have
$`q^𝒩|\theta _m\rangle `$ $`=`$ $`|\theta _{m-1}\rangle ,m\neq 0`$ (16)
$`q^𝒩|\theta _0\rangle `$ $`=`$ $`e^{-i2\pi \eta }|\theta _s\rangle .`$ (17)
The action of $`e^{i\mathrm{\Phi }}`$ on the phase states remains as before, i.e. $`e^{i\mathrm{\Phi }}|\theta _m\rangle =e^{i\theta _m}|\theta _m\rangle `$. Moreover, the action of $`e^{i\mathrm{\Phi }}`$ on the (new) number states remains the same as before. Thus, inverting Eq. (15),
$$|n+\eta \rangle =\frac{1}{\sqrt{s+1}}\sum _{m=0}^{s}\mathrm{exp}(-i(n+\eta )\theta _m)|\theta _m\rangle ,$$
(18)
we can write
$`e^{i\mathrm{\Phi }}|n+\eta \rangle `$ $`=`$ $`|n+\eta -1\rangle ,n\neq 0`$ (19)
$`e^{i\mathrm{\Phi }}|\eta \rangle `$ $`=`$ $`e^{i(s+1)\theta _0}|s+\eta \rangle .`$ (20)
Thus the duality between $`e^{i\mathrm{\Phi }}`$ and $`q^𝒩`$ is exactly obeyed, and the parameter $`\eta `$ plays a role equivalent to that of $`\theta _0`$. We can as well write the following realization for the modified unitary operator
$$q^𝒩=|\theta _0\rangle \langle \theta _1|+|\theta _1\rangle \langle \theta _2|+\mathrm{\cdots }+|\theta _{s-1}\rangle \langle \theta _s|+e^{-i2\pi \eta }|\theta _s\rangle \langle \theta _0|.$$
(21)
Therefore, operating with the above unitary operator $`(s+1)`$ times, we get
$$\left(q^𝒩\right)^{s+1}|\theta _m\rangle =e^{-i2\pi \eta }|\theta _m\rangle .$$
(22)
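As a quick numerical consistency check of Eqs. (16), (17) and (22), the following is a minimal sketch of ours (the values of $`s`$, $`\eta `$ and $`\theta _0`$ are arbitrary illustrative choices); it builds $`q^𝒩`$ in the number basis with $`q=\mathrm{exp}(-i2\pi /(s+1))`$ and applies it to the modified phase states of Eq. (15):

```python
import numpy as np

s, eta, theta0 = 4, 0.5, 0.3            # illustrative values
d = s + 1
q = np.exp(-2j * np.pi / d)             # q as a root of unity

n = np.arange(d)
qN = np.diag(q ** (n + eta))            # q^(N + eta), diagonal in number basis

def phase_state(m):                     # Eq. (15)
    th = theta0 + 2 * np.pi * m / d
    return np.exp(1j * (n + eta) * th) / np.sqrt(d)

# Eq. (16): one application steps the phase state down
assert np.allclose(qN @ phase_state(2), phase_state(1))
# Eq. (17): the wrap-around picks up the phase factor exp(-i 2 pi eta)
assert np.allclose(qN @ phase_state(0),
                   np.exp(-2j * np.pi * eta) * phase_state(s))
# Eq. (22): a full cycle multiplies every phase state by exp(-i 2 pi eta)
full = np.linalg.matrix_power(qN, d)
assert np.allclose(full @ phase_state(3),
                   np.exp(-2j * np.pi * eta) * phase_state(3))
print("full-cycle factor:", np.round(np.exp(-2j * np.pi * eta), 6))
```

For $`\eta =1/2`$ the printed factor is $`-1`$: the state vector changes sign after one full cycle, as discussed next.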
Next, we are interested to know whether, under such cyclic evolution, the state vector changes sign or not. From Eq. (22), if $`\eta `$ is an integer, no change in sign occurs, while for $`\eta `$ a half-odd integer, there is a change in sign. Now the usual time evolution operator is $`e^{-iHt/\hbar }`$, where for the case of the harmonic oscillator in FDHS , the Hamiltonian $`H`$ has the following energy spectrum
$$E_n=\hbar \omega \left(n+\frac{1}{2}+\frac{(s+1)}{2}\delta _{n,s}\right).$$
(23)
Thus under evolution through one time period, $`t=2\pi /\omega `$, the state vector $`|n\rangle `$ is multiplied by the phase factor $`\mathrm{exp}(-i2\pi \{n+1/2+(s+1)\delta _{n,s}/2\})`$. On the other hand, if we consider time evolution through the unitary shift operator $`q^𝒩`$, the state vector is multiplied by the factor $`\mathrm{exp}(-i2\pi \{n+\eta \})`$. Thus we note that for a harmonic oscillator in FDHS, for $`n\neq s`$, we have $`\eta =1/2`$, whereas for $`n=s`$, $`\eta =1/2+(s+1)/2`$. So $`(s+1)`$ as an even number is equivalent to $`\eta `$ as a half-odd integer, which, from the previous discussion, implies a change in sign under cyclic evolution, whereas $`(s+1)`$ odd is equivalent to $`\eta `$ as an integer and consequently no change in sign of the state vector under one cycle. Also, the case of the infinite dimensional harmonic oscillator requires that $`E_n=(n+1/2)\hbar \omega `$, which is consistent with $`\eta =1/2`$.
Finally, it is interesting to note that the states $`|n+\eta \rangle `$ can be obtained from the usual number states $`|n\rangle `$ by applying a continuous unitary transformation when $`\eta `$ is not an integer ($`e^{-i\eta \mathrm{\Phi }}|n\rangle =|n+\eta \rangle `$). As was pointed out in , such continuous unitary transformations are useful to construct the phase-moment generating functions.
Concluding, we have argued that the phase shift operator in the standard PB formalism is inconsistent with the cyclic evolution of a harmonic oscillator in a finite dimensional Hilbert space. To treat this, we have shown that the phase shift operator of a generally deformed oscillator algebra at $`q`$ a root of unity, which yields the same PB phase operator, can simulate the behaviour of the time evolution operator for cyclic evolution. This also restores the duality in the actions of the phase- and number-shift operators.
The author would like to acknowledge the kind hospitality of H.S. Mani and Sumathi Rao at Mehta Research Institute, Allahabad, where this work was initiated and S. Abe, for careful reading of the manuscript.
# Spatial Solitons in Resonators
## I Introduction <br>
Solitary structures can form in optics when a balance occurs between a linear and a nonlinear optical process. The best known instances of such structures in optics are solitary pulses propagating along optical fibers. Here a balance occurs between the linear mechanism of dispersion, which leads to a broadening of the pulse in the propagation direction, and the nonlinear mechanism of ”self-phase-modulation” or intensity-dependent refractive index (e.g. the Kerr effect), which tends to shorten the pulse. The result is a pulse traveling along the fiber without changing its shape. Such pulses can well be described by the soliton solutions of the 1 + 1D nonlinear Schroedinger equation (NLSE) . In this equation the time co-ordinate may as well be a spatial co-ordinate. It follows that spatial solitons should also exist. The linear broadening mechanism in this case is diffraction. In one spatial dimension such spatial- or propagation solitons do indeed exist . The phenomenon manifests itself in the contraction of a light beam with propagation, until a filament of constant thickness is formed which then propagates without further change (”Self-trapped beam”).
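The dispersion/self-phase-modulation balance just described can be made concrete with a minimal split-step Fourier integration of the 1 + 1D focusing NLSE (a sketch of ours in soliton units; grid and step sizes are illustrative). The sech input is the textbook fundamental soliton and propagates without change of shape:

```python
import numpy as np

# 1+1D focusing NLSE:  i u_z + 0.5 u_tt + |u|^2 u = 0  (soliton units)
nt, T, dz, nz = 1024, 40.0, 1e-3, 5000
t = np.linspace(-T / 2, T / 2, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=T / nt)      # spectral frequencies

u = 1 / np.cosh(t)                                # fundamental (N = 1) soliton
half_disp = np.exp(-0.25j * w**2 * dz)            # half step of dispersion

for _ in range(nz):                               # symmetric split-step scheme
    u = np.fft.ifft(half_disp * np.fft.fft(u))
    u *= np.exp(1j * np.abs(u)**2 * dz)           # exact nonlinear step
    u = np.fft.ifft(half_disp * np.fft.fft(u))

print("peak |u| after propagation:", np.abs(u).max())        # stays ~1.0
print("rms width:", np.sqrt(np.sum(t**2 * np.abs(u)**2) /
                            np.sum(np.abs(u)**2)))           # stays constant
```

Removing the nonlinear step makes the pulse broaden dispersively, which is the imbalance the soliton avoids.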
For a normal laser beam, however, which diffracts or contracts in 2D (the beam cross section), the NLSE has no stable solutions of the form of a beam propagating with a constant diameter, at least in the paraxial optics approximation. The Kerr-nonlinearity is ”stronger” than the diffraction, so that catastrophic collapse of the beam cross section occurs . It was pointed out that such collapse can be avoided if the nonlinearity is ”saturable”, i.e. reduces with increased light intensity . In this way a variety of experiments on propagation solitons in 2D has been possible . Technical applications in information routing and field steering with such propagation solitons are under consideration.
A particular situation occurs if such a ”self trapped” beam propagates inside an optical resonator. The finite mirror reflectance acts in this case somewhat similarly to a saturation of the nonlinearity, because in each round trip the unfocused light irradiating the resonator is added to the already self-focused light, thus continually weakening the self-focusing. Consequently, in resonators of finite finesse stable filamentation is possible .
Evidently, the stability of such structures can be enhanced further by a saturability of the nonlinearity of the material filling the resonator. Thus, the occurrence of spatial resonator solitons has been predicted for a number of nonlinear materials . The first observations of such solitary structures in optical resonators occurred before the bulk of theoretical work on passive resonator solitons appeared. In , such solitary structures were observed using a liquid crystal film inside a resonator, and in , such a spatial soliton was observed in a resonator made up of two phase-conjugating mirrors which contained a saturable absorber.
Spatial resonator solitons can exist if the characteristic of the resonator shows two coexisting stable steady states (bistability). In such a bistable resonator, if it is of large Fresnel number, domains of the two states can exist, which are then connected by ”switching fronts” (or switching waves). Such switching waves move into or out of domains of one of the states depending on the difference between the background field value and the field value corresponding to the unstable steady state solution lying between the two stable steady states. A switching front will move so as to expand the domain of the higher-intensity state if the background field is larger than that of the unstable steady state, and so as to expand the domain of the lower-intensity state if the background field is smaller than that of the unstable steady state. Thus in general one kind of domain will shrink and the other expand.
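This dependence of the front motion on the background value can be illustrated with a generic bistable-medium sketch (ours; a Nagumo-type reaction-diffusion model, not a model of the resonators discussed here). The front expands the favored state, and it is stationary only at the balance point; in the resonator the role of the control parameter is played by the background field relative to the unstable steady state.

```python
import numpy as np

# Bistable medium with stable states u = 0 and u = 1, unstable state u = a:
#   u_t = D u_xx + u (1 - u) (u - a)
# The front connecting the two states moves at speed ~ (1 - 2a) sqrt(D / 2).
def front_speed(a, D=1.0, nx=400, L=200.0, dt=0.01, nsteps=10000):
    x = np.linspace(-L / 2, L / 2, nx)
    dx = x[1] - x[0]
    u = (x < 0).astype(float)                 # u = 1 on the left, 0 on the right
    def front_pos(u):                         # position of the u = 1/2 level
        return x[np.argmin(np.abs(u - 0.5))]
    p0 = front_pos(u)
    for _ in range(nsteps):
        up = np.pad(u, 1, mode="edge")        # no-flux boundaries
        lap = (up[2:] - 2 * u + up[:-2]) / dx**2
        u = u + dt * (D * lap + u * (1 - u) * (u - a))
    return (front_pos(u) - p0) / (nsteps * dt)

for a in (0.3, 0.5, 0.7):   # unstable state below / at / above the balance point
    print("a = %.1f : measured front speed = %+.3f" % (a, front_speed(a)))
```

The measured speeds come out near +0.28, 0 and -0.28, matching the analytic value; the locking of such a front into a stable ring requires in addition the damped spatial oscillations discussed next.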
In general the asymptotic state will be that the total resonator cross section is entirely switched to one of the two states. If, however, the system is not far from a modulational instability, then the switching fronts do not aperiodically connect the two states but can be accompanied by damped spatial oscillations on either side of the fronts. Then as a domain contracts, finally the switching front on one side of the domain will ”feel” the spatial oscillations of the field close to the front on the other side of the domain. The spatial field minima can then ”trap” the front of the other side of the domain, in which case the (small) domain has attained a stability (and is then called a solitary or localized structure). It can be freely moved around the resonator cross section. If the spatial field oscillations on the one side of the domain trap not the front from the other side itself, but the spatial oscillations accompanying it, then a higher order spatial soliton is formed .
We have found so far that the low-order solitons resemble Gauss-Laguerre modes with ring nodes (not the flower-like variety). There is no obvious reason why that should be so. Optical resonator modes are the eigenfunctions of a boundary problem with the boundaries given by the mirror surfaces. Conversely, the solitons are the solutions of a self-consistency problem, where the ”potential” (constituted, for the linear modes of an optical resonator, by the resonator mirrors) is created by the light field itself. Thus it is not obvious why these two problems should have similar solutions, and the reasons for the similarity of the solutions are an open question. Interestingly it has been found that the potential created by a fundamental soliton can actually allow, besides the existence of the field of the fundamental soliton, the stable additional existence of a 1<sup>st</sup> order soliton .
One can, in all cases, picture a spatial soliton as a small domain of one of two coexisting states surrounded by a stationary switching front which has locked into a stable ring. Due to the bistable character of the resonator such spatial solitons are bistable. They can be switched on or off and are thus suitable for carrying information.
## II Solitons in laser with nonlinear absorber <br>
The theoretical work on solitons of active resonators dates back to the 80s (see a summary in ).
It suggested experiments with a repetitively pulsed dye-laser with an internal saturable absorber . FIG. 1 shows the output power of the dye laser with an internal bacteriorhodopsin (BR) absorption cell as a function of pump power. The pulse repetition rate is 12 Hz, and the acidity of the BR-absorber solution is chosen for an absorber recovery time constant of 300 ms, so that the system dynamics is slaved by the absorber and the system can be treated like a continuously emitting system. The bistability of the system is apparent. The upper part of FIG. 1 shows the output beam cross section. Apparently the narrowest beam occurs within the bistable region. It represents a spatial soliton.
The resonator used is of ”self-imaging” type. This kind of resonator is in its transverse mode structure equivalent to a plane resonator of zero length. For the precise self-imaging length, its transverse modes are completely degenerate. The diffraction losses, equally, correspond to a plane resonator of zero length. Thus, this resonator, on the one hand, has the complete transverse mode degeneracy of a plane resonator of zero length as necessary for arbitrary images to resonate, and on the other hand it has sufficient length to house various intracavity elements without the detrimental diffraction losses of a plane resonator of the same length. (It may be noted that such a resonator also permits one to realize a negative effective length as far as the transverse mode structure is concerned.)
FIG. 2 shows the ”writing” of a spatial soliton in this system in various places of the resonator cross section. The absorber cell was locally bleached for a short time (by a He-Ne laser beam). The result is a stationary spatial soliton (which remains after the external bleaching is stopped). FIG. 2 shows on the one hand that the solitons are bistable (i.e. can be switched on and off) and on the other hand that they can exist at any location in the cross section.
The motion of solitons in field gradients was also tested. In a fluid analogy of the laser, a phase gradient corresponds to a flow velocity and an intensity gradient to a density- (or pressure-) gradient. Thus a soliton should move in such gradients. FIG. 3 shows experiments with phase gradients. In FIG. 3a a phase gradient across the laser cross section was created by a small tilt of one laser resonator mirror. The snapshots taken at equidistant times show the motion of the soliton induced by the phase gradient. By changing the length of the self-imaging laser resonator away from the precise self-imaging length, in FIG. 3b a ”phase trough” was created with its minimum at the center of the resonator. As the snapshots show, the soliton is drawn from all sides towards the center of the phase trough, where it is then trapped. This movement and trapping of solitons would likely be important for uses of resonator solitons in optical information processing .
In these initial experiments, the laser containing the nonlinear absorber was emitting a large number of longitudinal modes. Thus the tuning of the resonator was of no importance, as the modes emitted adjust to the resonator length. The resonator tuning does, however, affect the solitons, their motion, and their characteristics if emission is restricted to a single longitudinal mode family. The simplest way for such mode selection is an active medium with a very narrow gain spectrum. By far the narrowest ”gain” spectra (if one interprets them in a laser-physics sense) belong to photorefractive gain media . Therefore experiments were conducted using resonators of self-imaging type with photorefractive gain. To picture the effects of a narrow gain line one can think of the self-imaging resonator as a plane-plane resonator. Such a resonator has a (longitudinal) resonance if the length can accommodate an integer number $`N`$ of (half) wavelengths of the radiation generated, whose wavelength is fairly strictly given by the wavelength of the center of the gain line.
Then, if the length of the resonator is between $`N\lambda /2`$ and $`(N+1)\lambda /2`$ the emission can adjust to this resonator length by tilting the propagation direction of the radiation with respect to the resonator axis. In this way $`N`$ half wavelengths can be accommodated in the resonator . Such a ”tilted wave” has a propagation component along the resonator axis and a (small) propagation component lying in the resonator mirror plane. Light will therefore move across the resonator section in a detuned resonator, corresponding to the motion in a phase gradient. We can therefore expect stationary solitons in the case of precise resonator tuning, and moving solitons for detuning.
The stationary solitons correspond to the stationary solitons described above for the broadband (dye) multi-mode laser, which can adjust its detuning to zero by choice of different longitudinal modes. For the moving solitons corresponding to detuned (tilted-wave) emission, one can from this picture directly deduce the characteristics of the soliton motion:
1) the direction of the tilt of the wave vector of the light generated is free. Only the decomposition of the wave vector into a longitudinal and a transverse component, in magnitude, is fixed. Thus the direction of motion of the solitons is a priori undetermined. The actual direction of motion is determined by spontaneous symmetry breaking (in the same way as a single mode laser chooses the phase of its field at laser threshold). As for the laser phase, there is here no restoring force for a particular direction. The direction of motion of a soliton can therefore change under external influence. In the presence of noise it will change diffusively.
2) Whereas the direction of motion of a soliton, as well as its position in the laser cross section are free, the magnitude of the soliton velocity is fixed and given by the detuning of the resonator (wave vector tilt is proportional to detuning).
3) Different longitudinal orders can be emitted simultaneously if their wave vectors are tilted by different amounts. Thus in a resonator of length N $`\lambda `$/2, which emits a stationary soliton, simultaneous emission of moving solitons is possible. According to the different longitudinal orders, the velocities of the moving solitons are quantized for a resonator of given length.
4) If we consider that in the experiment a self-imaging resonator is used, with the nonlinear absorber in the near field (near a plane mirror) and the (photorefractive) gain medium in the far field, a strange type of competition among moving solitons follows. Fields of two solitons overlapping inside the gain medium compete. Only one soliton can then survive. The consequence is a competition of solitons in velocity space. If and only if two solitons have the same vectorial velocity (i.e. equal direction and magnitude of velocity) will they compete; notably, this holds even when they are far apart in the near field plane.
5) For stationary solitons the competition condition is trivially fulfilled. Therefore only one stationary soliton can exist at a time.
6) Moving solitons of different direction of motion and/or different magnitude of velocity can coexist, thus in general a large number of moving solitons can coexist with one stationary soliton.
FIG. 4 pictures the situation: a stationary soliton corresponds to emission along the resonator axis. Moving solitons correspond to emission at a fixed angle (given by detuning), to the resonator axis, in the far field. Therefore the stationary soliton corresponds to light in the central spot of the Airy rings of the resonator, while the moving solitons correspond to emission on the rings.
The restriction on the wavelength of the light generated is given by the finite width of the gain line of the active medium. The finite width of this gain line corresponds to the allowed spread of tilt angles of the emitted wave. Therefore the emission of the stationary soliton corresponds to a central disk of finite diameter and that of the moving solitons to finite area sections of a ring. For moving solitons, the wavevector of emitted light changes faster with radial angle than with azimuthal angle. Therefore the light of a moving soliton in the far field occupies an elliptical section of an Airy ring (see FIG. 4). Its Fourier transform (the moving soliton in the near field) is of elliptical shape, with the long axis along the direction of motion (suggesting again a fluid picture), FIG. 4.
FIG. 5 shows cases of all these soliton types as recorded on a photorefractive BaTiO<sub>3</sub> oscillator with a BR-saturable absorber which uses a self-imaging resonator . Excitation of emission segments on two rings at the same azimuthal angle results in an ”inch-worm” soliton, FIG. 5.
In these experiments the gain elements of the lasers were placed in the conjugate plane of the near field. This leads to competition among certain solitons and, in particular, only one stationary soliton can exist. For applications where such solitons are to serve as binary elements for information storage, however, large numbers of stationary solitons are desirable.
In order to test whether this is achievable, experiments were conducted with the gain element and the nonlinear absorber both in the near field plane. This case had been extensively treated in , in the form of both elements being inside a plane resonator. In the experiments the unsaturated absorption of the nonlinear absorber was so high that, even at the highest pump power available, the laser could not be brought to emit. External bleaching of the absorber (by a green laser) was used for complete saturation of the absorber, so that large area laser emission occurred. On then reducing the pump strength, the absorber gradually unsaturates and becomes intensity dependent (nonlinear).
FIG. 6 shows the formation of the solitons in the experiment. To illustrate the development of solitons out of the laser emission, FIG.7 shows a numerical calculation of the process:
a) shows the emission typical for a tuned laser: A number of optical vortices exist which are separated by ”shocks” (”vortex glass” ).
b) as the pump is reduced, the vortices develop into dark areas and the shocks convert to 1-dimensional soliton-structures (see c), d)).
c) Further reduction of the pump leads to shortening of the 1-D solitary structures, which can then be converted to 2-D bright solitons by increasing the pump slightly.
d) This final increase in pump is necessary since the diffraction losses for a 2-D soliton are larger than for a 1-D solitary line. For details see . This can be seen in FIG. 6: in the outer regions where the pump is weak, stripes prevail, while at the higher pump in the center, spots dominate.
FIG. 8 finally shows ensembles of 2-D solitons in the final stage, for different pump powers. It appeared that the number of solitons existing in the final state is a monotonic function of the pump power.
As the pump beam has a Gaussian intensity profile, gradients existed in the emission, which caused a slow outward motion of the solitons. Thereby some solitons would reach the edge of the emission field and extinguish there. This loss of solitons at constant pump power was accompanied by continual splitting of solitons in the central area of the near field. It appeared that the splitting occurred to balance the loss of solitons at the edges. FIG. 9 shows such soliton splitting.
Time has not yet allowed us to study the interaction of solitons in this system, which must exhibit interesting phenomena. Each of the solitons here is an independent laser whose phase is arbitrary. The interaction between solitons would on the one hand depend on the relative phase; on the other hand, two solitons whose phases are free to change can be expected to synchronize them. Whether this synchronization would be in phase or out of phase, combined with the initial independence of the phases of the individual solitons, should produce a complicated interaction.
## III Parametric mixing solitons <br>
Whereas the field of a laser, as used in the experiments described above, can have any phase value, in wave-mixing with phase matching the phase of the generated field is tied to the phase of the pump field. The generated field can therefore be described as a real-valued variable - as opposed to the complex-valued field of a laser.
Spatial resonator solitons require, as has been described in Sec. I, a bistable characteristic of the resonator. The experiments described so far utilize a subcritical bistability with a high- and a low-intensity branch. For degenerate wave mixing such as 4-wave mixing (D4WM) or degenerate parametric mixing (DOPO), a phase bistability of the (real-valued) field occurs . Although this is a symmetric and supercritical bistability, one would expect that this kind of bistability would also support spatial solitons, which we would call phase solitons. A calculation shows that such spatial solitons do indeed exist for a finite small detuning.
FIG. 10 gives the shapes of such solitons in intensity and phase. Inside the solitons the field phase is opposite to that of the surroundings, so that a dark circular interference fringe forms the switching front connecting the two steady states.
The corresponding experiment was conducted using D4WM in BaTiO<sub>3</sub> . FIG. 11 shows the resonator used. Two pump beams together with the generated fields form an index grating in the material which diffracts pump radiation into the generated fields and adjusts self-consistently to the generated field.
The two counter-propagating generated fields resonate in the same (linear) resonator, which forces their degeneracy and with that the bistability and real value of the generated field. A typical intensity distribution of the field generated experimentally is shown in FIG. 12. Small circular domains coexist with larger domains and black domain walls, as was expected. FIG. 13 shows a domain wall of complicated shape together with an interferogram proving the opposite phase of the field on either side of the domain wall. We note that the domain walls themselves are extended 1-D solitary structures . They are the switching waves connecting the two steady states of the resonator (field with +$`\pi `$/2 and -$`\pi `$/2 phase). In general such switching waves move and the domains they surround grow or shrink, the length of the domain walls expanding or contracting. The expansion/contraction can be controlled by resonator detuning . FIG. 14 shows the contraction of a domain wall.
Although in the experiment the small domains appeared to be stable, it is necessary to prove their stability more explicitly since stability is hard to distinguish from a slow transient dynamics. Recordings under well-defined resonator tuning conditions were therefore analysed. The resonator length was for this purpose actively stabilized with respect to the pump light frequency in a manner similar to that described in .
FIG. 15 shows three snapshots out of an evolution captured in 20 frames. Fig.16 shows the change of the length of the boundaries of the domains ”1”, ”2”, ”3” as a function of time as measured on the 20 recorded frames. The largest domain-”1”-boundary shrinks fastest. The medium sized domain-”2”-boundary shrinks at a slower rate, while the domain-”3”-boundary does not change in time. This is proof that the small circular domain ”3” is stable and represents a phase-soliton. The faster shrinking of the larger domain is what is expected theoretically .
Stability of a soliton, under conditions where the shrinking rate of a large domain is even larger, is shown in the four snapshots FIG.17.
Analysing the 20 frames out of which the FIG. 17 snapshots are taken leads to FIG. 18, proving again that the small domain is a phase-soliton.
## IV Nonlinear semiconductor resonators <br>
An interesting nonlinear material for technical applications is a semiconductor. Solitons in semiconductor resonators were predicted in .
We have used a nonlinear semiconductor Fabry Perot for initial experiments on spatial solitons. The nonlinear medium consists of three quantum wells (FIG. 19). These three wells have, between themselves and the 99.5 $`\%`$ Bragg mirrors, spacers to make the space between the mirrors equal to a few $`\lambda `$/2. The thickness of this structure is a few microns while the cross section of the resonator is a few cm. Details about these nonlinear Fabry Perots are given in . The resonance of the Bragg resonator is slightly dependent on the location on the sample, so that by choice of the area to be irradiated the wavelength of excitation of the semiconductor material can be chosen to lie either in the interband transition, between the interband transition and the exciton line, on the exciton line, or above the exciton line. Typically we work a few tens of nm above the exciton wavelength, so that the nonlinearity is largely dispersive and defocusing. The empty resonator finesse is around 500. With the residual absorption of the semiconductor material the resonator finesse is about 100. A cw Ti:Al<sub>2</sub>O<sub>3</sub>-laser is used for the excitation.
To avoid thermal effects the observations are done during radiation pulses of a few microseconds length which are repeated every millisecond. To create the pulses acousto-optic modulators are used. The radiation is focused into a spot size of 50 - 100 $`\mu `$m on the semiconductor resonator surface, the light reflected from the sample is observed by a CCD camera or by a fast (2ns) photodiode.
As has been theoretically predicted for such dispersively nonlinear resonators, under irradiation structure forms. FIG. 20 shows that the structure is a hexagonal lattice as expected .
Bistability of the resonator is easily reached (at intensities of a few 100 W/cm<sup>2</sup>). FIG. 21a shows the incident intensity (dashed) and the reflected intensity (solid) as measured by the fast photodiode.
The reduction of reflected light, as the sample is switched on, is clearly seen, as is the increase of reflected light as the sample is switched back off. From the intensities at which the switching ”on” and ”off” occurs, the width of the bistability loop is apparent. After the resonator was switched ”on” at the point of observation (image of detector) we varied the irradiating intensity in order to observe the motion of the switching waves connecting the on- and off-switched regions. By recording curves as in FIG. 21a for different locations across a diameter of the laser irradiation spot on the sample one is able to construct the time history of the resonator dynamics on this diameter. FIG. 21d shows a recording thus obtained.
Brightness in FIG.21d corresponds to reflectivity value. The corresponding irradiation is given in FIG.21b in the form of equi-intensity lines.
As the irradiation intensity is initially increased the switching-on threshold is reached at a certain time in the center of the laser field. A switching wave travels then outward until it becomes stationary. We call it then ”switching zone”. As mentioned in Sec. I a switching wave moves into the unswitched region if the background intensity is larger than that corresponding to the unstable steady state solution on the unstable branch of the S-shaped resonator characteristic and vice versa. Thus the switching wave becomes stationary at a particular intensity corresponding to a certain distance from the maximum of the Gaussian laser beam. This is what we observe.
When the power of the laser field is reduced, one would then expect that the stationary switching wave (switching zone) would move towards the center of the laser beam. Comparing FIG. 21b and FIG. 21d this is confirmed. The switching zone (boundary between on- and off-switched areas) moves precisely on an equi-intensity contour of the input light. FIG. 21c shows for clarity the equi-reflectivity line corresponding to the switching zone, which follows the second lowest intensity contour of the incident light.
If one chooses a location on the sample where the resonator resonance is further from the exciton line, the switching zone becomes accompanied on the lower branch side by fringes. This is an indication that under these conditions the lower branch is close to a modulational instability. This is a requirement for the formation of spatial dark solitons as described in Sec. I.
Suitable choice of parameters appears indeed to lead to a solitary structure. FIG. 22 shows a bright narrow spot with a size of the order of the elements of the hexagonal pattern of FIG. 20. (FIG. 22 is an average over 20 laser pulses with rectangular intensity-vs-time form.) We can clearly show that this small structure is bistable as predicted for a soliton: FIG. 23 shows the proof. We use a rectangular laser pulse with a constant intensity in the middle of the bistability region. A short increase in intensity beyond the upper intensity of the bistability switches the small localized structure ”on”. It remains ”on” until the intensity is for a short time reduced to below the lower intensity of the bistability.
This clearly demonstrates that the small structure is bistable, as expected for a spatial soliton. We have also tested whether this structure has a stable shape, as one would expect for a soliton (or a circularly locked switching zone). This is shown in FIG. 24.
FIG. 24a gives the equi-intensity lines of the input field. During a short high intensity period at the beginning, the central part of the beam is switched up (see the reflected intensity, FIG. 24c). Reducing the intensity then lets the switched-up region contract to the small diameter of FIG. 22.
We test the stability of this structure now by a variation of the light intensity: if the small bright structure is just a circular switching zone (which is not locked and thus is not a soliton) then its diameter should follow a contour of the incident light. FIG. 24b shows the contour corresponding to the switching zone. Evidently it does not follow any of the contours of the incident light (FIG. 24a). This indicates a certain robustness of the narrow structure against changes of system parameters, for which reason FIG. 22 can be taken as the first indication of the existence of localized structures in semiconductor resonators.
A more explicit test of the existence of such independent localized structures was possible by injection of spatially narrow, temporally short light pulses into the illuminated area . FIG. 25a shows a collection of bright spots resulting from illumination of the area shown. The narrow pulse is first directed at the spot marked ”a”. As can be seen in FIG. 25b, this switches the bright spot ”a” to dark, all other spots remaining unchanged.
Correspondingly, the second pulse was directed at bright spot ”b”, which is equally switched to dark, all other spots remaining unchanged, as seen in FIG. 25c. As FIGs. 25a to c are time-averaged pictures which do not directly show the switching, FIG. 25d gives the intensity at the center of a switched spot as a function of time. The upper trace corresponds to a pulse energy not sufficient for switching, and no permanent switch of the bright spot results. With sufficient energy of the pulse, however, permanent switching occurs, i.e. the intensity remains small throughout the illumination (until the reduction of the background intensity near the end of the illumination returns the resonator to monostable). Thus, in this case, individual bright spots are found which can be switched independently of the rest of the system. This makes the bright spots observed very ”soliton-like” .
To experiment with single bright spots, in order to show their soliton nature, was not possible under the largely dispersive conditions used, because the collection of bright spots appears largely as a consequence of linear filtering of the high finesse, high Fresnel number resonator. For details see .
In order to suppress this linear (”noise induced”-) structure, we chose to work at lower resonator finesse i.e. closer to the band edge or the exciton line, where, moreover, the dissipative solitons predicted in could be more likely expected. FIG. 26 shows observations (pictures were taken as snapshots of 50 ns duration) under these conditions.
FIG. 26a shows a switched area (the resonator field is high in the dark area because observation is in reflection) surrounded by a switching front. For small intensities such a switched area collapses into the structure shown in FIG. 26b, which shows all the characteristic features of a bright soliton (dark due to observation in reflection), particularly the spatial oscillations around it .
We have recently been able to switch this structure on and off by a narrow pulse similar to what was done to observe Fig. 25d.
Interestingly, at higher illumination intensity ”dark” solitons (bright in reflection) appear. FIG. 26c shows such a soliton, embedded in the upswitched area, as is to be expected. Such dark solitons were predicted in . As predicted there, we have found that these ”dark” solitons are less stable than the bright ones. They appear to move, and we find that for smaller intensities they tend to pulse in a regular fashion, somewhat similarly to what was predicted in .
A hint towards the nonlinear nature of these structures comes from the brightness of the light reflected on the structure. Quantitative intensity measurement shows that the light reflected at the center of the structure is almost twice as high as the illumination intensity. This means a reflectivity higher than one. This has to be interpreted such that the structure collects light from its surroundings and emits it at its center.
FIG. 27 finally shows that more than one soliton can exist, even under our conditions, which are limited by finite laser power and spatial nonuniformity of the illuminating field . With these solitary structures in semiconductor microresonators, information processing and storage should be possible.
This work was supported by ESPRIT projects PASS and PIANOS. Growth of the semiconductor Fabry-Perots by I. Sagnes is gratefully acknowledged.
# NUC-MINN-00/07-T, March 2000: Two-Loop Contribution to High Mass Dilepton Production by Quark-Gluon Plasma
## Acknowledgements
This work was supported by the US Department of Energy under grant DE-FG02-87ER40328.
# Physics at 𝑒⁺𝑒⁻ Linear Colliders
## Introduction
The past decade of precision electroweak experiments has seen outstanding confirmation of the Standard Model at the per mille level. But the successes of the Standard Model have drawn increased attention to its deficiencies, notably its unsatisfactory treatment of the mechanism behind electroweak symmetry breaking. This phenomenon occurs roughly at the 1 TeV scale, which the LHC will access directly. The merits of any additional machine must therefore be evaluated within the context of the LHC program. A TeV-scale $`e^+e^{-}`$ linear collider (LC) is a leading candidate for such a facility. Such a machine offers control over the beam energy and polarization, and a clean environment that enables precision event reconstruction. In this paper I illustrate aspects of the LC physics program with examples drawn from Higgs physics, top quark physics, and the study of large extra spacetime dimensions. More comprehensive reviews of physics at $`e^+e^{-}`$ linear colliders are given in Refs. \[Snowmass96\], \[Peskin-Murayama\], and \[TESLA-review\].
## Machine and Detector Overview
Well-developed LC designs have been put forward by the SLAC-KEK joint effort (the NLC/JLC designs) \[NLC-ZDR\] and by DESY (the TESLA design) \[TESLA-design\]. The two machines differ technologically but achieve similar ends. The NLC uses warm rf cavities operating in the X-band (11.4 GHz). The baseline design assumes initial operation at $`\sqrt{s}=500`$ GeV at a luminosity of $`5\times 10^{33}`$ cm<sup>-2</sup>s<sup>-1</sup>, with an $`e^{-}`$ beam polarization of 80-90%. The linac is designed to allow adiabatic energy upgrades to $`\sqrt{s}=1`$ TeV through the addition of klystrons. The size of the beam-delivery and final focus systems would allow eventual operation at 1.5 TeV. The TESLA design uses superconducting rf operating at $`1.3`$ GHz, and reaches a center of mass energy of 800-1000 GeV. Initial operation would be in the 200-500 GeV range, with an $`e^{-}`$ polarization of 80% as well as a positron beam polarization of 60%, which would introduce new measurables into physics processes. This design permits operation at very high luminosity, up to $`5\times 10^{34}`$ cm<sup>-2</sup>s<sup>-1</sup>. The two designs have rather different beam characteristics and time structures. A brief comparison of the TESLA and NLC parameters is shown in Table 1; a more complete list can be found in Ref. \[machine-params\].
Designing a detector for a linear collider is widely regarded, at least by those who work at hadron colliders, as an easy problem. Certainly the LC does not share the LHC’s formidable challenges of high event rates and radiation exposures. The challenges for a linear collider detector stem from the desire to fully exploit the clean machine environment by building a detector of the highest possible precision. Current designs are evolutionary extensions of the LEP and SLD detectors and feature CCD pixel vertex detectors, silicon or TPC outer trackers, and a fine-grained EM calorimeter located inside the magnet coil. Several designs incorporate the hadronic calorimeter inside the coil as well. In contrast to the LHC where triggering is a major challenge, at a linear collider the full detector can be read out between bunch trains and triggers formed in software.
## Light Higgs Physics
Current electroweak data point to the existence of a light Higgs between roughly 100 and 200 GeV, with the lower end of this range being favored by the fits \[LEP-EWWG\]. If so, the Higgs may be discovered in the near future at LEP \[Wu\] or the Tevatron \[Hobbs\], but if not there then certainly at the LHC. A Higgs in this mass range can be convincingly observed at the LHC through such channels as $`H\rightarrow \gamma \gamma `$, $`H\rightarrow ZZ^{(*)}`$ or $`WW^{(*)}`$, and production of $`t\overline{t}H`$ followed by $`H\rightarrow \gamma \gamma `$ or $`b\overline{b}`$ \[ATLAS-phyTDR\]. Yet the LHC cannot see, or can see only with great difficulty, many important Higgs decays, such as $`H\rightarrow c\overline{c}`$ and $`H\rightarrow \tau ^+\tau ^{-}`$, that are critical to determining if this object is indeed the Higgs, the relic of electroweak symmetry breaking that couples to fermions in proportion to their masses.
At the LC, a light Higgs can be cleanly observed in recoil off the $`Z`$ \[Higgs-recoil, juste\]. The cross section for this process peaks in the 250-400 GeV range, making light Higgs physics an attractive target for the initial phase of the LC physics program. Since the signature of this process is a monoenergetic $`Z`$ boson, these events can be reconstructed with high efficiency independent of the Higgs decay mode. This gives a clean inclusive sample in which to study Higgs decays, measure $`m_H`$ to 100-200 MeV (similar to the LHC), and obtain an extremely precise measurement of the $`H`$-$`Z`$ Yukawa coupling \[juste\]. A sample recoil mass plot is shown in Figure 1, in comparison to the dominant $`H\rightarrow \gamma \gamma `$ discovery signal for the same-mass Higgs at the LHC.
To exploit this inclusive sample fully, however, it is necessary to have a vertex detector capable of cleanly separating bottom, charm, and light quark jets. This ability is provided at the LC by a CCD pixel vertex detector, which can be located as close as 1 cm from the beam. Such a device is far too slow and rad-soft to be practical at a hadron collider, but the more forgiving environment of the LC allows one to exploit its superior spatial resolution for excellent flavor separation. The payoff of this capability is demonstrated in a recent study by Battaglia \[Battaglia-sitges\], summarized in Figure 2. With 500 fb<sup>-1</sup> at $`\sqrt{s}=350`$ GeV, the branching ratios $`H\rightarrow b\overline{b},c\overline{c},\tau ^+\tau ^{-},gg`$, and $`WW^{*}`$ can be measured with an accuracy of 2-5%. Interestingly, the $`H\rightarrow \gamma \gamma `$ mode, which is the prime discovery channel for a $`120`$ GeV Higgs at the LHC, is undetectable in this production mode at the LC due to its very small ($`10^{-3}`$) branching fraction. (The inverse process $`\gamma \gamma \rightarrow H`$ can be observed if the LC is operated as a $`\gamma \gamma `$ collider, using backscattered Compton photons from the primary $`e^+e^{-}`$ beams.)
This ensemble of branching ratio measurements can be used to distinguish a SM Higgs from the lightest Higgs ($`h^0`$) of the MSSM. Typically the branching ratios of the $`h^0`$ are equal to those of the SM Higgs, times a function of $`\mathrm{tan}\beta `$ and $`M_{A^0}`$, where the $`A^0`$ is the heavy, CP-odd Higgs of the MSSM. A likelihood fit can therefore be used to determine whether the collection of observed BR’s is more consistent with the SM or with the MSSM. Separation of the SM from the MSSM Higgs can be determined at the 90% confidence level with the above measurements for $`M_{A^0}`$ up to 730 GeV, with the dominant uncertainty coming from knowledge of the $`b`$ and $`c`$ quark masses \[Battaglia-sitges\]. If SUSY exists at this scale, it will most likely have been discovered at the LHC, but measurements such as this will constitute a vital precision test that can only be performed at the LC, as a muon collider, too, has difficulties with high-precision charm ID.
## Top Quark Physics
The top quark’s privileged status as the most massive known matter particle, and the only fermion with a mass at the “natural” electroweak scale, makes it a prime target for all future colliders. The LC aims to carry out a complete program of top quark physics, including measurements of top’s mass, width, form factors, and, perhaps most interestingly, its Yukawa coupling to the Higgs. Furthermore, the process $`e^+e^{-}\rightarrow t\overline{t}\nu \overline{\nu }`$, accessible at a 1.5 TeV LC, can be a sensitive probe of electroweak symmetry-breaking by new strong interactions \[ttbarnunu\].
The mass of the top quark, $`m_t`$, is a precision electroweak parameter that affects relationships among other electroweak observables such as $`M_W`$, $`M_Z`$, $`\mathrm{sin}^2\theta _W`$, and $`m_H`$. Future measurements at the Tevatron and the LHC are likely to give a 2-3 GeV precision on $`m_t`$, dominated by systematics. At the LC, the top quark’s mass can be determined to about 100-200 MeV, and the width to about 7%, in a relatively low-luminosity (10-50 fb<sup>-1</sup>) threshold scan \[top-threshold\]. But what would we gain from such a high precision measurement? Table 2 shows the fractional precision on $`m_H`$ that would follow from various uncertainties on $`M_W`$ and $`m_t`$ \[TESLA-review\]. A 200 MeV uncertainty on $`m_t`$, together with a 15 MeV uncertainty on $`M_W`$ (which may be achievable from a high-luminosity return to the $`W`$ pair threshold with the LC), yields a 17% uncertainty on $`m_H`$. For comparison, an uncertainty of about 50% is expected from measurements at LEP II and the Tevatron. The Higgs is likely to have been discovered by the time the LC makes this measurement, in which case it will serve as a key consistency test—much like the comparison of the directly measured $`m_t`$ to the value inferred from electroweak data does today.
Of still greater interest, however, is a direct measurement of the top-Higgs Yukawa coupling, $`\lambda _{t\overline{t}H}`$. Such a measurement, like those of the Higgs branching ratios discussed above, is needed to establish the “Higgsness” of the Higgs, and may also probe the special nature of the top quark. At the LHC, the ratio $`\lambda _{t\overline{t}H}/\lambda _{WH}`$ can be measured to an accuracy of 25% for $`80<m_H<120`$ GeV \[ATLAS-phyTDR\]. For a light Higgs, $`\lambda _{t\overline{t}H}`$ can be measured at the LC in $`t\overline{t}H`$ production. For Higgs masses around 120 GeV, the cross section for this process peaks at about 2.6 fb for $`\sqrt{s}=700`$-$`800`$ GeV and then falls off slowly. This is some three orders of magnitude smaller than the dominant $`t\overline{t}`$, $`WW`$, and $`t\overline{t}Z`$ backgrounds. But the spectacular nature of these events ($`qqqqbbbb`$ if both tops decay hadronically, or $`qqbbbb+\ell ^\pm `$ plus missing energy if one top decays semi-leptonically), and their many kinematic constraints, provide enough handles that backgrounds can be acceptably reduced through direct mass reconstruction \[baer\] or a neural net \[juste2\]. In the latter study, the authors assume 1000 fb<sup>-1</sup> (about 3 years of running at $`\mathcal{L}=10^{34}`$ cm<sup>-2</sup>s<sup>-1</sup>) at $`\sqrt{s}=800`$ GeV, and obtain a 5.5% uncertainty on $`\lambda _{t\overline{t}H}`$. Outstanding flavor-ID is again a prerequisite for this measurement. The possibility of measuring $`\lambda _{t\overline{t}H}`$ with such high precision is a strong argument in favor of the highest possible luminosities.
## Large Extra Dimensions
The recent proposal \[lowscale, ADD\] to resolve the hierarchy problem through a theory of low-scale quantum gravity with large extra spacetime dimensions has generated great interest because of its testable consequences at colliders \[ADD-pheno, Leff, Hewett\]. In these models, Standard Model fields are confined to the 4-dimensional boundary of a “bulk” with $`n`$ compact extra dimensions of characteristic size $`R`$. Gravitons propagate in the bulk, where they couple with a strength of order the electroweak strength (hence the elimination of the hierarchy). The apparent weakness of gravity in our 4-dimensional world arises from the geometrical suppression of the gravitational flux lines by a factor proportional to the volume of the compact extra dimensions:
$$M_{\mathrm{Pl}}^2=V_nM_s^{n+2},$$
where $`M_{\mathrm{Pl}}=10^{19}`$ GeV is the Planck scale, $`V_n\propto R^n`$ is the volume of the compact extra dimensions, and $`M_s`$ is the fundamental Planck scale in the bulk. If we require $`M_s`$ to be of order a few TeV to eliminate the hierarchy, we can obtain the characteristic size $`R`$ of the extra dimensions for various values of $`n`$. Values of $`R`$ as large as a fraction of a millimeter are permitted by current limits from Cavendish-type experiments \[long\].
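As a rough numerical illustration (a sketch of ours; model-dependent numerical factors such as the $`2\pi `$’s in $`V_n`$ are dropped), one can solve this relation for $`R`$ with $`M_s=1`$ TeV:

```python
hbar_c = 1.9733e-16        # GeV * m, conversion from natural units
M_Pl = 1.0e19              # GeV, the 4-dimensional Planck scale
M_s = 1.0e3                # GeV, fundamental scale taken as 1 TeV

# R^n = M_Pl^2 / M_s^(n+2) follows from M_Pl^2 = V_n M_s^(n+2) with V_n ~ R^n
for n in range(1, 7):
    R = (M_Pl**2 / M_s**(n + 2)) ** (1.0 / n)   # R in GeV^-1
    print("n = %d : R ~ %.1e m" % (n, R * hbar_c))
```

This reproduces the familiar pattern: $`n=1`$ gives a solar-system-sized $`R`$ and is excluded, while $`n=2`$ gives $`R`$ of a fraction of a millimeter, the scale probed by the Cavendish-type experiments mentioned above.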
For phenomenological purposes, we are most concerned with the effective Lagrangian that describes the interactions between gravitons and SM fields in our 4-dimensional world \[Leff\]. Gravitons then appear as a Kaluza-Klein tower, or series, of closely-spaced massive spin-2 states that can be emitted or exchanged along with SM gauge bosons. Each such state has a very weak coupling to matter, of order $`1/M_{\mathrm{Pl}}`$, but because of the large number of these states their cumulative effect is comparable to that of Standard Model processes at energies near $`M_s`$.
One way to search for the effect of these large extra dimensions at the LC is through the effect of graviton exchange on fermion pair production \[Hewett\]. This process is extremely well-understood theoretically and is a sensitive probe of many types of new physics, including $`Z^{\prime }`$’s, compositeness, and technicolor. Graviton exchange turns out to leave the total cross section and integrated left-right asymmetry unchanged, but modifies the angular distributions in a way that depends on a single parameter, $`\lambda /M_s^4`$. Here $`\lambda `$ is a dimensionless parameter of order one (but of either sign) that depends on model-dependent physics above $`M_s`$. A fit to the angular distribution of $`e^+e^{-}\rightarrow \ell ^+\ell ^{-},b\overline{b}`$, and $`c\overline{c}`$ gives the exclusion reach shown in Figure 3(a). A 1 TeV LC with 200 fb<sup>-1</sup> can exclude $`M_s`$ up to 6.6 TeV, similar to the 6.0 TeV achievable with the LHC in 100 fb<sup>-1</sup> using $`e^+e^{-}`$ and $`\mu ^+\mu ^{-}`$ final states only. However, the LHC may have difficulty distinguishing a graviton signal from some other new physics process, such as a $`Z^{\prime }`$. At the LC, the polarized beams and the ability to observe $`b\overline{b}`$ and $`c\overline{c}`$ final states allow a clear separation between spin-1 and spin-2 exchange for $`M_s`$ up to about $`5\sqrt{s}`$, as shown in Figure 3(b).
## Conclusions
TeV-scale $`e^+e^{-}`$ linear colliders offer complementary access to the physics of electroweak symmetry breaking that will be explored initially by the LHC. Assuming that both the NLC and TESLA designs prove technologically (and financially) feasible, the choice of which one to build may depend on the relative importance of high luminosity for the highest precision measurements at lower energies (TESLA), versus upgradability to 1-1.5 TeV for exploratory physics and possible fuller elucidation of the SUSY spectrum. More advanced accelerator designs, such as the two-beam CLIC [CLIC] design, may open the path to even higher energies in coming decades, ensuring a vibrant future for $`e^+e^{-}`$ physics in the post-LHC era.
## Acknowledgements
I would like to thank the organizers for a stimulating and enjoyable conference in a lovely setting. This work is supported in part by DOE contract number DE-FG02-95ER40899 and by NSF CAREER award PHY-9818097.
# NMR evidence for a “generalized spin-Peierls transition” in the high magnetic field phase of the spin-ladder Cu<sub>2</sub>(C<sub>5</sub>N<sub>2</sub>H<sub>12</sub>)<sub>2</sub>Cl<sub>4</sub>
## Abstract
The magnetic field-induced 3D ordered phase of the two-leg spin-ladder Cu<sub>2</sub>(C<sub>5</sub>N<sub>2</sub>H<sub>12</sub>)<sub>2</sub>Cl<sub>4</sub> has been probed through measurements of <sup>1</sup>H NMR spectra and $`1/T_1`$ in the temperature range 70 mK - 1.2 K. The second order transition line $`T_c(H)`$ has been determined between $`H_{c1}=`$ 7.52 T and $`H_{c2}=`$ 13 T and varies as $`(H-H_{c1})^{2/3}`$ close to $`H_{c1}`$. From the observation of anomalous shifts and a crossover in $`1/T_1`$ above $`T_c`$, the mechanism of the 3D transition is argued to be magnetoelastic, involving a displacement of the protons along the longitudinal exchange ($`J_{\parallel }`$) path.
Two-leg $`S`$=1/2 ladders are 1D objects formed by two antiferromagnetically (AF) coupled Heisenberg spin chains. In zero external magnetic field, their ground state is a collective singlet state ($`S`$ = 0), separated by a gap $`\mathrm{\Delta }`$ from the first excited states, which are triplets ($`S=1`$). As a consequence, the spin-spin correlations remain of short range even when $`T\to 0`$, in spite of the strong interactions. There is currently considerable interest in these systems, often named spin-liquids, since the short-range singlet correlations of the ground state are believed to lead to superconducting correlations when mobile charges are added.
The fascinating properties of spin-liquids can also be revealed through the effect of a magnetic field $`H`$. This can be described in four steps: (1) For $`H\ne 0`$, the gap is reduced as $`\mathrm{\Delta }(H)=\mathrm{\Delta }-g\mu _BH`$. (2) At the so-called quantum critical point $`H=H_{c1}=\mathrm{\Delta }/g\mu _B`$, the spin-gap vanishes. At $`T`$ = 0, this defines a (quantum) phase transition between gapped singlet and gapless magnetic phases. (3) For $`H>H_{c1}`$, the gapless spin system still exhibits 1D behavior at finite temperature, but as $`T`$ is reduced the magnetic correlation length and the spin-spin correlation functions now diverge (Luttinger liquid behaviour). This behaviour can be observed up to a saturation field $`H_{c2}`$ where all spins are polarized by $`H`$. (4) For $`H_{c1}<H<H_{c2}`$, the transverse coupling $`J_t`$ between ladders should drive the system towards a 3D magnetic ordering at low $`T`$. The nature of the 3D phase, in the vicinity of the two quantum critical points, is expected to be highly unconventional.
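As a quick orientation on the energy scales, the sketch below converts the critical fields quoted below into temperature units via $`\mathrm{\Delta }=g\mu _BH_{c1}`$, assuming $`g2`$ (an assumption for illustration; the actual $`g`$-factor is anisotropic):

```python
# Convert critical fields to Zeeman energies in kelvin via g*mu_B*H/k_B,
# assuming g ~ 2 (illustrative; the measured g-factor is direction-dependent).
MU_B_OVER_KB = 0.6717   # Bohr magneton over Boltzmann constant, in K/T
g = 2.0
for H_tesla in (7.5, 13.5):     # H_c1 and H_c2 quoted below
    print(f"H = {H_tesla:5.1f} T -> g*mu_B*H/k_B = {g * MU_B_OVER_KB * H_tesla:5.1f} K")
```

The roughly 10 K scale obtained for $`H_{c1}`$ matches the spin gap expected from the exchange couplings quoted below.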
Points (1-3) were previously observed in NMR studies of the spin-ladder Cu<sub>2</sub>(C<sub>5</sub>N<sub>2</sub>H<sub>12</sub>)<sub>2</sub>Cl<sub>4</sub>, in which the low values of the AF exchange couplings (between spins 1/2 on Cu<sup>2+</sup> ions) along the legs ($`J_{\parallel }\simeq `$ 3 K) and along the rungs ($`J_{\perp }\simeq `$ 13 K) lead to experimentally accessible values of $`H_{c1}`$ ($`\simeq `$ 7.5 T) and $`H_{c2}`$ ($`\simeq `$ 13.5 T). As to point (4), specific heat measurements in the field range 7-12 T have indeed revealed a phase transition towards a 3D ordered phase for $`T<1`$ K. However, no microscopic experimental insight has been reported so far, although this phase currently generates considerable interest.
In this Letter, we present a <sup>1</sup>H NMR study of Cu<sub>2</sub>(C<sub>5</sub>N<sub>2</sub>H<sub>12</sub>)<sub>2</sub>Cl<sub>4</sub> in the field range 7.5-14 T, including the $`T`$-dependence (in the range 70 mK-1.2 K) of the lineshape and of the nuclear spin-lattice relaxation rate $`1/T_1`$. From the splitting of NMR lines, we define the transition line $`T_c(H)`$ below which 3D ordering occurs. In addition, we observe through $`1/T_1`$ a drastic change in the low-energy spin excitations below $`\simeq `$ 1.3 K, which is above $`T_c`$. This behavior is correlated with anomalous shifts of some <sup>1</sup>H lines, which we attribute to the displacement of protons involved in the exchange path along the legs of the ladder. This is argued to demonstrate the magneto-elastic nature of the transition, which is in some way analogous to the incommensurate magnetic phase of spin-Peierls systems.
Experiments have been performed on a single crystal placed inside the mixing chamber of a <sup>3</sup>He-<sup>4</sup>He dilution refrigerator, the $`b`$-axis of the crystal being parallel to $`H`$. In this orientation, the number of inequivalent <sup>1</sup>H sites in the crystal is reduced to 24. All sites experience different hyperfine fields through their dipolar coupling to the electronic spins localized at the Cu sites. For large polarization of the electronic moments, these couplings lead to <sup>1</sup>H spectra extending over several MHz, which have been recorded at fixed field by sweeping the frequency. Fig. 1 shows the low-frequency part of such spectra recorded at 70 mK at various values of $`H`$. One clearly observes a splitting of all lines starting at $`H=7.55`$ T and increasing for higher $`H`$ values. This is the signature of an ordered magnetic phase. Following the evolution of the spectrum with $`H`$ for different $`T`$ values allows an accurate determination of the transition line $`T_c(H)`$. As shown in Fig. 2, $`T_c`$ rapidly increases as a function of $`H-H_{c1}`$ and then saturates around 900 mK for $`H\gtrsim `$ 9 T. In this range, the transition was determined at fixed value of $`H`$ and decreasing $`T`$. Again, we observe a line splitting (Fig. 1), which quickly increases below $`T_c`$ and then saturates at lower $`T`$, as expected for an order parameter. The resulting experimental phase diagram is shown in Fig. 2.
There are a few theoretical calculations of the line $`T_c(H)`$: close to $`H_{c1}`$, Giamarchi and Tsvelik predict a variation as $`(H-H_{c1})^{2/3}`$, resulting from the condensation of dilute hard-core bosons . Wessel and Haas rather propose an $`(H-H_{c1})^{1/2}`$ variation . As shown in the inset to Fig. 2, $`T_c`$ can be well fitted to $`(H-H_{c1})^{2/3}`$, with $`H_{c1}=7.52`$ T. Note that the predicted first-order transition, with a jump of the magnetization for values of $`H`$ close to $`H_{c1}`$, is not observed in our data even very close to $`H_{c1}`$. In all cases, each line splits at $`T_c`$ keeping the same center of gravity, thus indicating that the magnetization is continuous with $`T`$ or $`H`$, in agreement with earlier thermodynamic measurements .
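A minimal sketch of such a power-law fit (the data arrays are illustrative placeholders, not the measured points; the exponent is left free so the 2/3 and 1/2 predictions can be compared):

```python
# Fit Tc(H) = A * (H - Hc1)^alpha near the critical field.
# Data values are placeholders roughly following the 2/3 law, not measurements.
import numpy as np
from scipy.optimize import curve_fit

H = np.array([7.6, 7.8, 8.0, 8.5, 9.0])        # field (T)
Tc = np.array([0.13, 0.30, 0.42, 0.68, 0.90])  # transition temperature (K)

def power_law(h, A, hc1, alpha):
    return A * np.clip(h - hc1, 1e-12, None)**alpha

(A, hc1, alpha), _ = curve_fit(power_law, H, Tc, p0=[0.7, 7.5, 0.66])
print(f"A = {A:.2f} K/T^alpha, Hc1 = {hc1:.2f} T, alpha = {alpha:.2f}")
```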
A careful examination of the whole <sup>1</sup>H spectrum (Fig. 3) above $`T_c`$ reveals that the $`T`$-dependence of the shifts of the two lines at the left side of the spectrum does not scale with that of the other <sup>1</sup>H lines. These two lines are assigned to the protons H(2) and H(4), which lie along the exchange path $`J_{\parallel }`$ corresponding to the atom sequence Cu-N-H…Cl-Cu (see inset to Fig. 1). Since the shift of a proton <sup>1</sup>H($`i`$) is given by $`\delta h(i)=A(i)\chi _{Cu}`$, in which $`A(i)`$ is its hyperfine field and $`\chi _{Cu}`$ the spin susceptibility per Cu atom, the absence of scaling can only be explained if $`A(2)`$ and $`A(4)`$ become $`T`$-dependent. This, in turn, can only occur if the distances H(2)-Cu and H(4)-Cu change. This is thus clear evidence that some kind of lattice instability occurs prior to the magnetic ordering.
Since these protons lie along the exchange path corresponding to $`J_{\parallel }`$, any modification of the hydrogen bond should clearly change the magnetic excitation spectrum of the system, which can be probed by the nuclear spin-lattice relaxation rate ($`1/T_1`$).
Such a change is indeed observed in the $`T_1`$ data, measured between 1.2 K and 70 mK and reported in Fig. 4. There are three striking features in these data: i) the huge decrease of $`1/T_1`$, by up to 5 orders of magnitude, at $`H=`$ 8.0 T and 10.85 T; this decrease can be fitted by a power law ($`1/T_1\propto T^5`$); ii) the increase of $`1/T_1`$, attributed to the divergence of the spin correlation functions , stops around 1.3 K for all field values between $`H_{c1}`$ and $`H_{c2}`$; iii) $`1/T_1`$ starts decreasing before the onset of the 3D transition (as detected by the modification of the lineshape).
We now discuss the possible nature of this 3D ordered ground state. From a theoretical point of view, the spin-ladder Hamiltonian can be transformed into an interacting spinless fermion model through the canonical Jordan-Wigner transformation . In this representation, $`H`$ acts as the chemical potential $`\mu `$, and for $`H=H_{c1}`$, $`\mu `$ lies exactly at the bottom of the band. Increasing $`H`$ further fills the band. Since the value of the Fermi wave vector $`k_F`$ is set by the field, $`k_F`$ is incommensurate (IC) with the underlying lattice, except at half filling. Due to the divergence of the spin susceptibility at 2$`k_F`$, the on-site magnetization of the ordered phase is also expected to be incommensurate. Between $`H_{c1}`$ and $`H_{c2}`$, and at sufficiently low $`T`$, the low-energy properties of the system are those of a Luttinger liquid .
In the same field range, the spin-ladder Hamiltonian can also be approximately mapped onto that of an $`XXZ`$ $`S`$=1/2 chain . In this latter representation, an effective spin 1/2 is introduced, whose eigenstates correspond to the singlet and the lowest state of the triplet on a rung, and the effective field $`H_{\mathrm{eff}}`$ is equal to $`H-(H_{c1}+H_{c2})/2`$. In the following discussion, we shall use either the spinless fermion or the XXZ language.
It is well known that there are two possibilities to achieve 3D ordering at finite $`T`$ for quantum spin chains: a transverse magnetic coupling $`J_t`$, leading to some kind of AF order when $`J_t\xi _{\parallel }^2\simeq k_BT`$, or a spin-Peierls (SP) transition in the presence of magneto-elastic coupling . In the latter case, a modulation of the lattice occurs, which is stabilized by the energy gain due to the opening of a gap in the magnetic excitation spectrum. The 3D character of the transition arises, in this case, from the 3D nature of the elastic modes.
The case of a transverse magnetic coupling for an assembly of ladders has been treated by Giamarchi and Tsvelik . In their model, the 3D ordering corresponds to a freezing of the XY degrees of freedom of the triplet states, and below $`T_c`$, the local magnetization $`M_z(R)`$ is incommensurate along the direction of the ladders. A magneto-elastic scenario has been treated by Nagaosa and Murakami , who considered a modulation of the exchange along the rungs $`J_{\perp }`$, and by Calemczuk et al., who found that a modulation of $`J_{\parallel }`$ better explains specific heat data . As in the purely magnetic scenario, the local magnetization $`M_z(R)`$ is IC along the ladder direction, and the 3D ordered phase is in some sense similar to the IC magnetic phase observed in spin-Peierls systems above their threshold field $`H_c`$ . There is, however, a noticeable difference: in regular SP compounds, there is a commensurate (dimerized) phase, which is a collective singlet for $`0<H<H_c`$. In the spin-ladder system, the commensurability occurs for $`H=(H_{c1}+H_{c2})/2`$ (i.e. $`H_{\mathrm{eff}}=0`$). Any extension of the commensurability around this $`H`$ value would correspond to a plateau in the magnetization (which is not observed in Cu<sub>2</sub>(C<sub>5</sub>N<sub>2</sub>H<sub>12</sub>)<sub>2</sub>Cl<sub>4</sub>). Along the same line, the parts of the phase diagram close to $`H_{c1}`$ or $`H_{c2}`$ in the ladder case correspond to a field range close to the saturation of the magnetization in the case of a regular SP system.
We now compare our data to the predictions of these different models. All of them predict an IC modulation of $`M_z(R)`$, giving rise to an infinite number of inequivalent sites. This should transform each NMR line into a double-horned lineshape. Because of the high density of <sup>1</sup>H lines in our spectra, we cannot distinguish whether each line transforms this way or simply splits.
In contrast to the lineshape, the $`T`$-dependence of $`1/T_1`$ is expected to depend strongly on the nature of the ground state. A purely magnetic ground state implies a divergence of $`1/T_1`$ at the transition, which is not observed experimentally.
The increase of $`1/T_1`$ upon cooling, seen in Fig. 4 above $`1.3`$ K, is indeed related to the Luttinger liquid behavior of the gapless 1D system, as explained in . This increase cannot be attributed to critical fluctuations linked to the transition, since it starts too far above $`T_c`$ and, furthermore, we now find that it even stops above $`T_c`$. This is particularly obvious for the data at 7.65 T (right panel of Fig. 4).
In contrast, in a spin-Peierls-like transition, the low-energy spectral weight of AF fluctuations starts being suppressed even above the 3D ordering by the coupling to the elastic degrees of freedom. Hence, there is no divergence, but a rapid decrease of $`1/T_1`$ due to the opening of a gap. This was experimentally observed at the transition between the IC high-field phase and the uniform phase in the spin-Peierls compound CuGeO<sub>3</sub> . Due to the IC nature of the ground state, the relaxation rate below $`T_c`$ should be dominated by the phasons, which are the standard Goldstone modes of IC phases, so that the decrease of $`1/T_1`$ is not necessarily thermally activated. This could be the origin of the apparent power law observed here. As shown in Fig. 4, a faster decrease is observed for $`H=10.85`$ T, which corresponds approximately to $`H=(H_{c1}+H_{c2})/2`$, where commensurability occurs. For $`H=7.65`$ T, a field close to $`H_{c1}`$, the decrease is noticeably slower. However, for this field value, $`1/T_1`$ was measured on line I \[protons H(2) and H(4)\], while it was measured on line II for $`H=8.0`$ T and 10.85 T (see inset to Fig. 3). As discussed in Ref. , the corresponding sites have different form factors, and do not probe the same linear combination of the transverse and longitudinal spin-spin correlation functions. So we cannot really attribute the weaker $`T`$-dependence of $`1/T_1`$ at 7.65 T to the proximity to $`H_{c1}`$.
In summary, for approximately the same $`T`$-range where $`1/T_1`$ decreases, we observe displacements of protons located in the hydrogen bonding along the exchange path $`J_{\parallel }`$. These three features, namely i) the absence of a $`1/T_1`$ divergence at $`T_c`$, ii) the decrease of $`1/T_1`$ above $`T_c`$, and iii) evidence for proton displacements in the same $`T`$ range, rule out any model involving solely a magnetic coupling between the ladders , and strongly support a “generalized spin-Peierls” scenario. We also found that the field dependence of $`T_c`$ is consistent with the predictions of a Bose condensation type of transition ($`T_c\propto (H-H_{c1})^{2/3}`$).
It must be stressed that NMR spectra only tell us about the time-averaged displacements of these protons. Were the displacements of protons H(2) and H(4) purely static, only the value of $`J_{\parallel }`$ would change (and thus that of $`H_{c1}\propto J_{\perp }-J_{\parallel }`$). The dynamics, evidenced by the divergence of $`1/T_1`$ in the 1D regime, would not be affected. To alter the magnetic excitation spectrum, a dynamical modulation of the position of these protons must be present. In other words, they have to participate in some phonon mode coupled to the magnetic excitations and leading to a dynamic modulation of $`J_{\parallel }`$. This magneto-elastic coupling appears prior to the “spin-Peierls transition” at $`T_c(H)`$ and readily explains the change in the $`T`$-dependence of $`1/T_1`$ above $`T_c`$. The freezing of this collective mode would finally lead to a static IC modulation of the positions of the protons along the legs of the ladder.
We note that preliminary data show that anomalous shifts, related to proton movements, are still observed very close to $`H_{c1}`$, where $`T_c\to 0`$. This can be explained by the fact that, already at $`H=H_{c1}`$, it is energetically favorable for the system to suppress the quantum magnetic fluctuations through the magneto-elastic coupling. However, ordering cannot be achieved at finite temperature since there is no finite magnetization along the individual ladders.
We thank Th. Giamarchi for discussions and P. Van der Linden for technical help.
# Density of integral points on algebraic varieties
## 1 Introduction
Let $`K`$ be a number field, $`S`$ a finite set of valuations of $`K`$, including the archimedean valuations, and $`𝒪_S`$ the ring of $`S`$-integers. Let $`X`$ be an algebraic variety defined over $`K`$ and $`D`$ a divisor on $`X`$. We will use $`𝒳`$ and $`𝒟`$ to denote models over $`\mathrm{Spec}(𝒪_S)`$.
We will say that integral points on $`(X,D)`$ (see Section 2 for a precise definition) are potentially dense if they are Zariski dense on some model $`(𝒳,𝒟)`$, after a finite extension of the ground field and after enlarging $`S`$. A central problem in arithmetic geometry is to find conditions insuring potential density (or nondensity) of integral points. This question motivates many interesting and concrete problems in classical number theory, transcendence theory and algebraic geometry, some of which will be presented below.
If we think about general reasons for the density of points, the first idea would be to look for the presence of a large automorphism group. There are many beautiful examples both for rational and integral points, like K3 surfaces given by a bihomogeneous $`(2,2,2)`$ form in $`\mathbb{P}^1\times \mathbb{P}^1\times \mathbb{P}^1`$ or the classical Markov equation $`x^2+y^2+z^2=3xyz`$. However, large automorphism groups are “sporadic” - they are hard to find and usually they are not well behaved in families. There is one notable exception - namely automorphisms of algebraic groups, like tori and abelian varieties.
Thus it is not a surprise that the main geometric reason for the abundance of rational points on varieties treated in the recent papers , , is the presence of elliptic or, more generally, abelian fibrations with multisections having a dense set of rational points and subject to some nondegeneracy conditions. Most of the effort goes into ensuring these conditions.
In this paper we focus on cases when $`D`$ is nonempty. We give a systematic treatment of known approaches to potential density and present several new ideas for proofs. The analogs of elliptic fibrations in log geometry are conic bundles with a bisection removed. We develop the necessary techniques to translate the presence of such structures to statements about density of integral points and give a number of applications.
The paper is organized as follows: in Section 2 we introduce the main definitions and notations. Section 3 is geometrical - we introduce the relevant concepts from the log minimal model program and formulate several geometric problems inspired by questions about integral points. In Section 4, we recall the fibration method and nondegeneracy properties of multisections. We consider approximation methods in Section 5. Section 6 is devoted to the study of integral points on conic bundles with sections and bisections removed. In the final section, we survey the known results concerning potential density for integral point on log K3 surfaces.
Acknowledgements. The first author was partially supported by an NSF Postdoctoral Research Fellowship. The second author was partially supported by the NSA. We benefitted from conversations with Y. André, F. Bogomolov, A. Chambert-Loir, J.-L. Colliot-Thélène, J. Kollár, D. McKinnon, and B. Mazur. We are grateful to P. Vojta for comments that improved the paper, especially Proposition 3.12, and to D.W. Masser for information on specialization of nondegenerate sections. Our approach in Section 6 is inspired by the work of F. Beukers (see and ).
## 2 Generalities
### 2.1 Integral points
Let $`\pi :𝒰\to \mathrm{Spec}(𝒪_S)`$ be a flat scheme over $`𝒪_S`$ with generic fiber $`U`$. An integral point on $`𝒰`$ is a section of $`\pi `$; the set of such points is denoted $`𝒰(𝒪_S)`$.
In the sequel, $`𝒰`$ will be the complement of a reduced effective Weil divisor $`𝒟`$ in a normal proper scheme $`𝒳`$, both generally flat over $`\mathrm{Spec}(𝒪_S)`$. Hence an $`S`$-integral point $`P`$ of $`(𝒳,𝒟)`$ is a section $`s_P:\mathrm{Spec}(𝒪_S)\to 𝒳`$ of $`\pi `$ which does not intersect $`𝒟`$, that is, for each prime ideal $`𝔭\in \mathrm{Spec}(𝒪_S)`$ we have $`s_P(𝔭)\notin 𝒟_𝔭`$. We denote by $`X`$ (resp. $`D`$) the corresponding generic fiber. We generally assume that $`X`$ is a variety (i.e., a geometrically integral scheme); frequently $`X`$ is smooth and $`D`$ is normal crossings. Potential density of integral points on $`(𝒳,𝒟)`$ does not depend on the choice of $`S`$ or on the choices of models over $`\mathrm{Spec}(𝒪_S)`$, so we will not always specify them. Hopefully, this will not create any confusion.
If $`D`$ is empty then every $`K`$-rational point of $`X`$ is an $`S`$-integral point for $`(𝒳,𝒟)`$ (on some model). Every $`K`$-rational point of $`X`$, not contained in $`D`$ is $`S`$-integral on $`(𝒳,𝒟)`$ for $`S`$ large enough. Clearly, for any $`𝒳`$ and $`𝒟`$ there exists a finite extension $`K^{}/K`$ and a finite set $`S^{}`$ of prime ideals in $`𝒪_K^{}`$ such that there is an $`S^{}`$-integral point on $`(𝒳^{},𝒟^{})`$ (where $`𝒳^{}`$ is the basechange of $`𝒳`$ to $`\mathrm{Spec}(𝒪_S^{})`$).
The definition of integral points can be generalized as follows: let $`𝒵`$ be any subscheme of $`𝒳`$, flat over $`𝒪_S`$. An $`S`$-integral point for $`(𝒳,𝒵)`$ is an $`𝒪_S`$-valued point of $`𝒳\setminus 𝒵`$.
### 2.2 Vojta’s conjecture
A pair consists of a proper normal variety $`X`$ and a reduced effective Weil divisor $`D\subset X`$. A morphism of pairs $`\phi :(X_1,D_1)\to (X_2,D_2)`$ is a morphism $`\phi :X_1\to X_2`$ such that $`\phi ^{-1}(D_2)`$ is a subset of $`D_1`$. In particular, $`\phi `$ restricts to a morphism $`X_1\setminus D_1\to X_2\setminus D_2`$. A morphism of pairs is dominant if $`\phi :X_1\to X_2`$ is dominant. If $`(X_1,D_1)`$ dominates $`(X_2,D_2)`$ then integral points are dense on $`(X_2,D_2)`$ when they are dense on $`(X_1,D_1)`$ (after choosing appropriate integral models). A morphism of pairs is proper if $`\phi :X_1\to X_2`$ is proper and the restriction $`X_1\setminus D_1\to X_2\setminus D_2`$ is also proper; equivalently, we may assume that $`\phi :X_1\to X_2`$ is proper and $`D_1`$ is a subset of $`\phi ^{-1}(D_2)`$. A resolution of the pair $`(X,D)`$ is a proper morphism of pairs $`\rho :(\stackrel{~}{X},\stackrel{~}{D})\to (X,D)`$ such that $`\rho :\stackrel{~}{X}\to X`$ is birational, $`\stackrel{~}{X}`$ is smooth, and $`\stackrel{~}{D}`$ is normal crossings.
Let $`X`$ be a normal proper variety of dimension $`d`$. Recall that a Cartier divisor $`D\subset X`$ is big if $`h^0(𝒪_X(nD))>Cn^d`$ for some $`C>0`$ and all $`n`$ sufficiently large and divisible.
###### Definition 2.1
A pair $`(X,D)`$ is of log general type if it admits a resolution $`\rho :(\stackrel{~}{X},\stackrel{~}{D})\to (X,D)`$ with $`\omega _{\stackrel{~}{X}}(\stackrel{~}{D})`$ big.
Let us remark that the definition does not depend on the resolution.
###### Conjecture 2.2
(Vojta, ) Let $`(X,D)`$ be a pair of log general type. Then integral points on $`(X,D)`$ are not potentially dense.
This conjecture is known when $`X`$ is a semiabelian variety (, , ). Vojta’s conjecture implies that a pair with dense integral points cannot dominate a pair of log general type.
We are interested in geometric conditions which would insure potential density of integral points. The most naive statement would be the direct converse to Vojta’s conjecture. However this cannot be true even when $`D=\emptyset `$. Indeed, varieties which are not of general type may dominate varieties of general type, or more generally, admit finite étale covers which dominate varieties of general type (see the examples in ). In the next section we will analyze other types of covers with the same arithmetic property.
## 3 Geometry
### 3.1 Morphisms of pairs
###### Definition 3.1
We will say that a class of dominant morphisms of pairs $`\phi :(X_1,D_1)\to (X_2,D_2)`$ is arithmetically continuous if the density of integral points on $`(X_2,D_2)`$ implies potential density of integral points on $`(X_1,D_1)`$.
For example, assume that $`D=\emptyset `$. Then any projective bundle in the Zariski topology $`P\to X`$ is arithmetically continuous. In the following sections we present other examples of arithmetically continuous morphisms of pairs.
###### Definition 3.2
A pseudo-étale cover of pairs $`\phi :(X_1,D_1)\to (X_2,D_2)`$ is a proper dominant morphism of pairs such that
a) $`\phi :X_1X_2`$ is generically finite, and
b) the map from the normalization $`X_2^{\mathrm{norm}}`$ of $`X_2`$ (in the function field of $`X_1`$) onto $`X_2`$ is étale away from $`D_2`$.
###### Remark 3.3
For every pair $`(X,D)`$ there exists a birational pseudo-étale morphism $`\phi :(\stackrel{~}{X},\stackrel{~}{D})\to (X,D)`$ such that $`\stackrel{~}{X}`$ is smooth and $`\stackrel{~}{D}`$ is normal crossings.
The following theorem is a formal generalization of the well-known theorem of Chevalley-Weil. It shows that potential density is stable under pseudo-étale covers of pairs.
###### Theorem 3.4
Let $`\phi :(X_1,D_1)\to (X_2,D_2)`$ be a pseudo-étale cover of pairs. Then $`\phi `$ is arithmetically continuous.
###### Remark 3.5
An elliptic fibration $`E\to X`$, isotrivial on $`X\setminus D`$, is arithmetically continuous. Indeed, it splits after a pseudo-étale morphism of pairs and we can apply Theorem 3.4.
The following example is an integral analog of the example of Skorobogatov, Colliot-Thélène and Swinnerton-Dyer () of a variety which does not dominate a variety of general type but admits an étale cover which does.
###### Example 3.6
Consider $`\mathbb{P}^1\times \mathbb{P}^1`$ with coordinates $`(x_1,y_1),(x_2,y_2)`$ and involutions
$$j_1(x_1,y_1)=(-x_1,y_1)\qquad j_2(x_2,y_2)=(y_2,x_2)$$
on the factors. Let $`j`$ be the induced involution on the product; it has fixed points
$$\begin{array}{cc}x_1=0,\hfill & x_2=y_2\hfill \\ x_1=0,\hfill & x_2=-y_2\hfill \\ y_1=0,\hfill & x_2=y_2\hfill \\ y_1=0,\hfill & x_2=-y_2\hfill \end{array}.$$
The first projection induces a map of quotients
$$(\mathbb{P}^1\times \mathbb{P}^1)/j\to \mathbb{P}^1/j_1.$$
We use $`X`$ to denote the source; the target is just $`\mathrm{Proj}(K[x_1^2,y_1])\simeq \mathbb{P}^1`$. Hence we obtain a fibration $`f:X\to \mathbb{P}^1`$. Note that $`f`$ has two nonreduced fibers, corresponding to $`x_1=0`$ and $`y_1=0`$ respectively.
$$(x_1=0)(y_1=0)(x_2=m_2y_2)(x_2=m_1y_2)$$
(where $`m_1,m_2`$ are nonzero and distinct). Since $`D`$ intersects the general fiber of $`f`$ in just two points, $`(X,D)`$ is not of log general type.
We can represent $`X`$ as a degenerate quartic Del Pezzo surface with four $`A_1`$ singularities (see figure 1).
If we fix invariants
$$a=x_1^2x_2y_2,\quad b=x_1^2(x_2^2+y_2^2),\quad c=x_1y_1(x_2^2-y_2^2),\quad d=y_1^2(x_2^2+y_2^2),\quad e=y_1^2x_2y_2$$
then $`X`$ is given as a complete intersection of two quadrics:
$$ad=be,\quad c^2=bd-4ae.$$
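These identities can be verified mechanically; a quick symbolic check (a sketch using sympy):

```python
# Symbolic check that the invariants satisfy ad = be and c^2 = bd - 4ae.
from sympy import symbols, simplify

x1, y1, x2, y2 = symbols('x1 y1 x2 y2')
a = x1**2 * x2 * y2
b = x1**2 * (x2**2 + y2**2)
c = x1 * y1 * (x2**2 - y2**2)
d = y1**2 * (x2**2 + y2**2)
e = y1**2 * x2 * y2

print(simplify(a*d - b*e))             # 0
print(simplify(c**2 - (b*d - 4*a*e)))  # 0
```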
The components of $`D`$ satisfy the equations
$`D_1`$ $`=`$ $`\{a=b=c=0\}`$
$`D_2`$ $`=`$ $`\{c=d=e=0\}`$
$`D_3`$ $`=`$ $`\{(1+m_1^2)a-m_1b=(1+m_1^2)e-m_1d=0\}`$
$`D_4`$ $`=`$ $`\{(1+m_2^2)a-m_2b=(1+m_2^2)e-m_2d=0\}.`$
We claim that $`(X,D)`$ does not admit a dominant map onto a variety of log general type and that there exists a pseudo-étale cover of $`(X,D)`$ which does. Indeed, the preimage of $`X\setminus D`$ in $`\mathbb{P}^1\times \mathbb{P}^1`$ is
$$(𝔸^1\setminus 0)\times (\mathbb{P}^1\setminus \{m_1,m_2,1/m_1,1/m_2\}),$$
which dominates a curve of log general type, namely $`\mathbb{P}^1`$ minus four points. However, $`(X,D)`$ itself cannot dominate a curve of log general type. Any such curve must be rational, with at least three points removed; however, the boundary $`D`$ contains at most two mutually disjoint irreducible components.
The following was put forward as a possible converse to Vojta’s conjecture.
###### Problem 3.7 (Strong converse to Vojta’s conjecture)
Assume that the pair $`(X_2,D_2)`$ does not admit a pseudo-étale cover $`(X_1,D_1)\to (X_2,D_2)`$ such that $`(X_1,D_1)`$ dominates a pair of log general type. Are integral points for $`(X_2,D_2)`$ potentially dense?
### 3.2 Projective bundles in the étale topology
We would like to produce further classes of dominant arithmetically continuous morphisms $`(X_1,D_1)\to (X_2,D_2)`$.
###### Theorem 3.8
Let $`\phi :(X_1,D_1)\to (X_2,D_2)`$ be a projective morphism of pairs such that $`\phi `$ is a projective bundle (in the étale topology) over $`X_2\setminus D_2`$. We also assume that $`\phi ^{-1}(D_2)=D_1`$. Then $`\phi `$ is arithmetically continuous.
Proof. We are very grateful to Prof. Colliot-Thélène for suggesting this proof.
Choose models $`(𝒳_i,𝒟_i)`$ ($`i=1,2`$) over some ring of integers $`𝒪_S`$, so that the morphism $`\phi `$ is well-defined and satisfies our hypotheses. (We enlarge $`S`$ as necessary.)
We recall basic properties of the Brauer group $`\mathrm{Br}(𝒪_S)`$. Let $`v`$ denote a place of the quotient field $`K`$ and $`K_v`$ the corresponding completion. Class field theory gives the following exact sequence
$$0\to \mathrm{Br}(𝒪_S)\to \mathrm{Br}(K)\to \underset{v\notin S}{\bigoplus }\mathrm{Br}(K_v).$$
The Brauer groups of the local fields corresponding to nonarchimedean valuations are isomorphic to $`\mathbb{Q}/\mathbb{Z}`$. Given a finite extension $`K_w/K_v`$ of degree $`n`$, the induced map on Brauer groups is multiplication by $`n`$.
Each $`𝒪_S`$-integral point of $`(X_2,D_2)`$ yields an element of $`\mathrm{Br}(𝒪_S)`$ of order $`r`$. This gives elements of $`\mathrm{Br}(K_v)`$ which are zero unless $`v\in S`$. It suffices to find an extension $`K^{}/K`$ inducing a cyclic extension of $`K_v`$ of order divisible by $`r`$ for all $`v\in S`$. Indeed, such an extension necessarily kills any element of $`\mathrm{Br}(𝒪_S)`$ of order $`r`$. $`\mathrm{}`$
###### Remark 3.9
Let $`X`$ be a smooth simply connected projective variety which does not dominate a variety of general type. It may admit a projective bundle (in the étale topology) $`\phi :P\to X`$, for example if $`X`$ is a K3 surface. However, $`P`$ cannot dominate a variety of general type. Indeed, given a dominant morphism $`\pi :P\to Y`$ onto a variety of general type, the fibers of $`\phi `$ are mapped to points by $`\pi `$. In particular, $`\pi `$ necessarily factors through $`\phi `$. (We are grateful to J. Kollár for emphasizing this point.)
###### Problem 3.10 (Geometric counterexamples to Problem 3.7)
Are there pairs which do not admit pseudo-étale covers dominating pairs of log general type but which do admit arithmetically continuous covers dominating pairs of log general type?
### 3.3 Punctured varieties
In Section 3.1 we have seen that potential density of integral points is preserved under pseudo-étale covers. It is not an easy task, in general, to check whether or not some given variety (like an elliptic surface) admits a (pseudo-) étale cover dominating a variety of general type. What happens if we modify the variety (or pair) without changing the fundamental group?
###### Problem 3.11 (Geometric puncturing problem)
Let $`X`$ be a projective variety with canonical singularities and $`Z`$ a subvariety of codimension $`\ge 2`$. Assume that no (pseudo-) étale cover of $`(X,\emptyset )`$ dominates a variety of general type. Then $`(X,Z)`$ admits no pseudo-étale covers dominating a pair of log general type. A weaker version would be to assume that $`X`$ and $`Z`$ are smooth.
By definition, a pseudo-étale cover of $`(X,Z)`$ is a pseudo-étale cover of a pair $`(X^{},D^{})`$, where $`X^{}`$ is proper over $`X`$ and $`X^{}\setminus D^{}\simeq X\setminus Z`$.
###### Proposition 3.12
Assume $`X`$ and $`Z`$ are as in Problem 3.11, and that $`X`$ is smooth. Then
a) No pseudo-étale covers of $`(X,Z)`$ dominate a curve of log general type.
b) No pseudo-étale covers of $`(X,Z)`$ dominate a variety of log general type of the same dimension.
Proof. Suppose we have a pseudo-étale cover $`\rho :(X_1,D_1)\to (X,Z)`$ and a dominant morphism $`\phi :(X_1,D_1)\to (X_2,D_2)`$ to a variety of log general type. By Remark 3.3, we may take the $`X_i`$ smooth and the $`D_i`$ normal crossings. Since $`D_1`$ is exceptional with respect to $`\rho `$, Iitaka’s Covering Theorem ( Theorem 10.5) yields an equality of Kodaira dimensions
$$\kappa (K_X)=\kappa (K_{X_1}+D_1).$$
Assume first that $`X_2`$ is a curve. We claim it has genus zero or one. Let $`X^{\mathrm{norm}}`$ be the normalization of $`X`$ in the function field of $`X_1`$. The induced morphism $`g:X^{\mathrm{norm}}\to X`$ is finite, surjective, and branched only over $`Z`$, a codimension $`\ge 2`$ subset of $`X`$. Since $`X`$ is smooth, it follows that $`g`$ is étale. If $`X_2`$ has genus $`\ge 2`$ then $`\phi :X_1\to X_2`$ is constant along the fibers of $`X_1\to X^{\mathrm{norm}}`$, and thus descends to a map $`X^{\mathrm{norm}}\to X_2`$. This would contradict our assumption that no étale cover of $`X`$ dominates a variety of general type.
Choose a point $`p\in D_2`$ and consider the divisor $`F=\phi ^{-1}(p)`$. Note that $`2F`$ moves because $`2p`$ moves on $`X_2`$. However, $`2F`$ is supported in $`D_1`$, which lies in the exceptional locus for $`\rho `$, and we obtain a contradiction.
Now assume $`\phi `$ is generically finite. We apply the Logarithmic Ramification Formula to $`\phi `$ (see Theorem 11.5)
$$K_{X_1}+D_1=\phi ^{*}(K_{X_2}+D_2)+R$$
where $`R`$ is the (effective) logarithmic ramification divisor. Applying the Covering Theorem again, we find that $`\kappa (K_{X_1}+D_1-R)=\kappa (K_{X_2}+D_2)=\mathrm{dim}(X)`$. It follows that $`K_{X_1}+D_1`$ is also big, which contradicts the assumption that $`X`$ is not of general type. $`\mathrm{}`$
###### Problem 3.13 (Arithmetic puncturing problem)
Let $`X`$ be a projective variety with canonical singularities and $`Z`$ a subvariety of codimension $`\ge 2`$. Assume that rational points on $`X`$ are potentially dense. Are integral points on $`(X,Z)`$ potentially dense?
For simplicity, one might first assume that $`X`$ and $`Z`$ are smooth. Note that some conditions on the singularities of $`X`$ are necessary. For example, blow up $`\mathbb{P}^2`$ in 20 points lying along a smooth quartic curve $`C`$. Assume that the divisor class of the points equals $`5H`$, where $`H`$ is the hyperplane class of $`C`$. Then the linear series of quintics containing the 20 points gives a birational map contracting $`C`$. Let $`X`$ be the resulting surface and $`Z`$ the singular point. Rational points on $`X`$ are dense but density of integral points on $`(X,Z)`$ would contradict Vojta’s conjecture.
###### Remark 3.14
Assume that Problem 3.13 has a positive solution. Then potential density of rational points holds for all K3 surfaces.
Indeed, if $`Y`$ is a K3 surface of degree $`2n`$ then potential density of rational points holds for the symmetric product $`X=Y^{(n)}`$ (see ). Denote by $`Z`$ the large diagonal in $`X`$ and by $`\mathrm{\Delta }`$ the large diagonal in $`Y^n`$ (the ordinary product). Assume that integral points on $`(X,Z)`$ are potentially dense. Then, by Theorem 3.4 integral points on $`(Y^n,\mathrm{\Delta })`$ are potentially dense. This implies potential density for rational points on $`Y`$.
## 4 The fibration method and nondegenerate multisections
This section is included as motivation. Let $`B`$ be an algebraic variety, defined over a number field $`K`$, and $`\pi :G\to B`$ a group scheme over $`B`$. We will be mostly interested in the case when the generic fiber is an abelian variety or a split torus $`𝔾_m^n`$. Let $`s`$ be a section of $`\pi `$. Shrinking the base, we may assume that all fibers of $`G`$ are smooth. We will say that $`s`$ is nondegenerate if $`\bigcup _ns^n`$ is Zariski dense in $`G`$.
###### Problem 4.1 (Specialization)
Assume that $`G\to B`$ has a nondegenerate section $`s`$. Describe the set of $`b\in B(K)`$ such that $`s(b)`$ is nondegenerate in the fiber $`G_b`$.
For simple abelian varieties over a field, a point of infinite order is nondegenerate. If $`E\to B`$ is a Jacobian elliptic fibration with a section $`s`$ of infinite order then this section is automatically nondegenerate, and $`s(b)`$ is nondegenerate if it is nontorsion. By a result of Néron (see 11.1), the set of $`b\in B(K)`$ such that $`s(b)`$ is not of infinite order is thin; this holds true for abelian fibrations of arbitrary dimension.
For abelian fibrations $`A\to B`$ with higher-dimensional fibers, one must also understand how rings of endomorphisms specialize. The set of $`b\in B(K)`$ for which the restriction
$$\mathrm{End}(A)\to \mathrm{End}(A(b))$$
fails to be surjective is also thin; this is a result of Noot ( Corollary 1.5). In particular, a nondegenerate section of a family of generically simple abelian varieties specializes to a nondegenerate point outside a thin set of fibers.
More generally, given an arbitrary abelian fibration $`A\to B`$ and a nondegenerate section $`s`$, the set of $`b\in B(K)`$ such that $`s(b)`$ is degenerate is thin in $`B`$. (We are grateful to Masser for pointing out the proof.) After replacing $`A`$ by an isogenous abelian variety and taking a finite extension of the function field $`K(B)`$, we obtain a family $`A^{}\to B^{}`$ with $`A^{}\simeq A_1^{r_1}\times \cdots \times A_m^{r_m}`$, where the $`A_j`$ are (geometrically) simple and mutually non-isogenous. By the Theorems of Néron and Noot, the $`A_j(b^{})`$ are simple and mutually non-isogenous away from some thin subset of $`B^{}`$. A section $`s^{}`$ of $`A^{}\to B^{}`$ is nondegenerate iff its projection onto each factor $`A_j^{r_j}`$ is nondegenerate; for $`b^{}`$ not contained in our thin subset, $`s^{}(b^{})`$ is nondegenerate iff its projection onto each $`A_j^{r_j}(b^{})`$ is nondegenerate. Hence we are reduced to proving the claim for each $`A_j^{r_j}`$. Since $`A_j`$ is simple, a section $`s_j`$ of $`A_j^{r_j}`$ is nondegenerate iff its projections $`s_{j,1},\dots ,s_{j,r_j}`$ are linearly independent over $`\mathrm{End}(A_j)`$; away from a thin subset of $`B^{}`$, the same statement holds for the specializations to $`b^{}`$. However, Néron’s theorem implies that $`s_{j,1}(b^{}),\dots ,s_{j,r_j}(b^{})`$ are linearly independent away from a thin subset.
###### Remark 4.2
There are more precise versions of Néron’s Theorem due to Demyanenko, Manin and Silverman (see , for example). Masser has proposed another notion of what it means for a subset of $`B(K)`$ to be small, known as ‘sparsity’. For instance, the endomorphism ring of a family of abelian varieties changes only on a ‘sparse’ set of rational points of the base (see ). For an analogue to Néron’s Theorem, see .
Similar results hold for algebraic tori and are proved using a version of Néron’s Theorem for $`𝔾_m^n`$-fibrations (see pp. 154). A sharper result (for 1-dimensional bases $`B`$) can be obtained from the following recent theorem:
###### Theorem 4.3
() Let $`C`$ be an absolutely irreducible curve defined over a number field $`K`$ and $`x_1,\dots ,x_r`$ rational functions in $`K(C)`$, multiplicatively independent modulo constants. Then the set of algebraic points $`p\in C(\overline{\mathbb{Q}})`$ such that $`x_1(p),\dots ,x_r(p)`$ are multiplicatively dependent has bounded height.
The main idea of the papers , , can be summarized as follows. We work over a number field $`K`$ and we assume that all geometric data are defined over $`K`$. Let $`\pi :E\to B`$ be a Jacobian elliptic fibration over a one-dimensional base $`B`$. This means that we have a family of curves of genus 1 and a global zero section, so that every fiber is in fact an elliptic curve. Suppose that we have another section $`s`$ which is of infinite order in the Mordell-Weil group $`E(K(B))`$. The specialization results mentioned above show that for a Zariski dense set of $`b\in B(K)`$ the restriction $`s(b)`$ is of infinite order in the corresponding fiber $`E_b`$. If $`K`$-rational points on $`B`$ are Zariski dense then rational points on $`E`$ are Zariski dense as well.
Let us consider the situation when $`E`$ does not have any sections but instead has a multisection $`M`$. By definition, a multisection (resp. rational multisection) $`M`$ is irreducible and the induced map $`M\to B`$ is finite flat (resp. generically finite) of degree $`\mathrm{deg}(M)`$. The base-changed family $`E\times _BM\to M`$ has the identity section $`\mathrm{Id}`$ (i.e., the image of the diagonal under $`M\times _BM\to E\times _BM`$) and a (rational) section
$$\tau _M:=\mathrm{deg}(M)\mathrm{Id}-\mathrm{Tr}(M\times _BM)$$
where $`\mathrm{Tr}(M\times _BM)`$ is obtained (over the generic point) by summing all the points of $`M\times _BM`$. By definition, $`M`$ is nondegenerate if $`\tau _M`$ is nondegenerate.
When we are concerned only with rational points, we will ignore the distinction between multisections and rational multisections, as every rational multisection is a multisection over an open subset of the base. However, this distinction is crucial when integral points are considered.
If $`M`$ is nondegenerate and if rational points on $`M`$ are Zariski dense then rational points on $`E`$ are Zariski dense (see ).
###### Example 4.4
() Let $`X`$ be a quartic surface in $`\mathbb{P}^3`$ containing a line $`L`$. Consider planes $`\mathbb{P}^2`$ passing through this line. The residual curve has degree 3. Thus we obtain an elliptic fibration on $`X`$ together with the trisection $`L`$. If $`L`$ is ramified in a smooth fiber of this fibration then the multisection is nondegenerate and rational points are Zariski dense.
This argument generalizes to abelian fibrations $`\pi :A\to B`$. However, we do not know of any simple geometric conditions insuring nondegeneracy of a (multi)section in this case. We do know that for any abelian variety $`A`$ over $`K`$ there exists a finite extension $`K^{}/K`$ with a nondegenerate point in $`A(K^{})`$ (see ). This allows us to produce nondegenerate sections over function fields.
###### Proposition 4.5
Let $`Y`$ be a Fano threefold of type $`W_2`$, that is, a double cover of $`\mathbb{P}^3`$ ramified in a smooth surface of degree 6. Then rational points on the symmetric square $`Y^{(2)}`$ are potentially dense.
Proof. Observe that the symmetric square $`Y^{(2)}`$ is birational to an abelian surface fibration over the Grassmannian of lines in $`\mathbb{P}^3`$. This fibration is visualized as follows: consider two generic points in $`Y`$. Their images in $`\mathbb{P}^3`$ determine a line, which intersects the ramification locus in 6 points and lifts to a (hyperelliptic) genus two curve on $`Y`$. On $`Y^{(2)}`$ we have an abelian surface fibration corresponding to the degree 2 component of the relative Picard scheme. Now we need to produce a nondegenerate multisection. Pick two general points $`b_1`$ and $`b_2`$ on the branch surface. The preimages in $`Y`$ of the corresponding tangent planes are K3 surfaces $`\mathrm{\Sigma }_1`$ and $`\mathrm{\Sigma }_2`$, of degree two with ordinary double points at the points of tangency. The surfaces $`\mathrm{\Sigma }_1`$ and $`\mathrm{\Sigma }_2`$ therefore have potentially dense rational points (this was proved in ), as does $`\mathrm{\Sigma }_1\times \mathrm{\Sigma }_2`$. This is our multisection; we claim it is nondegenerate for generic $`b_1`$ and $`b_2`$. Indeed, it suffices to show that given a (generic) point in $`Y^{(2)}`$, there exist $`b_1`$ and $`b_2`$ so that $`\mathrm{\Sigma }_1\times \mathrm{\Sigma }_2`$ contains the point. Observe that through a (generic) point of $`\mathbb{P}^3`$, there pass many tangent planes to the branch surface. $`\mathrm{}`$
###### Remark 4.6
Combining the above Proposition with the strong form of Problem 3.13, we obtain potential density of rational points on a Fano threefold of type $`W_2`$, the last family of smooth Fano threefolds for which potential density is not known.
Here is a formulation of the fibration method useful for the analysis of integral points:
###### Proposition 4.7
Let $`B`$ be a scheme over a number field $`K`$, $`G\to B`$ a flat group scheme, $`T\to B`$ an étale torsor for $`G`$, and $`M\subset T`$ a nondegenerate multisection over $`B`$. If $`M`$ has potentially dense integral points then $`T`$ has potentially dense integral points.
Proof. Without loss of generality, we may assume that $`B`$ is geometrically connected and smooth. The base-changed family $`T\times _BM`$ dominates $`T`$, so it suffices to prove density for $`T\times _BM`$. Note that since $`M`$ is finite and flat over $`B`$, $`\tau _M`$ is a well-defined section over all of $`M`$ (i.e., it is not just a rational section). Hence we may reduce to the case of a group scheme $`G\to B`$ with a nondegenerate section $`\tau `$.
We may choose models $`𝒢`$ and $`\mathcal{B}`$ over $`\mathrm{Spec}(𝒪_S)`$ so that $`𝒢`$ is a group scheme with section $`\tau `$. We may also assume that the $`𝒪_S`$-integral points of $`\tau `$ are Zariski dense. The set of multiples $`\tau ^n`$ of $`\tau `$, each a section of $`𝒢`$, is dense in $`𝒢`$ by the nondegeneracy assumption. Since each has dense $`𝒪_S`$-integral points, it follows that $`𝒪_S`$-integral points are Zariski dense. $`\mathrm{}`$
A similar argument proves the following
###### Proposition 4.8
Let $`\phi :X\to \mathbb{P}^1`$ be a K3 surface with elliptic fibration. Let $`M`$ be a multisection over its image $`\phi (M)`$, nondegenerate and contained in the smooth locus of $`\phi `$. Let $`F_1,\dots ,F_n`$ be fibers of $`\phi `$ and $`D`$ a divisor supported in these fibers and disjoint from $`M`$. If $`M`$ has potentially dense integral points then $`(X,D)`$ has potentially dense integral points.
Proof. We emphasize that $`X`$ is automatically minimal and the fibers of $`\phi `$ are reduced (see ). Our assumptions imply that $`M`$ is finite and flat over $`\phi (M)`$.
After base-changing to $`M`$, we obtain a Jacobian elliptic fibration $`X^{}:=X\times _{\mathbb{P}^1}M`$ with identity and a nondegenerate section $`\tau _M`$. Let $`G\subset X^{}`$ be the open subset equal to the connected component of the identity. Since $`D^{}:=D\times _{\mathbb{P}^1}M`$ is disjoint from the identity, it is disjoint from $`G`$. Hence it suffices to show that $`G`$ has potentially dense integral points.
We assumed that $`M`$ is contained in the smooth locus of $`\phi `$, so $`\tau _M`$ is contained in the grouplike part of $`X^{}`$, and some multiple of $`\tau _M`$ is contained in $`G`$. Repeating the argument for Proposition 4.7 gives the result. $`\mathrm{}`$
## 5 Approximation techniques
In this section we prove potential density of integral points for certain pairs $`(X,D)`$ using congruence conditions to control intersections with the boundary. Several of these examples are included as support for the statement of Problem 3.13.
###### Proposition 5.1
Let $`G=\prod _{j=1}^NG_j`$ where the $`G_j`$ are algebraic tori $`𝔾_m`$ or geometrically simple abelian varieties. Let $`Z`$ be a subvariety in $`G`$ of codimension $`>\mu =\mathrm{max}_j(\mathrm{dim}(G_j))`$ and let $`U=G\setminus Z`$ be the complement. Then integral points on $`U`$ are potentially dense.
Proof. We are grateful to D. McKinnon for inspiring the following argument.
The proof proceeds by induction on the number of components $`N`$. The base case $`N=1`$ follows from the fact that rational points on tori and abelian varieties are potentially dense, so we proceed with the inductive step. Consider the projections $`\pi ^{}:G\to G^{}=\prod _{j\ne N}G_j`$ and $`\pi _N:G\to G_N`$. By assumption, generic fibers of $`\pi ^{}`$ are geometrically disjoint from $`Z`$.
Choose a ring of integers $`𝒪_S`$ and models $`𝒢_j`$ over $`\mathrm{Spec}(𝒪_S)`$. We assume that each $`𝒢_j`$ is smooth over $`\mathrm{Spec}(𝒪_S)`$ and that $`𝒢_N`$ has a nondegenerate point $`q`$ (see , for example, for a proof of the existence of such points on abelian varieties).
Let $`𝒯`$ be any subscheme of $`𝒢_N`$ supported over a finite subset of $`\mathrm{Spec}(𝒪_S)`$ such that $`𝒢_N`$ has an $`𝒪_S`$-integral point $`p_N`$ disjoint from $`𝒯`$. We claim that such integral points are Zariski dense. Indeed, for some $`m>0`$ we have
$$mq\equiv 0\ (\mathrm{mod}\ 𝔭)$$
for each $`𝔭\in \mathrm{Spec}(𝒪_S)`$ over which $`𝒯`$ has support. Hence we may take the translations of $`p_N`$ by multiples of $`mq`$.
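A toy version of this congruence trick, sketched for $`G_N=𝔾_m`$ with the group written multiplicatively (so $`mq`$ becomes $`q^m`$); here $`q=2`$ and the primes 5, 7, 11 play the role of the support of $`𝒯`$:

```python
# For a unit q of infinite order and bad primes p_i (where T is supported),
# m = lcm of the multiplicative orders of q mod p_i gives q^m = 1 mod each p_i,
# so translating an integral point by powers of q^m never meets T.
from math import lcm

def mult_order(q, p):
    k, x = 1, q % p
    while x != 1:
        x, k = (x * q) % p, k + 1
    return k

q, bad_primes = 2, [5, 7, 11]
m = lcm(*(mult_order(q, p) for p in bad_primes))
print(m, [pow(q, m, p) for p in bad_primes])  # 60 [1, 1, 1]
```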
After extending $`𝒪_S`$, we may assume $`U`$ has at least one integral point $`p=(p^{},p_N)`$ such that $`(\pi ^{})^{-1}(p^{})`$ and $`\pi _N^{-1}(p_N)`$ intersect $`Z`$ in the expected dimensions. In particular, $`(\pi ^{})^{-1}(p^{})`$ is disjoint from $`Z`$. By the inductive hypothesis, we may extend $`𝒪_S`$ so that
$$(\pi _N^{-1}(p_N)\simeq 𝒢^{},\ \pi _N^{-1}(p_N)\cap 𝒵)$$
has dense integral points. In particular, almost all such integral points are not contained in $`\pi ^{}(𝒵)`$, a closed proper subscheme of $`𝒢^{}`$. Let $`r`$ be such a point, so that $`F_r=(\pi ^{})^{-1}(r)\simeq 𝒢_N`$ intersects $`𝒵`$ in a subscheme $`𝒯`$ supported over a finite number of primes. Since $`(r,p_N)\in F_r`$ is disjoint from $`𝒯`$, the previous claim implies that the integral points of $`F_r`$ disjoint from $`𝒯`$ are Zariski dense. As $`r`$ varies, we obtain a Zariski dense set of integral points on $`𝒢\setminus 𝒵`$. $`\mathrm{}`$
###### Corollary 5.2
Let $`X`$ be a toric variety and $`Z\subset X`$ a subvariety of codimension $`\ge 2`$, defined over a number field. Then integral points on $`(X,Z)`$ are potentially dense.
Another special case of the Arithmetic puncturing problem 3.13 is the following:
###### Problem 5.3
Are integral points on punctured simple abelian varieties of dimension $`n>1`$ potentially dense?
###### Example 5.4
Potential density of integral points holds for simple abelian varieties punctured in the origin, provided that their ring of endomorphisms contains units of infinite order.
## 6 Conic bundles and integral points
Let $`K`$ be a number field, $`S`$ a finite set of places for $`K`$ (including all the infinite places), $`𝒪_S`$ the corresponding ring of $`S`$-integers, and $`\eta \in \mathrm{Spec}(𝒪_S)`$ the generic point. For each place $`v`$ of $`K`$, let $`K_v`$ be the corresponding complete field and $`𝔬_v`$ the discrete valuation ring (if $`v`$ is nonarchimedean). As before, we use calligraphic letters (e.g., $`𝒳`$) for schemes (usually flat) over $`𝒪_S`$ and roman letters (e.g., $`X`$) for the fiber over $`\eta `$.
### 6.1 Results on linear algebraic groups
Consider a linear algebraic group $`G/K`$. Choose a model $`𝒢`$ for $`G`$ over $`𝒪_S`$, i.e., a flat group scheme of finite type $`𝒢/𝒪_S`$ restricting to $`G`$ at the generic point. This may be obtained by fixing a representation $`G\subset \mathrm{GL}_n(K)`$ (see also §10-11). The $`S`$-rank of $`G`$ (denoted $`\mathrm{rank}(G,𝒪_S)`$) is defined as the rank of the abelian group $`𝒢(𝒪_S)`$ of sections over $`𝒪_S`$. This does not depend on the choice of a model. Indeed, consider two models $`𝒢_1`$ and $`𝒢_2`$ with a birational map $`b:𝒢_1\to 𝒢_2`$; of course, $`b`$ is trivial over the generic point and the proper transform of the identity section $`I_1`$ is the identity. There is a subscheme $`Z\subset \mathrm{Spec}(𝒪_S)`$ with finite support such that the indeterminacy of $`b`$ is in the preimage of $`Z`$. It follows that the sections of $`𝒢_1`$ congruent to $`I_1`$ modulo $`Z`$ have proper transforms which are sections of $`𝒢_2`$. Such sections form a finite-index subgroup of $`𝒢_1(𝒪_S)`$.
Let $`𝔾_m`$ be the multiplicative group over $`\mathbb{Z}`$, i.e., $`\mathrm{Spec}(\mathbb{Z}[x,y]/(xy-1))`$; it can be defined over an arbitrary scheme by extension of scalars. There is a natural projection
$$𝔾_m(\mathbb{Z})\to \mathrm{Spec}(\mathbb{Z}[x])=𝔸_{\mathbb{Z}}^1\subset \mathbb{P}_{\mathbb{Z}}^1$$
so that $`\mathbb{P}_{\mathbb{Z}}^1\setminus 𝔾_m(\mathbb{Z})=\{0,\infty \}`$. A form of $`𝔾_m`$ over $`K`$ is a group scheme $`G/K`$ for which there exists a finite field extension $`K^{}/K`$ and an isomorphism $`G\times _KK^{}\simeq 𝔾_m(K^{})`$. These are classified as follows (see for a complete account). Any group automorphism
$$\alpha :𝔾_m(K^{})\to 𝔾_m(K^{})$$
is either inversion or the identity, depending on whether it exchanges $`0`$ and $`\infty `$. The corresponding automorphism group is smooth, so we may work in the étale topology (see Theorem 3.9). In particular,
$$K\text{-forms of }𝔾_m\leftrightarrow H_{\stackrel{´}{e}t}^1(\mathrm{Spec}(K),\mathbb{Z}/2).$$
Each such form admits a natural open imbedding into a projective curve $`G\subset X`$, generalizing the imbedding of $`𝔾_m`$ into $`\mathbb{P}^1`$. The complement $`D=X\setminus G`$ consists of two points. The Galois action on $`D`$ is given by the cocycle in $`H_{\stackrel{´}{e}t}^1(\mathrm{Spec}(K),\mathbb{Z}/2)`$ classifying $`G`$.
There is a general formula for the rank due to T. Ono and J.M. Shyr (see , Theorem 6 and ). Let $`T_v`$ denote the completion of $`T`$ at some place $`v`$, $`\widehat{T}`$ and $`\widehat{T}_v`$ the corresponding character groups, and $`\rho (T)`$ (resp. $`\rho (T_v)`$) the number of independent elements of $`\widehat{T}`$ (resp. $`\widehat{T}_v`$). The formula takes the form
$$\mathrm{rank}(T,𝒪_S)=\sum _{v\in S}\rho (T_v)-\rho (T).$$
For forms of $`𝔾_m`$ this is particularly simple. For split forms
$$\mathrm{rank}(𝔾_m,𝒪_S)=\mathrm{\#}\{\text{places }v\in S\}-1.$$
Now let $`G/K`$ be a nonsplit form, corresponding to the quadratic extension $`K^{}/K`$, and $`S^{}`$ the places of $`K^{}`$ lying over the places of $`S`$. Then we have
$$\mathrm{rank}(G,𝒪_S)=\mathrm{\#}\{\text{places }v\in S\text{ completely splitting in }S^{}\}.$$
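As a sanity check on these formulas, the sketch below counts completely split places for the simplest nonsplit forms over K = ℚ, those attached to K′ = ℚ(√d); an odd prime p not dividing d splits completely iff d is a square mod p (the archimedean place, the prime 2, and ramified primes are omitted for simplicity):

```python
# Rank of a nonsplit form of G_m attached to Q(sqrt(d))/Q: count places of S
# splitting completely. Handles odd unramified primes only (a simplification).
from sympy.ntheory import legendre_symbol

def rank_nonsplit_form(d, S_finite):
    return sum(1 for p in S_finite
               if p % 2 == 1 and d % p != 0 and legendre_symbol(d % p, p) == 1)

S_finite = [3, 5, 7, 11, 13]
# For d = -1 (the form attached to Q(i)), p splits iff p = 1 mod 4:
print(rank_nonsplit_form(-1, S_finite))   # 2, from p = 5 and p = 13
```

By contrast, over the same $`S`$ (these five finite places together with the archimedean one) the split torus has rank $`|S|-1=5`$.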
### 6.2 Group actions and integral points
Throughout this subsection, $`𝒳`$ is a normal, geometrically connected scheme and $`𝒳\to \mathrm{Spec}(𝒪_S)`$ a flat projective morphism. Let $`𝒟\subset 𝒳`$ be an effective reduced Cartier divisor. Contrary to our previous conventions, we do not assume that $`𝒟`$ is flat over $`𝒪_S`$. Assume that a linear algebraic group $`G`$ acts on $`X`$ so that $`X\setminus D`$ is a $`G`$-torsor.
###### Proposition 6.1
There exists a model $`𝒢`$ for $`G`$ such that $`𝒢`$ acts on $`𝒳`$ and stabilizes $`𝒟`$.
Proof. Choose an imbedding $`𝒳\subset \mathbb{P}_{𝒪_S}^n`$ and a compatible linearization $`G\subset \mathrm{GL}_{n+1}(K)`$ (see , Ch. 1 §3). Let $`𝒢^{}\subset \mathrm{GL}_{n+1}(𝒪_S)`$ be the resulting integral model of $`G`$, so that $`𝒢^{}`$ stabilizes the ideal of $`𝒳`$ and therefore acts on it. Furthermore, $`𝒢^{}`$ evidently stabilizes the irreducible components of $`𝒟`$ dominating $`𝒪_S`$. The fibral components of $`𝒟`$ are supported over a finite subset of $`\mathrm{Spec}(𝒪_S)`$. We take $`𝒢\subset 𝒢^{}`$ to be the subgroup acting trivially over this subset; it has the desired properties. $`\mathrm{}`$
###### Proposition 6.2
Assume $`(𝒳,𝒟)`$ has an $`𝒪_S`$-integral point and that $`G`$ has positive $`𝒪_S`$-rank. Then $`(𝒳,𝒟)`$ has an infinite number of $`𝒪_S`$-integral points.
Proof. Consider the action of $`𝒢(𝒪_S)`$ on the integral point $`\sigma `$ (which has trivial stabilizer). The orbit consists of $`𝒪_S`$-integral points of $`(𝒳,𝒟)`$, an infinite collection because $`𝒢`$ has positive rank. $`\mathrm{}`$
Now assume that $`X`$ is a smooth rational curve. A rational section (resp. bisection) $`𝒟\subset 𝒳`$ is a reduced effective Cartier divisor such that the generic fiber $`D`$ is reduced of degree one (resp. two). Note that the open curve $`X\setminus D`$ is geometrically isomorphic to $`\mathbb{P}^1\setminus \{\infty \}`$ (resp. $`\mathbb{P}^1\setminus \{0,\infty \}`$), and thus is a torsor for some $`K`$-form $`G`$ of $`𝔾_a`$ (resp. $`𝔾_m`$). This form is easily computed. Of course, $`𝔾_a`$ has no nontrivial forms. In the $`𝔾_m`$ case, we can regard $`D_\eta `$ as an element of $`H_{\stackrel{´}{e}t}^1(\mathrm{Spec}(K),\mathbb{Z}/2)`$, which gives the descent data for $`G`$.
The following result is essentially due to Beukers (see , Theorem 2.3):
###### Proposition 6.3
Let $`(𝒳,𝒟)\to \mathrm{Spec}(𝒪_S)`$ be a rational curve with rational bisection and $`G`$ the corresponding form of $`𝔾_m`$ (as described above). Assume that $`(𝒳,𝒟)`$ has an $`𝒪_S`$-integral point and $`\mathrm{rank}(G,𝒪_S)>0`$. Then $`𝒪_S`$-integral points of $`(𝒳,𝒟)`$ are Zariski dense.
Proof. This follows from Proposition 6.2. Given an $`𝒪_S`$-integral point $`\sigma `$ of $`(𝒳,𝒟)`$, the orbit $`𝒢(𝒪_S)\sigma `$ is infinite and thus Zariski dense. $`\mathrm{}`$
Combining with the formula for the rank, we obtain the following:
###### Corollary 6.4
Let $`(𝒳,𝒟)\to \mathrm{Spec}(𝒪_S)`$ be a rational curve with rational bisection such that $`(𝒳,𝒟)`$ has an $`𝒪_S`$-integral point. Assume that either
a) $`D`$ is reducible over $`\mathrm{Spec}(K)`$ and $`|S|>1`$; or
b) $`D`$ is irreducible over $`\mathrm{Spec}(K)`$ and at least one place in $`S`$ splits completely in $`K(D)`$.
Then $`𝒪_S`$-integral points of $`(𝒳,𝒟)`$ are Zariski dense.
When $`D`$ is a rational section we obtain a similar result (also essentially due to Beukers , Theorem 2.1):
###### Proposition 6.5
Let $`(𝒳,𝒟)\mathrm{Spec}(𝒪_S)`$ be a rational curve with rational section such that $`(𝒳,𝒟)`$ has an $`𝒪_S`$-integral point. Then $`𝒪_S`$-integral points of $`(𝒳,𝒟)`$ are Zariski dense.
### 6.3 $`v`$-adic geometry
For each place $`vS`$, consider the projective space $`^1(K_v)`$ as a manifold with respect to the topology induced by the $`v`$-adic absolute value on $`K_v`$. For simplicity, this will be called the $`v`$-adic topology; we will use the same term for the induced subspace topology on $`^1(K)`$. Given an étale morphism of curves $`f:U^1`$ defined over $`K_v`$, we will say that $`f(U(K_v))`$ is a basic étale open subset. These are open in the $`v`$-adic topology, either by the open mapping theorem (in the archimedean case) or by Hensel’s lemma (in the nonarchimedean case).
Let
$$\chi _f(B):=\mathrm{\#}\{z𝒪_{\{v\}}:|z|_vB\text{ and }zf(U(K_v))\}$$
where $`B`$ is a positive integer and
$$𝒪_{\{v\}}:=\{zK:|z|_w1\text{ for each }wv\}.$$
We would like to estimate the quantity
$$\mu _f:=\underset{B\mathrm{}}{lim\; inf}\chi _f(B)/\chi _{\mathrm{Id}}(B)$$
i.e., the fraction of the integers contained in the image of the $`v`$-adic points of $`U`$.
###### Proposition 6.6
Let $`f:U^1`$ be an étale morphism defined over $`K_v`$ and $`f_1:C^1`$ a finite morphism of smooth curves extending $`f`$. If there exists a point $`qf_1^1(\mathrm{})C(K_v)`$ at which $`f_1`$ is unramified then $`\mu _f=1`$.
Proof. This follows from the fact that $`f(U(K_v))`$ is open if $`f`$ is étale along $`U`$. $`\mathrm{}`$
As an illustrative example, we take $`K=`$ and $`K_v=`$, so that $`𝒪_{\{v\}}=`$. The set $`f(U())`$ is a finite union of open intervals $`(r,s)`$ with $`r,s\{\mathrm{}\}`$, where the (finite) endpoints are branch points. We observe that
$$\mu _f=\{\begin{array}{cc}0\hfill & \text{if }f(U())\text{ is bounded;}\hfill \\ 1/2\hfill & \text{if }\overline{f(U())}\text{ contains a one-sided neighborhood of }\mathrm{}\text{;}\hfill \\ 1\hfill & \text{if }\overline{f(U())}\text{ contains a two-sided neighborhood of }\mathrm{}\text{.}\hfill \end{array}$$
We can read off easily which alternative occurs in terms of the local behavior at infinity. Let $`f_1:C^1`$ be a finite morphism of smooth curves extending $`f`$. If $`f_1^1(\mathrm{})`$ has no real points then $`\mu _f=0`$. If $`f_1^1(\mathrm{})`$ has unramified (resp. ramified) real points then $`\mu _f=1`$ (resp. $`\mu _f>0`$.)
We specialize to the case of double covers:
###### Proposition 6.7
Let $`U^1`$ be an étale morphism defined over $`K_v`$ and $`f_1:C^1`$ a finite morphism of smooth curves extending $`f`$. Assume that $`f_1`$ has degree two and ramifies at $`qf_1^1(\mathrm{})`$. Then $`\mu _f>0`$.
Proof. Of course, $`q`$ is necessarily defined over $`K_v`$. The archimedean case follows from the previous example, so we restrict to the nonarchimedean case. Assume $`f_1`$ is given by
$$y^2=c_nz^n+c_{n1}z^{n1}+\mathrm{}+c_0,$$
where $`z`$ is a coordinate for the affine line in $`^1(K_v)`$, $`c_n0`$, and the $`c_i𝔬_v`$. Substituting $`z=1/t`$ and $`y=x/t^{n/2}`$, we obtain the equation at infinity
$$\{\begin{array}{cc}x^2=c_n+c_{n1}t+\mathrm{}+c_0t^n\hfill & \text{for }n\text{ even}\hfill \\ x^2=c_nt+c_{n1}t^2+\mathrm{}+c_0t^n\hfill & \text{for }n\text{ odd}\hfill \end{array}.$$
If $`n`$ is even then $`f_1^1(\mathrm{})`$ consists of two non-ramified points, so we may assume $`n`$ odd. Then $`f_1^1(\mathrm{})`$ consists of one ramification point $`q`$, necessarily defined over $`K_v`$.
Write $`c_n=u_0\pi ^\alpha `$ and $`z=u_1\pi ^\beta `$, where $`u_0`$ and $`u_1`$ are units and $`\pi `$ is a uniformizer in $`𝔬_v`$. (We may assume that some power $`\pi ^r`$ is contained in $`𝒪_K`$.) Our equation takes the form
$$y^2\pi ^{n\beta \alpha }=u_0u_1^n+c_{n1}u_1^{n1}\pi ^{\beta \alpha }+\mathrm{}+c_0u_1\pi ^{n\beta \alpha }.$$
(1)
We review a property of the $`v`$-adic numbers, (proved in , Ch. XIV §4). Consider the multiplicative group
$$U^{(m)}:=\{u𝔬_v:u1(mod\pi ^m)\}.$$
Then for $`m`$ sufficiently large we have $`U^{(m)}K_v^2`$. In particular, to determine whether a unit $`u`$ is a square, it suffices to consider its representative $`mod\pi ^m`$.
Consequently, if $`\beta `$ is sufficiently large and has the same parity as $`\alpha `$, then we can solve Equation 1 for $`yK_v`$ precisely when $`u_0u_1`$ is a square. For example, choose any $`M𝒪_K`$ so that $`Mu_0\pi ^{(r1)\beta }(mod\pi ^{r\beta })`$ and set $`z=M/\pi ^{r\beta }𝒪_{\{v\}}`$. Hence, of the $`z𝒪_{\{v\}}`$ with $`|z|_vB`$ (with $`B0`$), the fraction satisfying our conditions is bounded from below. It follows that $`\mu _f>0`$. $`\mathrm{}`$
Now let $`f:U^1`$ be an étale morphism of curves defined over $`K`$. Consider the function
$$\omega _{f,S}(B):=\mathrm{\#}\{z𝒪_S:|z|_vB\text{ for each }vS\text{ and }\alpha f(U(K))\}$$
and the quantity
$$\underset{B\mathrm{}}{lim\; sup}\omega _{f,\{v\}}(B)/\chi _f(B).$$
We expect that this is zero provided that $`f`$ does not admit a rational section. We shall prove this is the case when $`f`$ has degree two.
A key ingredient of our argument is a version of Hilbert’s Irreducibility Theorem:
###### Proposition 6.8
Let $`f:U^1`$ be an étale morphism of curves, defined over $`K`$ and admitting no rational section. Then we have
$$\underset{B\mathrm{}}{lim\; sup}\omega _{f,\{v\}}(B)/\chi _{\mathrm{Id}}(B)=0.$$
Proof. We refer the reader to Serre’s discussion of Hilbert’s irreducibility theorem (, §9.6, 9.7). Essentially the same argument applies in our situation. $`\mathrm{}`$
Combining Propositions 6.6, 6.7, and 6.8, we obtain:
###### Corollary 6.9
Let $`f:C^1`$ be a finite morphism of smooth curves defined over $`K`$. Assume that $`f`$ admits no rational section and that $`f^1(\mathrm{})`$ contains a $`K_v`$-rational point. We also assume that $`f`$ has degree two. Then we have
$$\underset{B\mathrm{}}{lim\; sup}\omega _{f,\{v\}}(B)/\chi _f(B)=0.$$
In particular, the set $`\{z𝒪_{\{v\}}:zf(C(K_v))f(C(K))\}`$ is infinite.
### 6.4 A density theorem for surfaces
Geometric assumptions: Let $`𝒳`$ and $``$ be flat and projective over $`\mathrm{Spec}(𝒪_S)`$ and $`\varphi :𝒳`$ be a morphism. Let $`𝒳`$ be a closed irreducible subscheme, $`𝒟𝒳`$ a reduced effective Cartier divisor, and $`𝔮:=𝒟`$. We assume the generic fibers satisfy the following: $`X`$ is a geometrically connected surface, $`B`$ a smooth curve, $`\varphi :XB`$ a flat morphism such that the generic fiber is a rational curve with bisection. We also assume $`L_K^1`$, $`\varphi |L`$ is finite, and $`L`$ meets $`D`$ at a single point $`q`$, at which $`D`$ is nonsingular. Write $`𝒳^{}`$ for $`𝒳\times _{}`$, $`𝒟^{}`$ for $`𝒟\times _{}`$, $`^{}`$ for the image of the diagonal in $`𝒳\times _{}`$ (now a section for $`\varphi ^{}:𝒳^{}`$), and $`𝔮^{}`$ for $`^{}𝒟^{}`$. Finally, if $`𝒞^{}`$ denotes the normalization of the union of the irreducible components of $`𝒟^{}`$ dominating $``$, we assume that $`𝒞^{}`$ has no rational section over $`K`$ (i.e., that $`𝒞^{}`$ is irreducible over $`K`$).
Arithmetic assumptions: We assume that $`(,𝔮)`$ has an $`𝒪_S`$-integral point. Furthermore, we assume that for some $`vS`$, $`C^{}`$ has a $`K_v`$-rational point lying over $`\varphi ^{}(q^{})`$.
###### Remark 6.10
This assumption is valid if any of the following are satisfied:
1. $`DB`$ is unramified at $`q`$.
2. $`DB`$ is finite (but perhaps ramified) at $`q`$ and $`LB`$ has ramification at $`q`$ of odd order.
3. $`DB`$ is finite (but ramified) at $`q`$ and $`LB`$ has ramification at $`q`$ of order two. Choose local uniformizers $`t,x,`$ and $`y`$ so that we have local analytic equations $`t+ax^2=0`$ and $`t+by^2=0`$ (with $`a,bK`$) for $`DB`$ and $`LB`$. We assume that $`ab`$ is a square in $`K_v`$.
Note that in the last case, $`D^{}`$ and $`C^{}`$ have local analytic equations $`ax^2by^2=0`$ and $`x/y=\pm \sqrt{b/a}`$ respectively.
###### Theorem 6.11
Under the geometric and arithmetic assumptions made above, $`𝒪_S`$-integral points of $`(𝒳,𝒟)`$ are Zariski dense.
Proof. It suffices to prove that $`𝒪_S`$-integral points of $`(𝒳^{},𝒟^{})`$ are Zariski dense. These map to $`𝒪_S`$-integral points $`(𝒳,𝒟)`$.
Consider first $`𝒪_S`$-integral points of $`(^{},𝔮^{})`$. These are dense by Proposition 6.5, and contain a finite index subgroup of $`𝔾_a(𝒪_S)_K^1`$. Corollary 6.9 and our geometric assumptions imply that infinitely many of these points lie in $`\varphi ^{}(C^{}(K_v))\varphi ^{}(C^{}(K))`$.
Choose a generic $`𝒪_S`$-integral point $`p`$ of $`(^{},𝔮^{})`$ as described above. Let $`𝒳_p^{}=\varphi _{}^{}{}_{}{}^{1}(p),𝒟_p^{}=𝒳_p^{}𝒟^{},`$ and $`_p^{}=𝒳_p^{}^{}`$, so that $`(𝒳_p^{},𝒟_p^{})`$ is a rational curve with rational bisection and integral point $`_p^{}`$. Combining the results of the previous paragraph with Proposition 6.3, with obtain that $`𝒪_S`$-integral points of $`(𝒳_p^{},𝒟_p^{})`$ are Zariski dense. As we vary $`p`$, we obtain a Zariski dense collection of integral points for $`(𝒳^{},𝒟^{})`$. $`\mathrm{}`$
### 6.5 Cubic surfaces containing a line
Let $`𝒳_1`$ be a cubic surface in $`_{𝒪_S}^3`$, $`𝒟_1𝒳_1`$ a hyperplane section, and $`_1𝒳_1`$ a line not contained in $`𝒟_1`$, all assumed to be flat over $`\mathrm{Spec}(𝒪_S)`$. Write $`𝔮_1:=𝒟_1_1`$, a rational section over $`\mathrm{Spec}(𝒪_S)`$. Let $`_{𝒪_S}^3`$ be the projection associated with $`_1`$, $`𝒳=\mathrm{Bl}__1𝒳_1`$, and $`\varphi :𝒳`$ the induced projection (of course, $`=_{𝒪_S}^1`$ if $`𝒪_S`$ is a UFD). Let $`𝒳`$ be the proper transform of $`_1`$, $`𝒟𝒳`$ the total transform of $`𝒟_1`$, and $`𝔮=𝒟`$. We shall apply Theorem 6.11 to obtain density results for $`𝒪_S`$-integral points of $`(𝒳_1,𝒟_1)`$.
We will need to assume the following geometric conditions:
1. $`D_1`$ is reduced everywhere and nonsingular at $`q_1`$;
2. $`X_1`$ has only rational double points as singularities, with at most one singularity along $`L_1`$.
3. $`D_1`$ is not the union of a line and a conic containing $`q_1`$ (defined over $`K`$).
Using the first two assumptions, we analyze the projection from the line $`L_1`$. This induces a morphism
$$\varphi :X^1.$$
Of course, $`X=X_1`$ if and only if $`L_1`$ is Cartier in $`X_1`$, which is the case exactly when $`X_1`$ is smooth along $`L_1`$. We use $`L`$ to denote the proper transform of $`L_1`$ and $`D`$ to denote the proper transforms of $`L_1`$ and $`D_1`$. Our three assumptions imply that $`D`$ equals the total transform of $`D_1`$ and has a unique irreducible component $`C`$ dominating $`^1`$. We also have that the generic fiber of $`\varphi `$ is nonsingular, intersects $`D`$ in two points, and intersects $`L`$ in two points (if $`X_1`$ is smooth along $`L_1`$) or in one point (if $`X_1`$ has a singularity along $`L_1`$). In particular, $`L`$ is a bisection (resp. section) of $`\varphi `$ if $`X_1`$ is nonsingular (resp. singular) along $`L_1`$.
We emphasize that $`𝒪_S`$-integral points of $`(𝒳,𝒟)`$ map to $`𝒪_S`$-integral points of $`(𝒳_1,𝒟_1)`$, and all the Geometric Assumptions of Theorem 6.11 are satisfied except for the last one. The last assumption is verified if any of the following hold:
1. The branch loci of $`C^1`$ and $`L^1`$ do not coincide.
2. The curve $`C`$ has genus one.
3. $`X_1`$ has a singularity along $`L_1`$.
Clearly, either the second or the third condition implies the first.
We turn next to the Arithmetic Assumptions.
1. $`(_1,𝔮_1)`$ has an $`𝒪_S`$-integral point.
Note that $`𝒪_S`$-integral points of $`(_1,𝔮_1)`$ not lying in the singular locus of $`𝒳_1\mathrm{Spec}(𝒪_S)`$ lift naturally to $`𝒪_S`$-integral points of $`(,𝔮)`$.
Our next task is to translate the conditions of Remark 6.10 to our situation. They are satisfied in any of the following contexts:
1. $`D_1`$ is irreducible over $`K`$ and $`q_1`$ is not a flex of $`D_1`$;
2. $`X_1`$ has a singularity along $`L_1`$;
3. $`D_1`$ is irreducible over $`K`$ and $`q_1`$ is a flex of $`D_1`$. Let $`H`$ be the hyperplane section containing $`L_1`$ and the flex line. We assume that $`HX_1=L_1M`$, where $`M`$ is a smooth conic.
4. $`D_1`$ is irreducible over $`K`$ but $`q`$ is a flex so that the hyperplane $`H`$ containing $`L_1`$ and the flex line $`F`$ intersects $`X_1`$ in three coincident lines, i.e., $`HX_1=L_1M_1M_2`$. Choose local coordinates $`x`$ and $`y`$ for $`H`$ so that $`L_1=\{x=0\},F=\{y=0\},`$ and $`M_1M_2=\{ax^2+cxy+by^2=0\}`$. Then we assume that $`ab`$ is a square in $`K_v`$.
5. $`D_1`$ consists of a line and a conic $`C_1`$ irreducible over $`K`$, intersecting in two distinct points, each defined over $`K_v`$.
In the first case, the map $`DB`$ is unramified at $`q`$. Note that in the second case $`L`$ is a section for $`\varphi `$. In the third case, our assumption implies that $`LB`$ is unramified at $`q`$. In the last case, we observe that the points of $`L`$ lying over $`\varphi (q)`$ are defined over $`K`$, hence $`C^{}`$ has a $`K_v`$-rational point over $`\varphi ^{}(q^{})`$.
It remains to show that AA2d allows us to apply case 3 of Remark 6.10. We fix projective coordinates on $`^3`$ compatibly with the coordinates already chosen on $`H`$: $`y=0`$ is the linear equation for the hyperplane containing $`D_1`$, $`z=0`$ the equation for $`H`$, $`x=z=0`$ the equations for $`L_1`$, and $`x=z=w=0`$ the equations for $`q_1`$. Under our assumptions, the equations for $`D_1`$ and $`X_1`$ take the form
$`g`$ $`:=`$ $`zw^2+ax^3+c_1wxz+c_2wz^2+c_4x^2z+c_5xz^2+c_6z^3=0`$
$`f`$ $`:=`$ $`g+cx^2y+bxy^2+yz\mathrm{}(w,x,y,z)=0`$
where $`\mathrm{}`$ is linear in the variables. The conic bundle structure $`\varphi :XB`$ is obtained by making the substitution $`z=tx`$
$`g^{}`$ $`=`$ $`tw^2+x(wc_1t+wc_2t^2)+x^2(a+c_4t+c_5t^2+c_6t^3)=0`$
$`f^{}`$ $`=`$ $`g^{}+cxy+by^2+ty\mathrm{}(w,x,y,tx).`$
We analyze the local behavior of $`DB`$ at $`q`$ using $`x`$ as a coordinate for $`D`$. First dehomogenize
$$g^{\prime \prime }=t+x(c_1t+c_2t^2)+x^2(a+c_4t+c_5t^2+c_6t^3)=0$$
and then take a suitable analytic change of coordinate on $`D`$ to obtain $`t+aX^2=0`$. To analyze $`LB`$, we set $`x=0`$ and use $`y`$ as a coordinate
$$f^{\prime \prime }=t+by^2+ty\mathrm{}(1,0,y,0)=0.$$
After a suitable analytic change of coordinate on $`L`$, we obtain $`t+bY^2=0`$.
###### Remark 6.12
We further analyze condition AA2d when $`K_v=`$. Then $`ab`$ is a square if and only if $`ab0`$. This is necessarily the case if $`c^24ab<0`$, i.e., if the lines $`M_1`$ and $`M_2`$ are defined over an imaginary quadratic extension.
We summarize our discussion in the following theorem:
###### Theorem 6.13
Let $`𝒳_1`$ be a cubic surface, $`𝒟_1𝒳_1`$ a hyperplane section, and $`_1𝒳_1`$ a line not contained in $`𝒟_1`$, all assumed to be flat over $`\mathrm{Spec}(𝒪_S)`$. Write $`𝔮_1:=𝒟_1_1`$. Assume the following:
1. GA1,GA2,GA3, and AA1;
2. at least one of the assumptions GA4a,GA4b,or GA4c;
3. at least one of the assumptions AA2a,AA2b,AA2c,AA2d, or AA2e.
Then $`𝒪_S`$-integral points of $`(𝒳_1,𝒟_1)`$ are Zariski dense.
We recover the following result (essentially Theorem 2 of Beukers ):
###### Corollary 6.14
Let $`𝒳_1`$ be a cubic surface, $`𝒟_1𝒳_1`$ a hyperplane section, and $`_1𝒳_1`$ a line not contained in $`𝒟_1`$, all assumed to be flat over $`\mathrm{Spec}()`$. Write $`𝔮_1:=𝒟_1_1`$. Assume that
1. $`X_1`$ and $`D_1`$ are smooth;
2. there exists an $``$-integral point of $`(_1,𝔮_1)`$;
3. if $`q`$ is a flex of $`D_1`$, we assume that the hyperplane containing $`L_1`$ and the flex line intersects $`X_1`$ in a smooth conic and $`L_1`$.
Then $``$-integral points of $`(𝒳_1,𝒟_1)`$ are Zariski dense.
We also recover a weak version of Theorem 1 of . (This theorem is asserted to be true but the proof is not quite complete; the problem occurs in the argument for the second part of Lemma 2.)
###### Corollary 6.15
Retain all the hypotheses of Corollary 6.14, except that we allow the existence of a hyperplane $`H`$ intersecting $`X_1`$ in three lines $`L_1,M_1,`$ and $`M_2`$ and containing a flex line $`F`$ for $`D_1`$ at $`q`$. Let $`p`$ be a place for $``$ (either infinite or finite). Choose local coordinates $`x`$ and $`y`$ for $`H`$ so that $`L_1=\{x=0\},F=\{y=0\},`$ and $`M_1M_2=\{ax^2+cxy+by^2=0\}`$, and assume that $`ab`$ is a square in $`_p`$. Then $`[1/p]`$-integral points of $`(𝒳_1,𝒟_1)`$ are Zariski dense (where $`[1/\mathrm{}]=`$ and $`_{\mathrm{}}=`$.)
Of course, there are infinitely many primes $`p`$ such that $`ab`$ is a square in $`_p`$. When $`p=\mathrm{}`$, by Remark 6.12 it suffices to verify that $`M_1`$ and $`M_2`$ are defined over an imaginary quadratic extension.
We also obtain results in cases where the boundary is reducible:
###### Corollary 6.16
Let $`𝒳_1`$ be a cubic surface, $`𝒟_1𝒳_1`$ a hyperplane section, and $`_1𝒳_1`$ a line not contained in $`𝒟_1`$, all assumed to be flat over $`\mathrm{Spec}()`$. Write $`𝔮_1:=𝒟_1_1`$. Assume that
1. $`X_1`$ is smooth;
2. there exists an $`𝒪_S`$-integral point of $`(_1,𝔮_1)`$;
3. $`D_1=EC`$, where $`E`$ is a line intersecting $`L_1`$ and $`C`$ is a conic irreducible over $`K`$;
4. $`C`$ intersects $`E`$ in two points, defined over $`K_v`$ where $`v`$ is some place in $`S`$;
5. there exists at most one conic in $`X_1`$ tangent to both $`L_1`$ and $`C`$.
Then $`𝒪_S`$-integral points of $`(𝒳,𝒟)`$ are Zariski dense.
Note that the assumption on the conics tangent to $`L_1`$ and $`C`$ is used to verify GA4a.
### 6.6 Other applications
Theorem 6.11 can be applied in many situations. We give one further example:
###### Theorem 6.17
Let $`𝒳=_{𝒪_S}^1\times _{𝒪_S}^1`$, $`𝒟𝒳`$ a divisor of type $`(2,2)`$, and $`𝒳`$ a ruling of $`𝒳`$, all flat over $`𝒪_S`$. Assume that
1. $`D`$ is nonsingular;
2. $`L`$ is tangent to $`D`$ at $`q`$;
3. $`𝒪_S`$-integral points of $`(,𝔮)`$ are Zariski dense.
Then $`𝒪_S`$-integral points of $`(𝒳,𝒟)`$ are Zariski dense.
Proof. Let $`\varphi `$ be the projection for which $``$ is a section. Since $`𝒞=𝒟`$ in this case, the second arithmetic assumption of Theorem 6.11 is easily satisfied. $`\mathrm{}`$
## 7 Potential density for log K3 surfaces
We consider the following general situation:
###### Problem 7.1 (Integral points of log K3 surfaces)
Let $`X`$ be a surface and $`D`$ a reduced effective Weil divisor such that $`(X,D)`$ has log terminal singularities and $`K_X+D`$ is trivial. Are integral points on $`(X,D)`$ potentially dense?
Problem 7.1 has been studied when $`D=\mathrm{}`$ (see, for example, ). In this case density holds if $`X`$ has infinite automorphisms or an elliptic fibration.
The case $`X=^2`$ and $`D`$ a plane cubic has also attracted significant attention. Silverman proved potential density in the case where $`D`$ is singular and raised the general case as an open question. Beukers established this by considering the cubic surface $`X_1`$ obtained as the triple cover of $`X`$ totally branched over $`D`$.
Implicit in is a proof of potential density when $`X_1`$ is a smooth cubic surface and $`D_1`$ is a smooth hyperplane section. Note that this also follows from Theorem 6.13 (cf. also Corollaries 6.14 and 6.15.) After suitable extensions of $`K`$ and additions to $`S`$, there exists a line $`LX`$ defined over $`K`$ and the relevant arithmetic assumptions are satisfied. Similarly, the case of $`X=^1\times ^1`$ and $`D`$ a smooth divisor of type $`(2,2)`$ follows from Theorem 6.17.
More generally, let $`X`$ be a smooth Del Pezzo surface of index one, i.e., $`K_X`$ is saturated in $`\mathrm{Pic}(X)`$, and with degree $`d:=K_X^24`$. Let $`D`$ be a smooth anticanonical divisor. Choose general points $`W=\{x_1,\mathrm{},x_{d3}\}XD`$, and let $`X_1=\mathrm{Bl}_WX`$ and $`D_1`$ be the proper transform of $`D`$. Hence $`X_1`$ is a cubic surface, $`D_1`$ is a smooth hyperplane section, and the induced map of pairs
$$(X_1,D_1)(X,D)$$
is dominant. Since integral points for $`(X_1,D_1)`$ are potentially dense, the same holds true for $`(X,D)`$.
We summarize our results as follows:
###### Theorem 7.2
Let $`X`$ be a smooth Del Pezzo surface of degree $`3`$ and $`D`$ a smooth anticanonical divisor. Then integral points for $`(X,D)`$ are potentially dense.
We close this section with a list of open special cases of Problem 7.1.
1. Let $`X`$ be a Del Pezzo surface of degree one or two and $`D`$ an anticanonical cycle. Show that integral points for $`(X,D)`$ potentially dense.
2. Let $`X`$ be a Hirzebruch surface and $`D`$ an anticanonical cycle. Find a smooth rational curve $`L`$, intersecting $`D`$ in exactly one point $`p`$, so that the induced map $`\phi :L^1`$ is finite surjective.
### 7.1 Appendix: some geometric remarks
The reader will observe that the methods employed to prove density for integral points on conic bundles (with bisection removed) are not quite analogous to the methods used for elliptic fibrations. The discrepancy can be seen in a number of ways. First, given a multisection $`M`$ for a conic bundle (with bisection removed), we can pull-back the conic bundle to the multisection. The resulting fibration has two rational sections, $`\mathrm{Id}`$ and $`\tau _M`$ (see section 4). However, a priori one cannot control how $`\tau _M`$ intersects the boundary divisor (clearly, this is irrelevant if the boundary is empty). A second explanation may be found in the lack of a good theory of (finite type) Néron models for algebraic tori (see chapter 10 of ).
We should remark that in some special cases these difficulties can be overcome, so that integral points may be obtained by geometric methods completely analogous to those used for rational points. Consider the cubic surface
$$x^3+y^3+z^3=1$$
with distinguished hyperplane at infinity. This surface contains a line with equations $`x+y=z1=0`$. Euler showed that the resulting conic bundle admits a multisection $`(x_0,y_0,z_0)=(9t^4,3t9t^4,19t^3),`$ which may be reparametrized as $`(x_1,y_1,z_1)=(9t^4,3t9t^4,1+9t^3).`$ Lehmer showed that this is the first in a sequence of multisections, given recursively by
$`(x_{n+1},y_{n+1},z_{n+1})`$ $`=`$ $`2(216t^61)(x_n,y_n,z_n)(x_{n1},y_{n1},z_{n1})`$
$`+`$ $`(108t^4,108t^4,216t^4+4)`$
This should be related to the fact that the norm group scheme
$$u^23(108t^61)v^2=1,$$
admits a section of infinite order $`(u,v)=(216t^61,12t^3)`$.
|
no-problem/0003/cond-mat0003327.html
|
ar5iv
|
text
|
# Burst dynamics during drainage displacements in porous media: Simulations and experiments
## Abstract
We investigate the burst dynamics during drainage going from low to high injection rate at various fluid viscosities. The bursts are identified as pressure drops in the pressure signal across the system. We find that the statistical distribution of pressure drops scales according to other systems exhibiting self-organized criticality. The pressure signal was calculated by a network model that properly simulates drainage displacements. We compare our results with corresponding experiments.
Since the early 1980s physicists have paid attention to the complex phenomena observed when one fluid displaces another fluid in porous media. The papers that have appeared in the literature mostly refer to the rich variety of displacement structures that is observed due to different fluid properties like flow rate, viscosity, interfacial tension, and wettability. The major displacement structures have been found to resemble structures generated by geometrical models like invasion percolation (IP) , DLA , and anti-DLA . Only a few authors have addressed the interplay between the displacement structures and the evolution of the fluid pressure. In slow drainage when non-wetting fluid displaces slowly wetting fluid in porous media, the pressure evolves according to Haines jumps . The displacement is controlled solely by the pressure difference between the two fluids across a meniscus (the capillary pressure), and the non-wetting fluid invades the porous medium in a series of bursts accompanied by sudden negative pressure drops.
The purpose of this paper is to study the dynamics of the fluid pressure during drainage going from low to high displacement rates. To do so, we examine the statistical properties of the sudden negative pressure drops due to the bursts. We find that for a wide range of displacement rates and fluid viscosities, the pressure drops act in analogy to theoretical predictions of systems exhibiting self-organized criticality , like IP. Even at high injection rates, where the connection between the displacement process and IP is more open, the pressure drops behave similar to the case of extreme low injection rate, where IP apply. The pressures are calculated by a network model that properly simulates the fluid-fluid displacement. Moreover, we measure the fluid pressure in drainage experiments and compare that with our simulation results.
In the simulations a burst starts where the pressure drops suddenly and stops where the pressure has raised to a value above the pressure that initiated the burst (see fig. 1). Thus, a burst may consist of a large pressure valley containing a hierarchical structure of smaller pressure jumps (i.e. bursts) inside.
A pressure jump, indicated as $`\mathrm{\Delta }p`$ in fig. 1, is the pressure difference from the point where the pressure starts decreasing minus the pressure where it stops decreasing. We define the size of the pressure valley (valley size) to be $`\chi _i\mathrm{\Delta }p_i`$, where the summation index $`i`$ runs over all the pressure jumps $`\mathrm{\Delta }p_i`$ inside the valley. The definition is motivated by experimental work in ref. . For slow displacements we have that $`\chi `$ is proportional to the geometric burst size $`s`$, being invaded during the pressure valley. This statement has been justified in ref. , where it was observed that in stable periods, the pressure increased linearly as function of the volume being injected into the system. Later, in an unstable period where the pressure drops abruptly due to a burst, this volume is proportional to $`s`$. At fast displacements the pressure may no longer be a linear function of the volume injected into the system. Therefore, a better estimate of $`s`$ there, is to compute the time period $`T`$ of the pressure valley (fig. 1). Since the displacements are performed with constant rate, it is reasonable to assume that $`T`$ is always proportional to the volume being injected during the valley and hence, $`Ts`$.
We have computed the distributions of $`\chi `$ and $`T`$ from the pressure signals of simulations and experiments. We find that the distributions are consistent with a power law, independent of injection rate and fluid viscosities (figs. 2 and 4) and that the distribution of pressure jumps $`\mathrm{\Delta }p_i`$, follows an exponential decreasing function (fig. 3).
The network model used in the simulations is thoroughly discussed in refs. and and only its main features are presented below. The porous medium consists of a two-dimensional (2D) square lattice of cylindrical tubes oriented at $`45^{}`$ relative to one of the edges of the lattice. Four tubes meet at each intersection where we put a node having no volume. The disorder is introduced by moving the intersections a randomly chosen distance away from their initial positions, giving a distorted square lattice. The distances are chosen in the interval between zero and less than one half of the grid size to avoid overlapping intersections in the new lattice. We let $`d_{ij}`$ denote the length of the tube between node (intersection) $`i`$ and $`j`$ in the lattice and $`r_{ij}=d_{ij}/2\alpha `$ defines the corresponding radius of the tube. Here $`\alpha `$ is the aspect ratio between the tube length and its radius.
The tubes are initially filled with a wetting fluid of viscosity $`\mu _\mathrm{w}`$, and a non-wetting fluid of viscosity $`\mu _{\mathrm{nw}}`$ is injected at constant injection rate $`Q`$ along the bottom row. The wetting fluid is displaced and flows out along the top row and there are periodic boundary conditions in the horizontal direction. The fluids are assumed incompressible and immiscible and an interface (meniscus) is located where the fluids meet in the tubes. The capillary pressures of the menisci behave as if the tubes where hourglass shaped with effective radii following a smooth function. Thus, we let the capillary pressure $`p_\mathrm{c}`$ be a function of the meniscus’ position in the tube in the following way: $`p_\mathrm{c}=(2\gamma /r)[1\mathrm{cos}(2\pi x/d)]`$. Here we have omitted the subscripts $`ij`$. The first term results from Young-Laplace law when assuming that the principal radii of curvature of the meniscus are equal to the radius of the tube, and that the wetting fluid perfectly wets the medium. $`\gamma `$ denotes the interfacial tension between the fluids. In the second term $`x`$ is the position of the meniscus in the tube, i.e. $`0xd`$. The advantage of the above approach is that we include the effect of local readjustments of the menisci on pore level , which is important for the description of the burst dynamics .
The fluid flow $`q_{ij}`$ through a tube from node $`i`$ to node $`j`$, is solved by using Hagen-Poiseuille flow in cylindrical tubes and Washburn’s approximation for menisci under motion giving, $`q_{ij}=(\sigma _{ij}k_{ij}/\mu _{ij})(p_jp_ip_{\mathrm{c},ij})/d_{ij}`$. Here $`p_i`$ and $`p_j`$ are the pressures at the nodes, $`p_{\mathrm{c},ij}`$ is the capillary pressure if one or two menisci are present in the tube, and $`\mu _{ij}`$ is the effective viscosity of the fluids occupying the tube. $`k_{ij}`$ and $`\sigma _{ij}`$ is the permeability and the cross section of the tube, respectively. By inserting the above equation into Kirchhoff equations at every node, $`_jq_{ij}=0`$, constitutes a set of linear equations which are solved for the nodal pressures $`p_i`$. The set of linear equations is solved by the Conjugate Gradient method . See refs. and for how the menisci are updated and other numerical details about the network model.
To characterize the fluid properties used in the simulations, we use the capillary number $`C_\mathrm{a}`$ and the viscosity ratio $`M`$. $`C_\mathrm{a}`$, denoting the ratio of capillary and viscous forces, is in the following defined as $`C_\mathrm{a}Q\mu /\mathrm{\Sigma }\gamma `$. Here $`\mu `$ is maximum viscosity of $`\mu _{\mathrm{nw}}`$ and $`\mu _\mathrm{w}`$, and $`\mathrm{\Sigma }`$ is the cross section of the inlet. The viscosity ratio $`M`$, is defined as $`M\mu _{\mathrm{nw}}/\mu _\mathrm{w}`$.
We have performed three different series of simulations with $`M=0.01`$, $`1`$, and $`100`$, respectively. In each series $`C_\mathrm{a}`$ was varied by adjusting the injection rate $`Q`$. To obtain reliable average quantities we did 10 to 20 simulations of different distorted lattices, at each $`C_\mathrm{a}`$. The lattice size of the networks was $`60\times 90`$ nodes for $`M=0.01`$, $`40\times 60`$ nodes for $`M=1`$, and $`25\times 35`$ nodes for $`M=100`$. In all simulations we set $`\gamma =30\text{dyn}/\text{cm}`$, and the radii of the tubes were inside the interval $`[0.08,0.72]\text{mm}`$. The average tube length was always 1 mm. The parameters were chosen to be close to the experimental setup in .
For all simulations we calculated the hierarchical valley size distribution $`N_{\mathrm{all}}(\chi )`$. The distribution was calculated by including all valley sizes and the hierarchical smaller ones within a large valley (see fig. 1). The result for high, intermediate, and low $`C_\mathrm{a}`$ when $`M=1`$ and $`M=100`$ is shown in a logarithmic plot in fig. 2. Identical results were obtained for $`M=0.01`$. In order to calculate the valley sizes at large $`C_\mathrm{a}`$, we subtract the average drift in the pressure signal due to viscous forces such that the pressure becomes a function that fluctuates around some mean pressure.
By assuming a power law $`N_{\mathrm{all}}(\chi )\chi ^{\tau _{\mathrm{all}}}`$ our best estimate from fig. 2 is $`\tau _{\mathrm{all}}=1.9\pm 0.1`$, indicated by the slope of the solid line. At low $`\chi `$ in fig. 2, typically only one tube is invaded during the valley and we do not expect the power law to be valid. Similar results were obtained when calculating the hierarchical distribution of the time periods $`T`$ of the valleys, denoted as $`N_{\mathrm{all}}(T)`$.
In IP the distribution of burst sizes $`N(s)`$, where $`s`$ denotes the burst size, is found to obey the scaling relation
$$N(s)s^\tau ^{}g(s^\sigma (f_0f_\mathrm{c})).$$
(1)
Here $`f_\mathrm{c}`$ is the percolation threshold of the system and $`g(x)`$ is some scaling function, which decays exponentially when $`x1`$ and is a constant when $`x0`$. $`\tau ^{}`$ is related to percolation exponents like $`\tau ^{}=1+D_\mathrm{f}/D1/(D\nu )`$ , where $`D_\mathrm{f}`$ and $`D`$ is the fractal dimension of the front and the mass of the percolation cluster, respectively. $`D_\mathrm{f}`$ depends on the definition of the front, that is, $`D_\mathrm{f}`$ equals $`D_\mathrm{e}`$ for external perimeter growth zone and $`D_\mathrm{h}`$ for hull perimeter growth zone . $`\nu `$ is the correlation length exponent in percolation theory and $`\sigma =1/(\nu D)`$ . In eq. (1) a burst is defined as the connected structure of sites that is invaded following one root site of random number $`f_0`$, along the invasion front. All sites in the burst have random numbers smaller than $`f_0`$, and the burst stops when $`f>f_0`$, is the random number of the next site to be invaded .
By integrating eq. (1) over all $`f_0`$ in the interval $`[0,f_\mathrm{c}]`$ Maslov deduced a scaling relation for the hierarchical burst size distribution $`N_{\mathrm{all}}(s)`$ following
$$N_{\mathrm{all}}(s)s^{\tau _{\mathrm{all}}},$$
(2)
where $`\tau _{\mathrm{all}}=2`$.
In the low $`C_a`$ regime in fig. 2, the displacements are in the capillary dominated regime and the invading fluid generates a growing cluster similar to IP . In this regime we also have that $`\chi s`$ and hence $`N_{\mathrm{all}}(\chi )`$ corresponds to $`N_{\mathrm{all}}(s)`$ in eq. (2). Thus, in the low $`C_a`$ regime we expect that $`N_{\mathrm{all}}(\chi )`$ follows a power law with exponent $`\tau _{\mathrm{all}}=2`$ which is confirmed by our numerical results. Similar results were obtained in ref. .
The evidence in fig. 2, that $`\tau _{\mathrm{all}}`$ does not seem to depend on $`C_\mathrm{a}`$, is very interesting and new. At high $`C_\mathrm{a}`$ when $`M=0.01`$ an unstable viscous fingering structure generates and when $`M1`$ a stable front develops. It is an open question how these displacement processes map to the proposed scaling in eq. (2). We note that in the high $`C_a`$ regime the relation $`\chi s`$ may not be correct and $`T`$ is preferred when computing $`N_{\mathrm{all}}`$. However, the simulations show that $`N_{\mathrm{all}}(\chi )N_{\mathrm{all}}(T)`$ even at high $`C_a`$.
In it was pointed out that $`\tau _{\mathrm{all}}`$ is super universal for a broad class of self-organized critical models including IP. Our result in fig. 2 indicates that the simulated displacement processes might belong to the same super universality class even at high injection rates.
Maslov also calculated the time-reversed (backward) hierarchical burst size distribution and predicted that this distribution should follow a power law with a model-dependent exponent $`\tau _{\mathrm{all}}^b`$. In our case we are dealing with 2D IP with trapping giving $`\tau _{\mathrm{all}}^b=1.68`$. We have calculated $`\tau _{\mathrm{all}}^b`$ of our simulations by simply reversing the time axis in the pressure signal in fig. 1 and repeating the steps which led to fig. 2. From that we obtain $`\tau _{\mathrm{all}}^b=1.7\pm 0.1`$ which is consistent with the predictions in .
In the inset of fig. 2 we have plotted the cumulative valley size distribution $`N(\chi >\chi ^{})`$ for the simulation at lowest $`C_\mathrm{a}=1.6\times 10^5`$ with $`M=1`$. $`N(\chi >\chi ^{})`$ was calculated for bursts that starts at pressures in a narrow strip between 2800 and 3100 $`\text{dyn}/\text{cm}^2`$ where 3100 is the maximum pressure during the displacement. From eq. (1) we have that $`N(s)s^\tau ^{}`$ for bursts that start close to the percolation threshold $`f_\mathrm{c}`$. In our simulations $`f_\mathrm{c}`$ corresponds to the maximum pressure. It is hard to observe any power law in the inset of fig. 2, however, if we assume one, our best estimate is $`1\tau ^{}=0.5`$ as indicated by the slope of the solid line. In simulations and experiments gave $`1\tau ^{}=0.45\pm 0.10`$. We need larger system sizes and more simulations to improve our statistics, but we conclude that our result are in agreement of .
We have also calculated the cumulative pressure jump distribution function $`N(P>P^{})`$ for the simulations with $`M=1`$ and $`100`$ at various injection rates. Here $`P\mathrm{\Delta }p/\mathrm{\Delta }p`$ where $`\mathrm{\Delta }p`$ is the mean of the local pressure jumps $`\mathrm{\Delta }p`$ in the pressure signal (see fig. 1). The result for two simulation, one at high and the other at low $`C_\mathrm{a}`$, is plotted in fig. 3. Both were performed with viscosity matched fluids ($`M=1`$). The distributions have been fitted to exponentially decreasing functions drawn as dashed lines in fig. 3. At low $`C_\mathrm{a}`$ we find $`N(P>P^{})e^{1.38P^{}}`$, which is consistent with results in . At high $`C_\mathrm{a}`$ the distribution function was fitted to $`e^{1.02P^{}}`$. The pre-factor in the exponent of the exponential function seems to change systematically from about 1.4 to 1.0 as $`C_\mathrm{a}`$ increases. Similar results were obtain from simulations performed with $`M=100`$.
We have performed four drainage experiments where we used a $`110\times 180`$ mm transparent porous model consisting of a mono-layer of randomly placed glass beads of 1 mm, sandwiched between two Plexiglas plates . The model was initially filled with a water-glycerol mixture of viscosity 0.17 P. The water-glycerol mixture was withdrawn from one of the short side of the system at constant rate by letting air enter the system from the other short side. The pressure in the water-glycerol mixture on the withdrawn side was measured with a pressure sensor of our own construction.
From the recorded pressure signal we calculated the hierarchical distribution of time periods of the valleys, $`N_{\mathrm{all}}(T)`$. At low $`C_a`$ this corresponds to $`N_{\mathrm{all}}(s)`$ in eq. (2). Because of the relative long response time of the pressure sensor, rapid and small pressure jumps due to small bursts are presumably smeared out by the sensor and the recorded pressure jumps are only reliable for larger bursts. Hence, from the recorded pressure signal $`T`$ appears to be a better estimate of the burst sizes than $`\chi `$.
In fig. 4 we have plotted the logarithm of $`N_{\mathrm{all}}(T)`$ for experiments (open symbols) and simulations (filled symbols) performed at four different $`C_\mathrm{a}`$, respectively. To collapse the data $`N_{\mathrm{all}}(T)`$ and $`T`$ were normalized by their means. In the simulations $`M=0.01`$ while in the experiments $`M=0.017`$ where we have assumed air to have viscosity $`0.29\times 10^2`$ P. We observe that the experimental result is consistent with our simulations and we conclude that $`N_{\mathrm{all}}(T)T^{1.9\pm 0.1}`$. This confirms the scaling of $`N_{\mathrm{all}}(\chi )`$ in fig. 2. We have also calculated the time-reversed distribution of $`N_{\mathrm{all}}(T)`$ and the result of that is consistent with the time-reversed distribution that was calculated of the simulations in fig. 2.
Note that when comparing the $`C_\mathrm{a}`$’s of the experiments with the ones of the simulations in fig. 4, we have to take into account the different system sizes. The length of the experimental setup is about three times larger than the length of the simulation network. Therefore we expect that in the experiments, viscous fingering develops at $`C_\mathrm{a}`$’s of about three times less than in the simulations.
In summary we find that $`\tau _{\mathrm{all}}=1.9\pm 0.1`$ for all displacement simulations going from low to high injection rates when $`M=0.01`$, $`1`$, and $`100`$. This is also confirmed by drainage experiments performed at various injection rates with $`M=0.017`$. At low injection rates the result is consistent with the prediction in ($`\tau _{\mathrm{all}}=2`$), which was deduced for a broad spectrum of different self-organized critical models including IP. The evidence that $`\tau _{\mathrm{all}}`$ is independent of the injection rate, may indicate that the displacement process belongs to the same super universality class as the self-organized critical models in , even where there is no mapping between the displacement process and IP. The good correspondence between our simulation results and the drainage experiments in fig. 4 and also the results reported at slow drainage in , demonstrates that the burst dynamics is well described by our simulation model.
The authors thank S. Roux for valuable comments. The work is supported by the Norwegian Research Council (NFR) through a “SUP” program and we acknowledge them for a grant of computer time.
\***
|
no-problem/0003/nlin0003024.html
|
ar5iv
|
text
|
# A formula with hypervolumes of six 4-simplices and two discrete curvatures
## 1 Introduction
This short note continues the paper and the short note . We present a formula that naturally corresponds to one of the “Alexander moves” , i.e., “elementary rebuildings” of simplicial complexes. Our formula belongs to a four-dimensional space and deals with three “initial” 4-simplices in its l.h.s. and three “final” ones in is r.h.s.
Recall that a similar formula for a three-dimensional space was obtained in , and this was done on the basis of “duality formulas” (which are valid, themselves, for any-dimensional space) from <sup>1</sup><sup>1</sup>1The duality in can be said to be dealing with “branched polymers” known in quantum gravity..
## 2 Derivation of the formula
Consider six points $`A`$, $`B`$, $`C`$, $`D`$, $`E`$ and $`F`$ in the four-dimensional euclidean space<sup>2</sup><sup>2</sup>2Yet, as will be seen below, we will allow them sometimes to “go out in the fifth dimension”.. There exist fifteen distances between them, which we denote, like in papers , as $`l_{AB}`$, $`l_{AC}`$ and so on. There exist six 4-simplices with vertices in our points, and we denote those simplices as $`\overline{A}`$, …, $`\overline{F}`$, where, say, $`\overline{A}`$ is the simplex $`BCDEF`$ (not containing the vertex $`A`$). The four-dimensional hypervolume we will denote as $`V`$, e.g., $`V_{\overline{A}}`$ is the hypervolume of the simplex $`\overline{A}`$. We will need also the areas of two-dimensional faces ($`S_{ABC}`$ being the area of face $`ABC`$ and so on) and the “defect angles”, or “discrete curvatures” concentrated in those faces (the defect angle $`\omega _{ABC}`$ corresponds to the face $`ABC`$, etc.).
A defect angle means the following. With arbitrary distances $`l_{AB}`$, …, $`l_{EF}`$, the points $`A`$, …, $`F`$ may not necessarily be placed in the (“flat”) four-dimensional euclidean space. Any five of those points, however, can be placed there, thus forming a 4-simplex with vertices in those points, an then one can calculate the “dihedral angles” between its three-dimensional hyperfaces. There are three such “dihedral angles” at the two-dimensional face $`ABC`$ — they correspond to tetrahedra $`\overline{D}`$, $`\overline{E}`$ and $`\overline{F}`$. In the flat case, the sum of those angles is $`2\pi `$, and in the general case, it equals, by definition, $`2\pi \omega _{ABC}`$.
All our considerations will take place in a small neighborhood of the flat case $`\omega _{ABC}=0`$. Arguments perfectly analogous to those in , but using \[1, formulas (15, 16)\] instead of \[1, formulas (11, 12)\], yield
$$\frac{1}{12}\left|\frac{S_{ABC}l_{AB}dl_{AB}}{V_{\overline{D}}V_{\overline{E}}V_{\overline{F}}}\right|=\left|\frac{d\omega _{ABC}}{V_{\overline{A}}V_{\overline{B}}}\right|,$$
(1)
if only $`l_{AB}`$ of all distances can vary. Similarly, one can write
$$\frac{1}{12}\left|\frac{S_{DEF}l_{DE}dl_{DE}}{V_{\overline{A}}V_{\overline{B}}V_{\overline{C}}}\right|=\left|\frac{d\omega _{DEF}}{V_{\overline{D}}V_{\overline{E}}}\right|,$$
(2)
if only $`l_{AB}`$ can vary.
If both $`l_{AB}`$ and $`l_{DE}`$ can change, but the zero curvature is fixed:
$$\omega _{ABC}0,$$
(3)
which is obviously equivalent to
$$\omega _{DEF}0,$$
(4)
then $`dl_{AB}`$ and $`dl_{DE}`$ are related by
$$\left|\frac{l_{AB}dl_{AB}}{V_{\overline{D}}V_{\overline{E}}}\right|=\left|\frac{l_{DE}dl_{DE}}{V_{\overline{A}}V_{\overline{B}}}\right|$$
(5)
(cf. \[1, (16)\]), and similarly one can write out the relation between the differentials of any pair of distances.
Consider $`\omega _{ABC}`$ as a function of fifteen distances. In a neighborhood of the flat configuration, we have
$$d\omega _{ABC}=c_{AB}dl_{AB}+\mathrm{}+c_{EF}dl_{EF},$$
(6)
where all the ratios of coefficients $`c_{\mathrm{}}`$ are fixed (at least, up to a sign) by the formula (5) and the like formulae for other pairs of distances, e.g.,
$$\left|\frac{c_{AB}}{c_{DE}}\right|=\left|\frac{l_{AB}V_{\overline{A}}V_{\overline{B}}}{l_{DE}V_{\overline{D}}V_{\overline{E}}}\right|,$$
(7)
etc. If now we write, analogously,
$$d\omega _{DEF}=c_{AB}^{}dl_{AB}+\mathrm{}+c_{EF}^{}dl_{EF},$$
(8)
then the coefficients $`c_{\mathrm{}}^{}`$ will, obviously, have the same ratios. Thus, the differentials of curvatures $`\omega _{ABC}`$ and $`\omega _{DEF}`$ as functions of all distances are proportional, namely, from (1, 2 and 5) we find:
$$\left|\frac{d\omega _{ABC}}{S_{ABC}V_{\overline{A}}V_{\overline{B}}V_{\overline{C}}}\right|=\left|\frac{d\omega _{DEF}}{S_{DEF}V_{\overline{D}}V_{\overline{E}}V_{\overline{F}}}\right|.$$
(9)
Let us, finally, write this in the form of the desired “six-term equation”:
$$\frac{S_{ABC}\delta (\omega _{ABC})}{V_{\overline{D}}V_{\overline{E}}V_{\overline{F}}}=\frac{S_{DEF}\delta (\omega _{DEF})}{V_{\overline{A}}V_{\overline{B}}V_{\overline{C}}}.$$
(10)
Here $`\delta `$ is the Dirac delta function, and instead of writing out the absolute value signs, we assume that the signs of (oriented) hypervolumes and areas are chosen “properly”. It is implied that both sides of (10) can be integrated in any of $`dl_{AB}`$, …, $`dl_{EF}`$.
## 3 Remarks
1. Our equation (10) corresponds to a “move of type $`33`$”, i.e., three simplices are transformed into three new ones. Nontrivial seems the question of what to do with the other Alexander moves, that is $`24`$ and $`15`$. Similarly, in the paper the moves $`23`$ are analyzed, while $`14`$ requires further investigation.
2. Our formulas are likely to be useful for quantum gravity. Namely, they may help to find the most symmetric integration measure for “functional integrals” in the discrete Regge-type models of space-time.
3. The triangulated manifold where our rebuildings take place is not bound to be flat — see the similar Remark 2 in the end of paper .
|
no-problem/0003/astro-ph0003043.html
|
ar5iv
|
text
|
# Jets from compact objects
## 1. Jet precession and warped disks
Precession is measured directly in the jets of SS 433, whose direction varies with an approximate 165d period. Indirect evidence is seen in the morphology of the hot spots of AGN jets. An example is as Cyg A, where the radio lobes show ‘fossil’ hot spots, offset from the present (most luminous) hot spot position by rotation over angles of some 10 degrees. This gives an approximate point symmetric appearance to the lobes of.
If the central engine causes the direction of the jet to change with time (precession), its path in space at a given time appears curved, like the spray of water from a rotating garden sprinkler. At each point along the instantaneous path of the jet, there is a slight difference between the direction of fluid motion and the tangent to the jet’s path. In many cases, this may be the simplest explanation for apparent bending in FRII jets. Alternatives like redirection by clouds in the path of the jet have would be called for only if there is supporting evidence like the dissipation and decollimation that accompanies the redirection of supersonic flows by external obstacles (observe this by directing the jet from a garden hose at the tiles on your garden path).
If jets are produced by accreting compact objects, their flow direction is plausibly determined either by the rotation axis of the disk, or that of the accreting object. The rate at which the rotation axis of the compact object can change is limited by the rate $`\dot{M}/M`$ at which its angular momentum can change by accretion. The disk itself can change direction more rapidly, for example if the angular momentum vector of the gas supplied to the disk changes in time. It then takes only the viscous time in the disk for this change to propagate to the inner region where the jet originates. If the jet is caused by the disk itself, as in the magnetic wind model, its direction will naturally follow the orientation of the inner disk. If, on the other hand, the jet is caused by the rotation of the compact object, as in the Blandford-Znajek (1977) model (see also Blandford, 1993), one might at first sight expect the jet direction to be given also by the axis of the rotating hole. In this mechanism, however, a disk must be present to supply the magnetic field to the horizon of the hole and extract the rotation energy. Since the mechanism by itself does not produce a highly collimated jet, it is possible that the collimation of a Blandford-Znajek outflow is also provided by the disk. In this case, the direction of the jet could follow the disk axis even though it is powered by the hole.
In either case, we arrive at changes of orientation of a disk as a likely explanation for jet precession (cf. van den Heuvel et al. 1982). The most promising cause for such changes proposed so far is an instability due to irradiation of the disk by the central source. Such irradiation can cause the outer parts of the disk to develop a radiation-heated atmosphere which drives a wind (Begelman et al. 1983). Schandl and Meyer (1994) have shown that the momentum flux in such a wind can cause the disk to become unstable to bending out-of-the plane, i.e. warping. As soon as the disk is warped, the radiation intercepted by one side of the disk is larger than the other, and the wind pressure on that side larger. The net torque on the disk due to the difference in wind pressure causes the irradiated part of the disk to precess. At the same time, the warp propagates radially by viscous diffusion, and grows in amplitude with time. Shadowing of parts of the disk by warps in regions closer in makes the nonlinear development of the warps quite complicated. Schandl and Meyer show how this irradiation-driven wind instability can explain the precessing tilted disk in Her X-1.
A radiation-driven wind is expected to be important in the outer regions of a disk, where the Compton temperature corresponding to the incident X-ray spectrum is of the order of the escape velocity from the disk. Closer to the compact object, wind losses by this process are small. Pringle (1996) studied the same instability without an irradiation-induced wind, using only the effect of radiation pressure on the disk. Pringle (1997) follows the evolution of such warps to arbitrary tilt angles, including the self-shadowing effects, and concludes that the inner regions of AGN disks can tilt over more than 90. Apart from a time-dependent jet direction, this means that one should expect little correlation between jet axis and the plane of the host galaxy.
The equations for the evolution of warps in thin accretion disks have been corrected with respect to previous treatments, and put on a firm mathematical basis by Ogilvie (1999). He also presents a practical scheme for the numerical treatment of evolving warps of arbitrary amplitude.
A final cause for precession could be the momentum carried by a magnetically accelerated jet. As in the case of irradiation- and wind -induced warping, the reaction of the jet thrust on the disk may make the disk unstable if the thrust depends on the disk inclination. This possibility has apparently not been studied much, so far.
## 2. Unconfined jets
The confinement of jets, i.e. mechanisms opposing the widening and decollimation of the jet by the internal pressure, have played an important role in the early interpretations of jet observations (e.g. Begelman et al. 1984). It is useful to keep in mind the simple possibility of unconfined jets, i.e. purely ballistic flows like the jet from a fire hose, however. (This is sometimes called ‘inertial confinement’). The rate of unconfined sideways expansion due to internal pressure may actually be small enough to explain narrow jets in many cases, especially in relativistic jets. To see this, assume as an example the decollimation by internal pressure of an initially collimated jet. That is, we assume that the central engine provides a collimated jet and we follow how it widens when it is exposed to an external vacuum. The widening converts the enthalpy of the gas, $`w=c_\mathrm{s}^2/(\gamma 1)`$ into kinetic energy of expansion, where $`c_\mathrm{s}`$ is the sound speed and $`\gamma `$ the ratio of specific heats (assumed fixed). The expansion velocity (perpendicular to the jet axis), as seen in the comoving frame, is thus of the order of the sound speed. If the Lorentz factor of the jet is $`\mathrm{\Gamma }`$, the travel time of the jet (between the central engine and the its termination at the hot spot, for example, as seen in the comoving frame), is reduced by a factor $`\mathrm{\Gamma }`$. The opening angle $`\delta `$ of this freely expanding jet is thus $`\delta =c_\mathrm{s}/(\beta \mathrm{\Gamma }c)`$. As an example, even when the internal sound speed is initially as large as half the speed of light, the opening angle of a freely expanding jet with $`\mathrm{\Gamma }=10`$ is only 6 degrees.
The most collimated AGN jets are the FRII’s, for which Lorentz factors of order 10 are invoked, and where dissipation along the jet is small compared with the dissipation at the terminal shocks (the hot spots). In these jets, the above argument shows that collimation by an external medium is probably not neccessary on observed lengths scales (VLBI and up), if the central engine itself (the unresolved part) provides enough collimation. It is not desirable either, since the inevitable dissipation associated with the interaction with a collimating external agent would probably disagree with the observed low emission from the jet. Curvature of FRII jets could be accounted for by a modest precession rate (see section 1).
## 3. Knots in jets
Where resolved, jets in general show knots, i.e. sections of high brightness separated by low emission intervals. Various mechanisms have been proposed, including internal instabilities in the jet, or Kelvin-Helmholtz instability due to interaction with an environment. The simplest of all, proposed by Rees (1978) assumes that the central engine is not exactly steady, but that the flow speed varies by a modest amount. In the faster episodes, the flow overtakes the slower bits. If the jet speed is as highly supersonic as the observations indicate, this process produces large density contrasts and internal shocks . The dissipation in these shocks then produces the observed synchrotron emission (in the case of AGN) or molecular hydrogen and atomic emission (in the case of protostellar jets). In this model, as opposed to most others, the origin of the knots observed at large distances lies in the central engine. If this engine produces symmetric jets, one would expect the knots also to be symmetric on both sides. Internal instabilities and interaction with an environment do not naturally produce such symmetry. In FRII jets, the Lorentz factors are generally so large that only the approaching jet is clearly observable, so this prediction can not be easily tested. In the galactic superluminal sources known so far, however (e.g. Mirabel and Rodriguez, 1999), both the approaching and receding jet are seen, and the observed symmetry of knots (distorted by the Doppler effect, and thereby yielding allowing determination of orientation and speed of the jet) clearly argues in favor of the modulated-flow model. Convincing evidence is also given protostellar jets, where the knots (Herbig-Haro objects) are often quite symmetric. An example is the HH212 jet (Zinnecker et al. 1998), shown in figure 1. Though at larger distances the knots in this object are less symmetric, indicating interaction with the environment, the inner regions are beautifully symmetric and present a clear case for the modulated-flow model.
### 3.1. Internal shocks in GRB and Blazars
Modulation in the central engine also connects naturally with models of blazars and $`\gamma `$-ray bursts (GRB) in which internal shocks are invoked, and thereby plays a unifying role for models ranging from the protostellar to the GRB scale.
In blazars, a systematic variation of the spectral energy distribution with luminosity is observed (Fossati et al. 1998). In these sources, believed to be AGN with the relativistic jet pointed at the observer, the spectrum is dominated (energy-wise) by two humps, one in the IR-to-UV range and one in the gamma range. The first is interpreted as synchrotron emission, the second as Comptonized radiation (both Doppler-shifted by the relativistic motion towards us). The seed photons of the second hump can be either synchrotron radiation generated internally in the jet or external UV radiation from the accretion disk, and are upscattered by the inverse Compton process on energetic electrons in the jet. In order to produce the required energetic electrons, internal shocks are invoked in a model by Ghisellini et al. (1998). With increasing disk luminosity, the energetic electrons produced by the shocks are cooled more effectively by the inverse Compton process, so that both the synchrotron and the Compton peak shift to lower photon energy, while the luminosity carried by the Comptonized component increases relative to the synchrotron component. These are the correlations with luminosity noted by Fossati et al. For the model to work, the properties of the jet and the internal shocks are assumed to be rather insensitive to the overall luminosity of the source.
Collimated outflows are also invoked in models for GRB (e.g. Meszaros 1999, Sari et al. 1999). In some GRB (in particular the 23 January 1999 event), the total radiated energy in the $`\gamma `$ range can be as high as $`3\times 10^{54}`$ erg, if the source is assumed to radiate isotropically. If the source emits energy only in a narrow cone (pointed at us), the required energy budget can be reduced, bringing currently favored models for the central engine, based on stellar-mass collapsed objects, within the range of relevance. The very erratic light curves of GRB are interpreted, in this class of models, as due to a variation in the bulk Lorentz factor of the outflow, caused by a (strong) modulation in the power output of the central engine. Dissipation in the shocks that develop in this outflow of varying speed accelerates energetic electrons (as in blazar models), producing the observed radiation as synchrotron and synchrotron-self-Compton radiation (e.g. Huang et al. 1998, Chiang and Dermer 1999).
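For the relativistic case, a standard order-of-magnitude estimate (our addition, with illustrative numbers) places the collision of a fast shell, launched a time $`\mathrm{\Delta }t`$ after a slower shell of Lorentz factor $`\mathrm{\Gamma }_1`$, at a radius $`R\sim 2\mathrm{\Gamma }_1^2c\mathrm{\Delta }t`$:

```python
# Internal-shock radius for a relativistic outflow: a shell with
# Gamma2 >> Gamma1 >> 1, emitted dt after a slower shell, catches it at
# roughly R ~ 2 * Gamma1**2 * c * dt.  Numbers are illustrative only.
c = 3e10                    # cm/s
Gamma1 = 100.0              # slow-shell Lorentz factor (assumed)
dt = 0.01                   # 10 ms central-engine variability (assumed)

R = 2 * Gamma1**2 * c * dt
print(f"internal shocks at R ~ {R:.1e} cm")   # ~6e12 cm
```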
## 4. Acceleration mechanisms
There appear to be three acceleration mechanisms for jets still being pursued: the magnetohydrodynamic disk wind, the Blandford-Znajek mechanism, and the ‘Compton rocket’. The most popular is perhaps the magnetic wind model (Bisnovatyi-Kogan and Ruzmaikin 1976, Blandford 1976, Blandford and Payne 1982; for an introduction and more references see Spruit 1996). This is in part because detailed calculations are possible for this model, resulting in confidence that it is actually realizable in nature. It also has problems that prevent quantitative application to observed systems, however. One of these is the still poorly understood structure of the magnetic field in the disk. A global, ordered poloidal magnetic field is usually assumed (see, however, figure 4 in Blandford and Payne, 1982), but it is not clear if such a field can be maintained in a disk in the presence of turbulent diffusion (van Ballegooijen 1989, Lubow et al. 1994). Such diffusion seems indicated by current numerical simulations of magnetic turbulence in accretion disks (e.g. Hawley & Stone 1998, Armitage 1998). Related to this is the ‘launching problem’: the mass flow in the wind depends sensitively on the transition between disk and wind, which in turn depends on details of the disk stratification and field configuration near the disk surface which are poorly known. Not all disks show jets or outflows, and in the ones that do, the outflows are often sporadic in a way which does not correlate very clearly with the mass accretion rate. An example is GRS 1915+105, whose jets appear intermittently, possibly correlated with certain transitions in X-ray behavior but not with the X-ray flux itself. A third problem is that of collimation.
In the ‘Compton rocket’ model (O’Dell 1981, Kondo 1997, Renauld and Henri 1998), it is assumed that the disk produces, near its surface, a plasma consisting mostly of $`e^\pm `$ pairs. Pair annihilation in this plasma produces radiation which accelerates the remaining plasma outward. The physics of this model is closely related to the ‘fireball’ models for gamma-ray bursts (Paczyński 1986, Goodman 1986), in which an optically thick, high-temperature pair plasma expands as a relativistic outflow. As in the GRB case, the difficulty with the Compton rocket model is finding a plausible scenario for making the energetic pair plasma.
In the Blandford-Znajek (1977) mechanism the rotation energy of the accreting black hole is used. A magnetic field in the accretion disk feeds field lines through the horizon of the hole. (In the absence of an external source of field lines, the hole would be unmagnetized). Dragging of inertial frames near the rotating hole twists these field lines, putting stress into them which is released as a Poynting flux away from the hole. In the original form of the model, this Poynting flux is assumed to decay into an $`e^\pm `$-plasma, in much the same way as is believed to happen in the Crab pulsar wind (e.g. Gallant and Arons 1994, Melatos and Melrose 1996). The escaping relativistic pair wind forms a jet (after suitable collimation, for example by the same disk-maintained magnetic field that magnetizes the hole).
It is not necessary that the wind be exclusively in the form of a pair plasma. The accretion flow inside the last stable orbit carries baryonic matter into the region where the field line twisting occurs, and this matter may also be accelerated out as a wind. Quantitative models in which this happens have been made by Camenzind (in preparation). If the plasma density remains large enough in the outflow for an MHD approximation to be valid, it will remain a normal (ionized gas) plasma rather than a pair plasma. In these models, there is a gradual transition from a disk-generated magnetic wind, at larger distances from the hole, to a wind powered by field lines dragged around in the hole’s rotating gravitational field. This shows that, though it is often assumed that the Blandford-Znajek mechanism will produce a jet consisting of pair plasma, it is equally possible that it will result in a normal plasma outflow.
An energetic argument has been given by Livio et al. (1999) in which the jet-powering capacity of a rotating hole is compared with that of the inner regions of an accretion disk. When a magnetic field is twisted, the energy stored in the field by the twisting torque generally limits the twisted field component to a value not larger than the original untwisted field strength (in some volume-averaged sense). When the field is strained further, the field lines open up, and the twisted (azimuthal) field component as well as the torque decrease again (Aly 1991, 1994, Lynden-Bell & Boily 1994). Approximating the twisting as if it were all occurring at the horizon $`r_\mathrm{h}`$, the maximum torque on the hole is thus of the order $`B_\mathrm{h}^2r_\mathrm{h}^3`$, where $`B_\mathrm{h}`$ is the field strength at the horizon, and the rate of energy extraction is $`\mathrm{\Omega }_\mathrm{h}B_\mathrm{h}^2r_\mathrm{h}^3`$. Since $`B_\mathrm{h}`$ is provided by the inner regions of the disk, it is not larger than the field strength $`B_\mathrm{d}`$ in these regions. The maximum stress exerted on the disk by a magnetic disk wind is of the order $`B^2/2\pi `$, hence the total wind torque from the inner regions, with radius $`r_\mathrm{d}`$ and area $`\pi r_\mathrm{d}^2`$, has a maximum of the order $`B_\mathrm{d}^2r_\mathrm{d}^3`$. The maximum rate of energy extraction is then $`\mathrm{\Omega }_\mathrm{d}B_\mathrm{d}^2r_\mathrm{d}^3`$, where $`\mathrm{\Omega }_\mathrm{d}`$ is the rotation rate of the disk. With these estimates, the energy extraction rate from a hole by a wind is less than the maximum rate of energy extraction by a magnetic wind from the disk surrounding it, except when the hole rotates near its maximum rate, in which case the two are comparable.
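The comparison can be put into a small numerical sketch (ours; the mass, field strength, and radii are placeholder values, with $`r_\mathrm{h}\sim GM/c^2`$ and the extreme-Kerr rotation rate $`\mathrm{\Omega }_\mathrm{h}=c/(2r_\mathrm{h})`$ assumed):

```python
# Order-of-magnitude powers ~ Omega * B**2 * r**3 for hole and inner disk.
def extraction_rate(Omega, B, r):
    """Torque * Omega estimate, ~ Omega * B**2 * r**3 (cgs)."""
    return Omega * B**2 * r**3

G, c, Msun = 6.674e-8, 3e10, 2e33
M = 1e8 * Msun                        # black-hole mass (placeholder)
r_h = G * M / c**2                    # horizon radius scale
r_d = 3 * r_h                         # "inner disk" radius (placeholder)
B = 1e4                               # gauss; same field threads hole and disk
Omega_h = 0.5 * c / r_h               # extreme-Kerr hole rotation rate
Omega_d = (G * M / r_d**3) ** 0.5     # Keplerian rotation of the inner disk

ratio = extraction_rate(Omega_h, B, r_h) / extraction_rate(Omega_d, B, r_d)
print(f"P_hole / P_disk ~ {ratio:.2f}")   # ~0.1: the hole does not win
```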
Though the hole can have a very large rotational energy available, the rate at which this can be extracted does not exceed the rate at which the disk itself can power a magnetic jet. Note that this conclusion is not an argument against the possible importance of the Blandford-Znajek mechanism. It may well be that there are reasons why the disk itself is not able to reach its possible maximum wind power, while the hole onto which it accretes is happily powering a Blandford-Znajek flow. But the reverse may also be the case.
### 4.1. Observational clue: photon drag
A relativistic jet accelerated inside a dense radiation field will upscatter these photons to higher energies, as in the blazar models mentioned, but in the process the jet also loses kinetic energy. If a jet is accelerated to its terminal speed close to a disk (accreting at a known rate), the rate of energy loss in some observed systems would dominate the accretion power (for example, the galactic superluminal source GRS1915+105, see Gliozzi et al. 1999).
This problem is solved if the acceleration can be spread out over a large distance (compared with the inner disk), so that the radiation density is low in the region of largest speed. This is not the case with the Compton rocket mechanism, in which the acceleration is assumed to take place at the disk surface.
### 4.2. Poynting flux
Magnetic acceleration models can satisfy this requirement elegantly. In a hydromagnetic disk wind, for example, the acceleration is gradual, with most of the acceleration (energetically speaking) taking place away from the disk, near the Alfvén radius. The acceleration of a hydromagnetic disk wind is usually described in terms of the centrifugal force acting along the field lines (‘bead-on-a-wire’), which illustrates that the acceleration is gradual. Alternatively, the acceleration in this model can be described in terms of energy fluxes. Near the disk, the flux is in the form of a Poynting flux, which gradually gets converted (in part) into a kinetic energy flux. The two descriptions are equivalent. Thus, it is not necessary to appeal exclusively to a Blandford-Znajek process when observations indicate the need for a Poynting flux. An MHD wind will do just as well.
### 4.3. $`e^\pm `$-Winds from accretion disks
The distinction between the observational consequences of the Blandford-Znajek and the magnetic disk wind mechanisms is further blurred by the possibility that disks may produce a pair-dominated wind instead of a normal plasma flow. Suppose that the disk has a strong poloidal magnetic field (i.e. with field lines sticking out above the disk). If conditions are right, disk matter is accelerated up along the field centrifugally, causing an ordinary plasma flow. (The conditions are that the temperature in the disk atmosphere is high enough and/or the inclination of the field lines with respect to the vertical is large enough.)
On the other hand, if these conditions are not satisfied, and mass flow from the disk into the wind region is inhibited, the rotating magnetic field of the disk will still produce effects like those in pulsar magnetospheres such as that of the Crab pulsar (e.g. Michel 1991, Gallant and Arons 1994). The enormous electric field strengths associated with the rotating vacuum magnetic field near a black hole accelerate any stray plasma particles to energies sufficient to create pairs. These are themselves accelerated and create a pair cascade until a sufficiently dense pair plasma is produced to limit the electric field by plasma currents. This plasma is then accelerated outward as an MHD wind in much the same way as a normal ionized gas plasma. Except for the different source of rotational energy, the process would be much the same as in the Blandford-Znajek case, including questions such as what collimates the flow.
## 5. The magnetic acceleration model
Details of the magnetic acceleration model have been given on numerous occasions (e.g. Spruit 1996). Only a few of its properties are discussed in the following, in which axisymmetric steady flow is assumed. Deviations from axisymmetry and steadiness are probably important in particular for the collimation phase, as discussed below.
The process can be divided conceptually into three stages. In the transition from disk to wind (the ‘launching’ phase), the mass flux into the wind is regulated by the temperature of the plasma and the inclination and strength of the field lines (Blandford and Payne 1982, Ogilvie & Livio 1998). After passing through the sonic point (located close to the disk surface unless the temperature is near virial) the main acceleration phase sets in, and the wind can be treated as cold to a good approximation (gas pressure negligible). The acceleration is essentially complete at the Alfvén surface. Finally, there must be a collimation phase, during and/or after the acceleration phase.
For non-relativistic flows, the essence of the model is described by the cold (gas pressure neglected) Weber-Davis model (1967, hereafter WD model). This model depends on only one parameter, a magnetization or mass flux parameter. If $`\eta =\rho v_\mathrm{p}/B_\mathrm{p}`$ is the mass flux along a field line, per unit of magnetic flux, then a dimensionless mass flux parameter is
$$\mu =4\pi \eta v_{\varphi 0}/B_0,$$
(1)
where $`v_{\varphi 0}`$ is the orbital velocity of the foot point of the field line at the disk, and $`B_0`$ the field strength there. The terminal speed of the flow is (Michel, 1969):
$$v_{\mathrm{\infty }}=v_{\varphi 0}\mu ^{-1/3}$$
(2)
At low mass loading, $`\mu \ll 1`$, the final speed exceeds the orbital velocity at the source of the wind. In this case, the centrifugal acceleration picture is a good description of the flow. Inside the Alfvén surface, which is far from the disk, the flow corotates approximately with the foot point. For $`\mu \gg 1`$, there is no corotating region; instead, the flow is more accurately described as being slowly pushed outward by a highly coiled-up magnetic field. The transition from low mass flux $`\mu <1`$ to high mass flux $`\mu >1`$ has been studied in numerical simulations by Turner et al. (2000). Because of the weak dependence of the terminal velocity on the mass flux, the terminal speed tends to be near the escape speed $`v_{\varphi 0}`$. Large terminal speeds require low mass fluxes, and correspondingly low gas densities in the accelerating region. In numerical simulations, such low densities are hard to deal with because they imply large Alfvén speeds that strongly limit the time step. A low-density, high-velocity magnetically accelerated wind is an intrinsically hard numerical problem.
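A minimal numerical illustration of this scaling (our sketch; the footpoint orbital speed is an arbitrary choice):

```python
# Michel (1969) terminal-speed scaling, v_inf = v_phi0 * mu**(-1/3):
# a weak dependence, so v_inf stays near the footpoint orbital speed
# unless the mass loading mu is very small.
v_phi0 = 200.0                     # km/s, footpoint orbital speed (assumed)
for mu in (1e-3, 1e-1, 1.0, 10.0):
    v_inf = v_phi0 * mu ** (-1.0 / 3.0)
    print(f"mu = {mu:6g}  ->  v_inf = {v_inf:7.1f} km/s")
```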
Qualitative properties of the wind that are not captured by the Weber-Davis model are the collimation (the WD wind is uncollimated) and the asymptotic ratio of Poynting- to kinetic energy fluxes. In an axisymmetric WD wind this ratio is of order unity. This also applies in general if the wind is asymptotically well collimated parallel to the rotation axis. If the poloidal field lines diverge sufficiently rapidly with distance, near the Alfvén surface, much more of the Poynting flux is converted into kinetic energy (Begelman and Li, 1994). Such flows are poorly collimated.
If the gas pressure is not neglected, the stationary disk wind (in a Weber-Davis approximation, or if the shape of the poloidal field lines is assumed to be given) depends on two parameters: a mass flux parameter and a temperature parameter. For a concise treatment see Sakurai (1985).
In the relativistic case the equivalent of the WD model has been given by Michel (1969, 1973). There are now two parameters on which the results depend. In addition to the mass flux parameter, the finite speed of light introduces a relativity parameter $`v_\varphi /c`$, where $`v_\varphi `$ is the rotation velocity of the foot point of a field line. As the mass flux decreases, the Alfvén radius asymptotically approaches the light surface (light ‘cylinder’) $`r_\mathrm{L}=rc/v_\varphi `$ from the inside. Detailed two-dimensional models for steady flows of this kind have been made by Camenzind (1987). For recent time dependent, general-relativistic simulations see Koide et al. (1999).
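A small numerical sketch (ours; the black-hole mass and footpoint radii are arbitrary) of the light-‘cylinder’ radius $`r_\mathrm{L}=rc/v_\varphi `$ for Keplerian footpoints:

```python
# For a Keplerian footpoint at radius r, v_phi = (G*M/r)**0.5, so
# r_L = r*c/v_phi grows as r**(3/2) in units of r_g = G*M/c**2.
G, c, Msun = 6.674e-8, 3e10, 2e33        # cgs
M = 10 * Msun                            # illustrative black-hole mass
rg = G * M / c**2
for r_over_rg in (6, 30, 100):           # footpoint radius in units of r_g
    r = r_over_rg * rg
    v_phi = (G * M / r) ** 0.5           # Keplerian rotation speed
    r_L = r * c / v_phi
    print(f"r = {r_over_rg:3d} r_g  ->  r_L = {r_L/rg:6.1f} r_g")
```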
### 5.1. Jet (de-)collimation
In axisymmetric steady calculations, excellent collimation of the flow is found in most cases. Many flows become asymptotically parallel to the rotation axis, i.e. the collimation is perfect. The calculations predict that the flow is initially (near the disk surface) poorly collimated. This is because in order for centrifugal acceleration to work, the flow initially has to move away from the axis. Evidence for such poor initial collimation is found in some of the best resolved jets (e.g. Junor et al. 1999). The mechanism that achieves collimation in almost all magnetic models proposed so far is the ‘hoop stress’ of the wound-up magnetic field, and this does indeed work very well in axisymmetry. It is highly likely, however, that this nearly azimuthal field is very unstable to nonaxisymmetric modes which destroy the collimating hoop stresses. An external collimating mechanism is probably required for the magnetic disk-wind model. In Spruit et al. (1997) it is argued that this ingredient is poloidal magnetic flux anchored in the disk at larger distances from the central object.
## References
Aly, J.J., 1991, ApJ, 375, 61
Aly, J.J., 1994, A&A, 288, 1012
Armitage, P.J. 1998, ApJ, 501, 189
Begelman M.C., McKee, C.F. & Shields, G.A. 1983, ApJ, 271, 70
Begelman, M.C., Blandford, R.D. & Rees, M.J. 1984, Rev. Mod. Phys., 56, 255
Begelman, M.C. & Li, Z.Y. 1994, ApJ, 426, 269
Bisnovatyi-Kogan, G. & Ruzmaikin, A.A. 1976, Ap&SS, 42, 401
Blandford, R.D. 1976, MNRAS, 176, 465
Blandford, R.D. & Znajek, R.L. 1977, MNRAS, 179, 433
Blandford, R.D. & Payne, D.G. 1982, MNRAS, 199, 883
Blandford, R.D. 1993, in Astrophysical Jets, eds. D. Burgarella, M. Livio & C. O’Dea, (Cambridge: Cambridge University Press), 15
Camenzind, M. 1987, A&A, 184, 341
Fossati, G., Maraschi, L., Celotti, A., Comastri, A. & Ghisellini, G., 1998, MNRAS, 299, 433
Gallant Y.A., & Arons J., 1994, ApJ, 435, 230
Ghisellini, G., Celotti, A., Fossati, G., Maraschi, L., & Comastri, A., 1998, MNRAS, 301, 451
Gliozzi, M., Bodo, G., Ghisellini, G., 1999, MNRAS, 303, 37
Goodman, J. 1986, ApJ, 308, 47
Hawley, J.F., Stone, J.M., 1998, ApJ, 501, 758
Huang, Y.F., Dai, Z.G., Wei, D.M., & Lu, T., 1998, MNRAS, 298, 459
Junor, W., Biretta, J.A., & Livio, M. 1999, Nature 401, 891
Koide S., Meier D.L., Shibata K., & Kudoh T., astro-ph/9907435 (see also this volume).
Kondo M., 1997, in The Central Regions of the Galaxy (IAU Symp 184), 244
Livio, M., Ogilvie, G.I., Pringle, J.E., 1999, ApJ, 512, 100
Lubow, S.H., Papaloizou, J.C.B., & Pringle, J.E., 1994, MNRAS, 267, 235
Lynden-Bell, D. & Boily, C., 1994, MNRAS, 267, 146
Melatos, A., Melrose, D. 1996, MNRAS, 279, 1168
Meszaros, P. 1999, A&AS, 138, 533
Michel, F.C. 1969, ApJ, 158, 727
Michel, F.C. 1973, ApJ, 180, L133
Michel, F.C. 1991, Theory of neutron star magnetospheres (Chicago: Univ. Chicago Press)
Mirabel, I.F., & Rodriguez, L.F. 1999, ARA&A, 37, in press
O’Dell S. L., 1981, ApJ, 243, L143
Ogilvie, G.I., & Livio, M., 1998, ApJ, 499, 329
Ogilvie, G.I. 1999, MNRAS, 304, 557
Paczyński, B., 1986, ApJ, 304, 1
Pringle J.E., 1996, MNRAS, 281, 357
Pringle J.E., 1997, MNRAS, 292, 136
Rees M.J. 1978, MNRAS, 184, 61
Renauld N., & Henri G., 1998, MNRAS, 300, 1047
Sakurai, T. 1985, A&A, 152, 121
Sari, R., Piran, T., & Halpern, J.P. 1999, ApJ, 519, 17
Schandl S., & Meyer F., 1994, A&A, 289, 149
Spruit, H.C. 1996, in ‘Physical Processes in Binary Stars’, eds. R.A.M.J. Wijers, M.B. Davies and C.A. Tout, Kluwer, Dordrecht, p. 249 (see also http://www.mpa-garching.mpg.de/~henk)
Spruit, H.C., Foglizzo, T. & Stehle, R. 1997, MNRAS, 288, 333
Turner, N.J., Bodenheimer, P. & Różyczka, M. 2000, preprint
van Ballegooijen, A.A. 1989, in Accretion Disks and Magnetic Fields in Astrophysics, ed. G. Belvedere (Dordrecht: Kluwer), p.99
van den Heuvel, E.P.J., Ostriker, J.P., Petterson, J.A., 1980, A&A, 81, L7
Weber, E.J., & Davis, L. 1967, ApJ, 148, 217
Zinnecker, H., McCaughrean, M. J., & Rayner, J. T. 1998, Nature, 394, 862
AN OUTLINE OF RADIATIVELY-DRIVEN COSMOLOGY
Robert L. Kurucz
Harvard-Smithsonian Center for Astrophysics, 60 Garden St, Cambridge, MA 02138
December 13, 1991
Revised January 24, 1993
Revised October 2, 1993
Revised October 19, 1994
Revised May 8, 1997
Revised March 5, 2000
ABSTRACT
A Big Bang universe consisting, before recombination, of H, D, <sup>3</sup>He, <sup>4</sup>He, <sup>6</sup>Li, and <sup>7</sup>Li ions, electrons, photons, and massless neutrinos, at closure density, with a galaxy-size perturbation spectrum but no large-scale structure, will evolve into the universe as we now observe it. Evolution during the first billion years is controlled by radiation. Globular clusters are formed by radiatively-driven implosions, galaxies are formed by radiatively triggered gravitational collapse of systems of globular clusters, and voids are formed by radiatively-driven expansion. After this period the strong radiation sources are exhausted and the universe has expanded to the point where further evolution is determined by gravity and universal expansion.
Subject headings: cosmology — stars: Population III — stars: Population II — clusters: globular — galaxies: evolution
1. INTRODUCTION
Cosmology suffers from the same sort of conceptual error as did geology and evolutionary biology earlier in this century. “Gradualism” or “uniformitarianism” and slow changes are assumed, probably because it makes modeling easier. “There was the Big Bang. There was decoupling (= recombination). Nothing much else has happened. Gravity is the only force that matters. Evolution is proceeding slowly and only a fraction of matter has formed galaxies and stars. The “microwave” background just sits there. The only important science is determining the expansion parameters”. Gradual evolution has always turned out to be a delusion produced by oversimplification.
In reality, a second force is produced by radiative acceleration. It triggers rapid collapses that go to almost 100% completion. It produces “catastrophic” or “episodic” evolution.
Here we present the results of gedanken experiments (Kurucz 1992) in a traditional, linear, chronological sequence in the hope of stimulating research on the many topics considered.
2. CONDITIONS BEFORE RECOMBINATION
The evolution of the universe from before recombination to the present time can be explained by simple, elementary physics. Let us start when the universe is a few hundred thousand years old, at the time when the temperature has fallen to about 10000K. Let it consist of H, D, <sup>3</sup>He, <sup>4</sup>He, <sup>6</sup>Li, and <sup>7</sup>Li ions, electrons, photons, and massless neutrinos at the closure density, between 10<sup>4</sup> and 10<sup>5</sup> per cubic centimeter. Abundances are taken from standard Big Bang nucleosynthesis calculations shown in Figure 1. These abundances are ten times higher for Li, and ten times lower for <sup>3</sup>He and D, than cosmologists have assumed in the past, but they are consistent with observation (He, Sasselov and Goldwirth 1995; D upper limit, Lubowich et al. 1994; Li, Kurucz 1995) in that there are no observations of primordial <sup>3</sup>He or D, and in that the Li abundance in extreme Population II stars has been grossly underestimated.
The gas is opaque. The redshift Z is approximately 1300. There is uniform expansion and cooling of the universe. There is no large-scale structure; the universe is filled with galaxy-size perturbations in density and temperature that were created at an earlier time. As the universe expands those perturbations evolve into highly structured galaxies with myriad condensations, and the galaxies themselves form large-scale structures.
3. GALAXY-SIZE PERTURBATIONS
Ignoring mergers and collisional destruction, every galaxy extant corresponds to a prerecombination perturbation, and vice versa. Thus the distribution function for the perturbation masses is approximately the distribution function for galaxy masses now, except at the extremes. There are no symmetries in the initial galaxy-size perturbations. They have facets, convexities, concavities, etc. from early close packing (as in a Voronoi tessellation). Because there is no symmetry, every perturbation has angular momentum.
The perturbations are in quasi-hydrostatic equilibrium with gravity pulling inward trying to increase the density while radiative acceleration pushes outward trying to smooth out the perturbation. The cosmological expansion enhances the perturbation. The denser, hotter regions are compressed (i.e., they expand less rapidly) while less dense regions are pulled apart. The local gravity vector g does not point radially toward the perturbation “center”. The radiative acceleration vector g<sub>rad</sub> has similar components pointing in the opposite direction. The effective gravity at any point is g<sub>eff</sub> = g + g<sub>rad</sub> . The surface and volume of the perturbations are defined by the surfaces g<sub>eff</sub> = 0.
From this starting point the universe continues to expand and cool until the temperature drops to a few thousand degrees. The electrons combine with the ions until most of the matter is neutral or negative. The opacity of the gas drops and radiative acceleration plummets.
4. FORMATION OF ATOMS AND MOLECULES
Recombination actually starts as soon as electrons and protons are formed. What happens at “recombination” or “decoupling” is that photoionization (of hydrogen) stops. The electron number density drops drastically so that the gas pressure drops by a factor of 2. The electron contribution to the opacity drops drastically as does the radiative acceleration.
Recombination and cooling are much more complicated than has been assumed. The recombinations, in order of energy, are listed below (a conversion of the quoted wavenumbers to electron volts is sketched after the list):
<sup>7</sup>Li<sup>+++</sup> + e $`\rightarrow `$ <sup>7</sup>Li<sup>++</sup> + 987660 cm<sup>-1</sup>
<sup>6</sup>Li<sup>+++</sup> + e $`\rightarrow `$ <sup>6</sup>Li<sup>++</sup> + 987647 cm<sup>-1</sup>
<sup>7</sup>Li<sup>++</sup> + e $`\rightarrow `$ <sup>7</sup>Li<sup>+</sup> + 610080
<sup>6</sup>Li<sup>++</sup> + e $`\rightarrow `$ <sup>6</sup>Li<sup>+</sup> + 610066
<sup>4</sup>He<sup>++</sup> + e $`\rightarrow `$ <sup>4</sup>He<sup>+</sup> + 438909
<sup>3</sup>He<sup>++</sup> + e $`\rightarrow `$ <sup>3</sup>He<sup>+</sup> + 438889
<sup>4</sup>He<sup>+</sup> + e $`\rightarrow `$ <sup>4</sup>He + 198311
<sup>3</sup>He<sup>+</sup> + e $`\rightarrow `$ <sup>3</sup>He + 198291 ?
<sup>2</sup>H<sup>+</sup> + e $`\rightarrow `$ <sup>2</sup>H + 109709
<sup>1</sup>H<sup>+</sup> + e $`\rightarrow `$ <sup>1</sup>H + 109679
<sup>7</sup>Li<sup>+</sup> + e $`\rightarrow `$ <sup>7</sup>Li + 43487
<sup>6</sup>Li<sup>+</sup> + e $`\rightarrow `$ <sup>6</sup>Li + 43472
<sup>2</sup>H + e $`\rightarrow `$ <sup>2</sup>H<sup>-</sup> + 6061+ ?
<sup>1</sup>H + e $`\rightarrow `$ <sup>1</sup>H<sup>-</sup> + 6061
<sup>7</sup>Li + e $`\rightarrow `$ <sup>7</sup>Li<sup>-</sup> + 4981
<sup>6</sup>Li + e $`\rightarrow `$ <sup>6</sup>Li<sup>-</sup> + 4981- ?
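For reference (our addition, not in the original), the quoted wavenumbers convert to electron volts via 1 cm<sup>-1</sup> = 1.2398x10<sup>-4</sup> eV; a few checks recover the familiar ionization and binding energies:

```python
# Convert selected recombination energies from cm^-1 to eV.
CM1_TO_EV = 1.2398e-4     # h*c in eV per cm^-1
for label, wavenumber in [("Li+++ + e -> Li++", 987660),
                          ("He+   + e -> He",   198311),
                          ("H+    + e -> H",    109679),
                          ("H     + e -> H-",     6061)]:
    print(f"{label:18s} {wavenumber:7d} cm^-1 = {wavenumber*CM1_TO_EV:7.2f} eV")
# -> 122.45, 24.59, 13.60 and 0.75 eV
```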
At the same time, there may also be high temperature molecules: all the positive and negative ions of Li<sub>2</sub>, LiHe, LiH, He<sub>2</sub>, HeH, and H<sub>2</sub> and their isotopomers. For example,
<sup>7</sup>Li<sup>1</sup>H<sup>++</sup> <sup>7</sup>Li<sup>1</sup>H<sup>+</sup> <sup>7</sup>Li<sup>1</sup>H <sup>7</sup>Li<sup>1</sup>H<sup>-</sup>
<sup>7</sup>Li<sup>2</sup>H<sup>++</sup> <sup>7</sup>Li<sup>2</sup>H<sup>+</sup> <sup>7</sup>Li<sup>2</sup>H <sup>7</sup>Li<sup>2</sup>H<sup>-</sup>
<sup>6</sup>Li<sup>1</sup>H<sup>++</sup> <sup>6</sup>Li<sup>1</sup>H<sup>+</sup> <sup>6</sup>Li<sup>1</sup>H <sup>6</sup>Li<sup>1</sup>H<sup>-</sup>
<sup>6</sup>Li<sup>2</sup>H<sup>++</sup> <sup>6</sup>Li<sup>2</sup>H<sup>+</sup> <sup>6</sup>Li<sup>2</sup>H <sup>6</sup>Li<sup>2</sup>H<sup>-</sup>
Passing through each He recombination reduces the number of particles and the gas pressure by 5%. The H recombination reduces the number of particles and the gas pressure by 45%. Li remains partially ionized and provides free electrons which can form H<sup>-</sup> and Li<sup>-</sup>. It can also participate in charge exchange reactions.
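These fractions can be checked by simple particle counting; the sketch below (ours) assumes a helium abundance n<sub>He</sub>/n<sub>H</sub> = 0.1 by number and ideal-gas pressure proportional to the particle number at fixed temperature, and roughly reproduces the quoted ~5% and ~45% drops:

```python
# Particle number per hydrogen nucleus at each recombination stage.
y = 0.1                                       # assumed n_He / n_H
stages = [("fully ionized (H+, He++)", 2 + 3 * y),
          ("after He++ -> He+",        2 + 2 * y),
          ("after He+  -> He",         2 + 1 * y),
          ("after H+   -> H",          1 + 1 * y)]
for (_, n_prev), (name, n) in zip(stages, stages[1:]):
    print(f"{name:20s}: particle number (and P_gas) drops "
          f"by {100 * (1 - n / n_prev):4.1f}%")
```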
Decoupling is never complete because there are free electrons from the Li that Thomson scatter, because H and He Rayleigh scatter, because Li has lines in the visible that are optically thick on globular cluster scales, and because H<sup>-</sup> has continuous absorption in the visible and infrared that is optically thick at galaxy scales. The universe is optically thick to the recombination radiation. Thus the “microwave” background is not from the primordial black body but from a later time.
5. FORMATION OF GLOBULAR-CLUSTER-SIZE PERTURBATIONS
When the radiation field suddenly decouples, g<sub>rad</sub> becomes small and P<sub>gas</sub> collapses by a factor of more than 2, and g<sub>eff</sub> suddenly, impulsively increases to g. This inward impulse produces waves that travel at the speed of sound. However, because there is no symmetry, these waves cannot behave coherently. They cannot propagate far before interacting with other waves. They interfere in three dimensions. Perhaps they form shocks. The globular-cluster-size perturbation spectrum that they produce has high-density, low-mass maxima and low-density minima, all superimposed on the galaxy-size perturbation (Figure 2). At this stage every point in the universe has two peculiar velocity components: one toward the local globular-cluster-size perturbation maximum and one toward the local galaxy-size perturbation maximum. Research is needed to find out whether the waves leave behind microturbulent motions in the perturbations.
The temperature changes in the new perturbations are spectacular. In the less dense regions the temperature drops. In the dense centers the gas heats and partially ionizes. The opacity increases. Positively and negatively charged atoms and molecules flourish and radiate through the cool surface. As soon as the “recombination” or “decoupling” era begins it is over. The background blackbody radiation is completely destroyed. The radiation field comes from globular-cluster-size perturbations irradiating each other.
The universal expansion amplifies perturbations. Minima become relatively wider and maxima become sharper, both on the galaxy-size scale and on the globular-cluster-size scale. The universal expansion naturally separates the galaxy-size-perturbations and produces surfaces through which there can be outward flux. This also happens with the globular-cluster-size perturbations, and the outermost globular-cluster-size perturbations can radiate out of the galaxy-size perturbations and thus cool more rapidly than interior perturbations.
Coldness is a modern invention. The temperature of any matter never got below 500K, say, until the initial Population II stars produced dust by mass loss. The physics of the contemporary interstellar medium is not relevant at early times.
6. FORMATION OF POPULATION III STARS
The universe expands by a factor of 100 from recombination, say z = 1300, to Population III star formation, say z = 13. The background radiation produced by the collapsing perturbations cools proportionally and fills the expanded volume. This radiation is always coupled to the perturbations. Even when it is redshifted by a factor of 100, it is still absorbed by molecules in the perturbations.
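A one-line check of these numbers (ours): the expansion factor between the two epochs is (1+z<sub>rec</sub>)/(1+z<sub>PopIII</sub>), and the background photon frequencies are redshifted down by the same factor:

```python
z_rec, z_pop3 = 1300, 13
factor = (1 + z_rec) / (1 + z_pop3)
print(f"expansion factor: {factor:.0f}")   # ~93, the 'factor of 100' above
```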
Li and any heteronuclear molecules have lines in the visible and infrared. There are between 300,000 and 400,000 lines: electronic, vibrational-rotational, and rotational. The red-shifted background radiation produces an overpopulation of the excited levels. The excited levels can absorb radiation and then emit at higher frequencies that are not likely to be absorbed by the cooler surface. This mechanism allows the perturbation to get rid of excess energy from the collapse. There are likely to be fluoresences that couple the different species and produce energy redistributions. The line opacity may be enhanced by the high microturbulent velocity. Differential velocities from the collapse can reduce or enhance absorption and emission.
The perturbations range in mass from more than 100 M<sub>☉</sub> to 10<sup>6</sup> M<sub>☉</sub>. The perturbations can collapse only as fast as excess energy can escape in radiation. A small perturbation radiatively cools faster than a large perturbation because it has a larger surface to volume ratio. The outermost perturbations radiate mostly into open space between the galaxy-size perturbations. The smallest perturbations collapse to form, say, 100 M<sub>☉</sub> Population III stars.
7. FORMATION OF GLOBULAR CLUSTERS
Massive Population III stars are superluminous. They radiate about 10<sup>53</sup> ergs in 10<sup>6</sup> years and then explode as supernovas. These are the only Population III stars and only their dead supernova remnants now remain, amounting to only a small fraction of the mass of the universe. Because there is not enough time for larger perturbations to evolve, all other matter in the universe is contaminated by the supernovas and becomes Population II material.
No matter what the perturbation spectrum, the big perturbations will in general be surrounded by small perturbations. These might have masses as small as 100 M<sub>☉</sub>. In diameter, these are only 20 times smaller than a 10<sup>6</sup> M<sub>☉</sub> perturbation and 50 times smaller than a 10<sup>7</sup> M<sub>☉</sub> perturbation. The radiative acceleration from each Population III star contributes to the radiatively-driven implosion of all its neighboring perturbations into globular clusters. Four Population III stars tetrahedrally arranged may be sufficient to implode the largest perturbations.
Globular cluster formation happens in layers like an onion. The surface of a perturbation is compressed and contaminated by the Population III stars. It becomes optically thick and forms a layer of Population II stars and becomes optically thin again. Simple versions of this process for radiatively imploding bumps on the surface of a molecular cloud and for radiatively imploding a small cloud between two hot stars have been presented in a series of papers by Sandford, Whitaker, and Klein (Sandford, Whitaker, and Klein 1982; 1984; Klein, Sandford, and Whitaker 1983), Figure 3, but they never extrapolated the idea to the formation of a globular cluster. Any leftover material in the outer shell is driven inward. The layering process repeats inward until all the matter in a large perturbation is formed into stars. The stellar abundances and masses are determined by the number and proximity of the supernovas. The distribution function of these Population II masses is the initial mass function. The masses can range over the whole spectrum but because the Population II material has higher opacity than the Population III material, and because its collapse is helped along by external forces, the masses are smaller than the Population III masses and can even be quite small. However, the smallest Population II stars are still larger than the smallest (future) Population I stars which form easily because of high opacity gas and dust. There are no initial Population II brown dwarfs.
A globular cluster can be formed at any time in any population. The only requirement is the existence of hot stars surrounding and radiatively imploding a large cloud.
8. FORMATION OF GALAXIES
Asymmetries in the distribution of the Population III stars around each large perturbation produce a small, net globular cluster velocity. Since there are excess Population III stars at the surface of galaxy-size perturbations, the globular clusters near those boundaries will be accelerated away from the boundaries and will have velocities inward on the order of a fraction of a km s<sup>-1</sup>. This is the radiative trigger that leads to the gravitational implosion (violent relaxation) of the systems of globular clusters into elliptical galaxies. Figure 4 shows a schematic calculation of such violent relaxation. As galaxy-size perturbations have no symmetry, they have angular momentum and they spin up as they collapse.
At this point at z $`\sim `$ 10 we have a statistically uniform universe filled with elliptical galaxies. The elliptical galaxies are transparent and widely spaced, but any line of sight intersects many galaxies. For the first time the universe becomes transparent. The “microwave” background comes either from some subsequent event in galaxy-quasar evolution that produces tremendous power near 100$`\mu `$m, or from the pair annihilation of background neutrinos integrated from transparency until now, or from both.
All of the globular clusters in these elliptical galaxies are the same age. The globular clusters collide and gain internal energy and rapidly disintegrate. By today 99.9% of them have disintegrated. The clusters that are left are not typical or representative of the properties of the initial ensemble. They were the cold tail. They are not pure, having added and lost stars through their whole lives. The current members of one of these globular clusters are not necessarily siblings, coeval, or even Population II. There can be dark globular clusters in which all or almost all the stars are neutron stars and white dwarfs.
Both globular cluster formation and galaxy formation produce intergalactic Population II gas and stars as leftovers or as high velocity ejecta. These stars may now be main sequence dwarfs, luminous giants, white dwarfs, or neutron stars. Galaxy formation also produces intergalactic globular clusters because high velocity clusters can be ejected in the violent relaxation.
Figures 5 through 12 schematically describe galactic evolution.
If the initial mass functions of the globular clusters that form an elliptical galaxy have almost all low mass stars, the galaxy remains an elliptical galaxy forever. These galaxies have low luminosity until the giant branch is strongly populated. A few, more massive, stars lose enough mass to fill the galaxy with the tenuous gas that produces the Lyman $`\alpha `$ forest.
If the initial mass functions of the globular clusters have mostly high mass stars, the elliptical galaxy evolves into a spiral galaxy. Supernova remnants and the mass lost by intermediate mass supergiants collapse into a bulge and a disk, which spin up.
An intermediate case produces an irregular or “young” galaxy.
When there is a significant high mass tail, after some 20 million years, the whole elliptical galaxy fills with supernovas and supernova remnants. The galaxy fills with jumbled magnetic structures. The galaxy becomes opaque. The supernova remnants cannot orbit because of their large collision cross-sections. They collapse into a central bulge with a quasar at the center. The magnetic structures are swept in as well. If there is a process in all of this that produces submillimeter radiation, that radiation is the microwave background.
Since the supernova remnants have high abundances, the bulge gas has high abundances and must form high abundance stars. This can happen both in galaxies that are today elliptical or spiral. These initial quasars continue to be powered by infall of gas that is blown off intermediate mass stars when the stars climb the giant branch. This gas is low abundance Population II gas. It dilutes the supernova remnant gas. This gas forms the disk of spiral galaxies so that stars in the disk have abundances initially lower than bulge abundances. The oldest population of stars in the disk suffers many globular cluster collisions, so it is dispersed into a thick disk.
The quasars eventually run out of fuel. If later the fuel is replenished, say by galaxy-galaxy collisions, the quasar can re-ignite.
The activity that we have been describing takes place in the first 10<sup>9</sup> years. The time scales are set by orbital and collapse times, and by stellar evolutionary time scales. It takes, say, one orbital time to form the bulge and quasar, and a few orbital times for the mass loss infall to form the disk.
Since the disk is formed from mass-loss material from Population II stars in the halo, the mass of the disk gives a lower limit to the mass of the one- to six-solar-mass primordial Population II stars in the halo and to the number of white dwarfs. Each star loses its own mass less the mass of a white dwarf.
Since the central object and bulge are formed from Population II supernova remnants, the mass of the central object and bulge (less the equivalent volume of halo stars) give a lower limit to the mass of the, say, 7 solar mass and greater primordial Population II stars in the halo and to the number of neutron stars. Each star loses its own mass less the mass of the neutron star.
9. FORMATION OF D AND <sup>3</sup>HE
The initial Population II supernovas produce remnants with magnetic fields that accelerate cosmic rays. The halo fills with magnetic structures and cosmic rays until the supernova remnants collapse to the center to produce the quasar and bulge. The cosmic rays that are not dragged along with the magnetic fields then decay through normal collisional attrition.
The cosmic rays interact with the primordial neutrino background to undergo ladder transmutations to higher or lower elements. In particular a small fraction of <sup>4</sup>He cosmic rays are transformed into <sup>3</sup>He cosmic rays (<sup>4</sup>He + $`\overline{\nu }_e`$ $`\rightarrow `$ <sup>4</sup>H + e<sup>+</sup> = <sup>3</sup>H + n + e<sup>+</sup> $`\rightarrow `$ <sup>3</sup>He + n + e<sup>+</sup> + e<sup>-</sup> + $`\overline{\nu }_e`$, and similarly for $`\overline{\nu }_\mu `$ and $`\overline{\nu }_\tau `$) and perhaps into D (<sup>4</sup>He + $`\overline{\nu }_e`$ $`\rightarrow `$ <sup>4</sup>H + e<sup>+</sup> = <sup>2</sup>H + n + n + e<sup>+</sup>, if possible). Through collisions <sup>3</sup>He cosmic rays spall into D + p. Thus D and <sup>3</sup>He are Population II artifacts and their abundances are a measure of Population II supernova activity.
Massive, relatively abundant even element cosmic rays are partially transmuted to odd element cosmic rays ((Z,A) + $`\overline{\nu }_e`$ $`\rightarrow `$ (Z–1,A) + e<sup>+</sup>; (Z,A) + $`\nu _e`$ $`\rightarrow `$ (Z+1,A) + e<sup>-</sup>).
10. FORMATION OF VOIDS AND LARGE SCALE STRUCTURE
Next we consider radiatively-driven expansion. Primordial galaxies produce a tremendous amount of radiation. Any galaxy that is a spiral now originally had most of its mass in massive stars. A 10<sup>12</sup> M<sub>☉</sub> spiral galaxy produces, say, 10<sup>11</sup> supernovas yielding 10<sup>62</sup> ergs. The precursor stars radiate even more during their lifetimes, say 10<sup>63</sup> ergs. There might be 3x10<sup>11</sup> intermediate mass stars that radiate 10<sup>63</sup> ergs and end up as white dwarfs. In addition the quasar itself produces 10<sup>46</sup>–10<sup>47</sup> ergs s<sup>-1</sup> for say 3x10<sup>8</sup> years or about 10<sup>63</sup> ergs. There is also a great deal of energy from the collapse that heats the gas and is eventually radiated away, partly by the quasar. If half the large galaxies are spirals, it is easy to produce 10<sup>51</sup> ergs M<sub>☉</sub><sup>-1</sup> averaged over all galaxies. [Neutrinos produced by the supernovas add up to a similar amount of energy.]
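The budget can be tallied explicitly; the sketch below (ours) adds the stated contributions for one spiral and averages over all galaxies assuming half of them are spirals:

```python
# Radiative output of a 1e12 Msun spiral, figures as quoted in the text (erg).
contributions = {
    "supernovae (1e11 x 1e51 erg)":       1e62,
    "supernova precursor stars":          1e63,
    "intermediate-mass stars":            1e63,
    "quasar (~1e46-47 erg/s for 3e8 yr)": 1e63,
}
total = sum(contributions.values())
per_solar_mass = 0.5 * total / 1e12      # half of large galaxies are spirals
print(f"total ~ {total:.1e} erg; average ~ {per_solar_mass:.1e} erg per Msun")
```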
During the first billion years galaxies are much closer together than now. If that era corresponds to redshifts of say z=10 to z=5, galaxies are between 11 and 6 times closer than now. Statistically it is possible for a large group of galaxies (say 10<sup>5</sup>) to be optically thick to their own radiation (except for radio). Any photon emitted at the center passes through so many spiral galaxies that it must be absorbed, Figure 13. Thus the clump of galaxies expands from its own radiation pressure. Galaxies with high projected opacity-to-mass ratios, perhaps face-on spirals, are accelerated the most, followed by all the other spirals. The elliptical galaxies are dragged along by gravitational attraction. A low density region forms and continues to expand from radiation pressure as long as the galaxies are very bright and until the clump of galaxies becomes optically thin. The expansion of the universe eventually guarantees the latter. Eventually the role of radiation becomes insignificant compared to gravity.
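A rough numerical sketch of this optical-thickness condition (ours; the galaxy radius, mean separation, and clump size are placeholder values, and galaxies are treated as opaque disks):

```python
import math

# Number of galaxies a ray from the clump center intersects before escaping:
# N ~ n * sigma * R for number density n and cross-section sigma = pi*r**2.
N_gal = 1e5            # galaxies in the clump (text's example)
sep = 100.0            # mean galaxy separation in kpc (assumed, early epoch)
r_gal = 30.0           # effective opaque radius of a spiral in kpc (assumed)

R = sep * N_gal ** (1.0 / 3.0)            # clump radius, kpc
n = N_gal / (4.0 / 3.0 * math.pi * R**3)  # number density, kpc^-3
N_hit = n * math.pi * r_gal**2 * R        # galaxies intersected along R
# A few opaque disks per ray already makes the clump optically thick.
print(f"clump radius ~ {R:.0f} kpc, galaxies crossed per ray ~ {N_hit:.1f}")
```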
Regős and Geller (1991) have shown that some of the small, low-density expanding regions in a uniform background will continue to expand gravitationally as the universe expands, Figure 14. They form voids that collide and merge. The collisions produce galaxy clusters, streaming in the void walls, and eventually the large scale structure that we see today.
11. SUMMARY
A Big Bang universe consisting, before recombination, of a gas of H, D, <sup>3</sup>He, <sup>4</sup>He, <sup>6</sup>Li, and <sup>7</sup>Li ions, electrons, photons, and massless neutrinos at a density sufficient to produce a flat universe, will evolve into the universe as we now observe it. Evolution during the first billion years is controlled by radiation.
The universe has evolved as follows since recombination:
1) There were pre-existing galaxy-size perturbations.
2) Recombination halved the gas pressure and removed the outward radiative acceleration from these perturbations thereby producing an inward impulse. The impulse generated waves that interfered and shocked to fill the large perturbations with globular-cluster-size perturbations.
3) The smallest perturbations formed superluminous Population III stars whose radiation caused
4) larger perturbations to implode and form globular clusters of Population II stars, and then
5) systems of globular clusters suffered radiatively-triggered collapse (violent relaxation) into elliptical galaxies, some of which
6) evolved to form quasars and spirals that
7) gave off so much radiation that, in some places, statistically, voids were formed by radiation pressure, and then
8) void collisions and void walls produced clusters of galaxies and the large scale flows and structure that we see today.
9) The microwave background radiation is recent, younger than the galaxies.
The number of Population III stars was very small and they all exploded so that only remnants are left. Essentially all matter has been processed in stars. The interstellar medium was produced by stars. The intergalactic medium was produced by galaxies. It is not primordial.
All spiral and irregular galaxies that have not been damaged by collisions or interactions have large, massive, elliptical halos.
Figure 15 is the table of contents for our galaxy. Our galaxy has a halo containing about 10<sup>11</sup> neutron stars, 3x10<sup>11</sup> white dwarfs, visible K and M stars, and 10<sup>11</sup> slightly evolved low mass stars (all numbers to astronomical accuracy). It also has over 10<sup>2</sup> coeval globular clusters that are the remnants of 10<sup>6</sup> primordial globular clusters from which our galaxy was formed. There is a central, inactive, quasar surrounded by a bulge of high abundance Population II stars. Both were made from the first Population II supernova remnants which collapsed from the halo to the center of the galaxy. The disk was made from gas lost by intermediate mass Population II stars in the halo when they evolved up the giant branch, and that gas subsequently collapsed into the disk and spun up to conserve angular momentum. Thus the disk has lower abundances than the bulge, even though it was formed later. There were still many globular clusters at the time of disk formation so many disk stars were scattered by collisions with globular clusters and formed a thick disk population. There are stars in the halo and globular clusters that were formed in the disk or bulge and were accreted by globular clusters and carried into the halo. The stars in globular clusters need not be siblings, coeval, or Population II. Non-primordial globular clusters could have been formed in the bulge, the disk, or in collapsing gas clouds.
During the first billion years evolution was controlled by changing matter into radiation in massive stars. Gravity became dominant only after these initial bursts of radiation were exhausted.
This work was supported in part by NASA grants NAG5-824 and NAGW-1486.
REFERENCES
Klein, R.I., Sandford, M.T.,II, & Whitaker, R.W. 1983, ApJ, 271, L69
Kurucz, R.L. 1992, Comments on Astrophysics, 16, 1-15.
Kurucz, R.L. 1995, ApJ, 452, 102-108.
Lubowich, D.A., Pasachoff, J.M., Galloway, R.P., Kurucz, R.L., and Smith, V.V. 1994, BAAS, 26, 1479.
Regős, E. & Geller, M.J. 1991, ApJ, 377, 14-28.
Sandford, M.T.,II, Whitaker, R.W., & Klein, R.I. 1982, ApJ, 260, 183-201.
Sandford, M.T.,II, Whitaker, R.W., & Klein, R.I. 1984, ApJ, 282, 178-190.
Sasselov, D. and Goldwirth, D. 1995, ApJL, 444, L5-L8.
FIGURE CAPTIONS
Figure 1. Big Bang abundances work if the density is chosen to close the universe. Observations: He, Sasselov and Goldwirth (1995); D upper limit, Lubowich et al. (1994); Li, Kurucz (1995).
Figure 2. Schematic globular-cluster-size perturbations superposed on top of galaxy-size perturbations.
Figure 3. Simulations of radiatively-driven implosions of Population I clouds indicate the plausibility of forming a globular cluster by surrounding a cloud with hot stars.
Figure 4 qualitatively demonstrates that small radiative accelerations are sufficient to trigger the collapse of a universe full of globular clusters into a universe full of elliptical galaxies. I borrowed the program from Regős that she used to model void formation (Regős and Geller 1991). The universe is periodically tessellated into cubes with a constant density of globular clusters, 128<sup>3</sup> per cube. Each cube is subdivided into 8 parallelepipeds as shown in the upper left. This is an arbitrary choice intended not to look like galaxy precursors. All the surfaces of all the parallelepipeds are given a small inward velocity as would be produced by excess supernovas at the surfaces. The initial condition is zero gravitational force. The small motion of the surface globular clusters is enough to cause violent relaxation into a galaxy, except in one case where neighboring galaxies cause the smallest object to disintegrate and then assimilate its remains.
Figure 5. Schematic evolution of galaxy of 1/2 M<sub>☉</sub> stars.
Figure 6. Schematic evolution of galaxy of 1 M<sub>☉</sub> stars.
Figure 7. Schematic evolution of galaxy of 10 M<sub>☉</sub> stars.
Figure 8. Schematic evolution of galaxy with distribution function peaking at 2/3 M<sub>☉</sub> stars.
Figure 9. Schematic evolution of galaxy with distribution function peaking at 1 M<sub>☉</sub> stars.
Figure 10. Schematic evolution of galaxy with distribution function peaking at 10 M<sub>☉</sub> stars.
Figure 11. Evolution of our galaxy.
Figure 12. Isolated galaxy classification as a function of galaxy mass and of stellar mass distribution function peak.
Figure 13. The galaxies are so close together that for some large samples any ray out from the center intersects enough spiral galaxies to be absorbed. The collection of galaxies is optically thick.
Figure 14. Regős and Geller (1991) showed that starting with a uniform density universe, one could evolve voids and large scale structure by removing half the matter from small spheres and redistributing it in expanding shells.
Figure 15. Table of contents of our galaxy.
# A $`ROSAT`$ HRI study of the open cluster NGC 3532
## 1 Introduction
The $`ROSAT`$ PSPC and HRI detectors have provided X-ray images for a large number of open clusters sampling the age range from $`\sim `$ 20 to 600 Myr (e.g., Randich 2000 and references therein; Jeffries 1999 and references therein; see also Belloni 1997, for a review on older open clusters). The data have made it possible to investigate in great detail the dependence of X-ray activity on mass, age, and rotation and, in particular, to check the validity of the rotation–activity–age paradigm. The overall picture emerging from $`ROSAT`$ generally confirms that there is a tight dependence of X-ray activity on rotation (or on the so-called Rossby number, the ratio of the rotation period over the convective turnover time – e.g., Noyes et al. 1984) and, through rotation, on age: the level of X-ray activity increases with increasing rotation and, since stars spin down as they age, the average or median X-ray luminosity decays with increasing age. However, the X-ray luminosity (or X-ray over bolometric luminosity) does not depend simply on some power of the rotational rate, and the activity–age dependence cannot be described by a Skumanich–type power law. In addition, a few puzzling results have arisen from $`ROSAT`$ data. For example, the finding that the bulk of the population of Praesepe solar-type stars has a significantly lower X-ray luminosity than the coeval Hyades and Coma Berenices clusters (Randich & Schmitt 1995; Randich et al. 1996) has cast doubt on the common thinking that a unique activity–age relationship holds, and, consequently, that the X-ray properties of a cluster of a given age are representative of all clusters of the same age. A study by Barrado y Navascués et al. (1998) seems to exclude that this result is due to a strong contamination of the Praesepe sample by cluster non-members; at the same time, $`ROSAT`$ observations of NGC 6633 suggest that this cluster, which is coeval to the Hyades and Praesepe, is more Praesepe–like than Hyades–like (Franciosini et al. 2000; Totten et al. 2000). We also mention that the comparison of the Pleiades (120 Myr) with NGC 6475 (200 Myr) and with other clusters with ages of the order of 100–200 Myr also suggests that a tight/unique age–activity relationship may not hold (e.g. Randich 2000). The issue of the uniqueness of the activity–age relationship is therefore not at all settled. In addition to optical studies that should ascertain cluster membership and provide complete (or close to complete) lists of members and better defined cluster ages, additional, and possibly deeper, X-ray surveys of samples of coeval clusters are clearly required to further address this problem.
We present here a $`ROSAT`$ study of the NGC 3532 cluster: NGC 3532 is a very rich southern open cluster with an estimated age of 200–350 Myr (Fernandez & Salgado 1980; Johansson 1981; Eggen 1981; Koester & Reimers 1993; Meynet et al. 1993); it is therefore a good candidate to investigate the X-ray activity–age–rotation relationship at ages intermediate between the Pleiades and the Hyades, where, to our knowledge, X-ray studies exist for only one cluster (NGC 6475). The most likely value for the reddening of NGC 3532 is $`E(B-V)=0.04`$ (Fernandez & Salgado 1980; Eggen 1981; Schneider 1987; Meynet et al. 1993); the metallicity of the cluster has been estimated to be close to solar (\[Fe/H\] $`-0.02`$; Clariá & Lapasset 1988). The cluster is located at very low galactic latitude ($`b=+1.43`$ deg). Distance determinations range from $`405_{-55}^{+76}`$ pc (from Hipparcos; Robichon et al. 1999) to 500 pc (Eggen 1981); in this paper the most recent value of 405 pc by Robichon et al. (1999) has been adopted.
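For later reference (our own arithmetic, not part of the original analysis), the adopted distance and reddening translate into a distance modulus as follows, assuming the standard extinction ratio $`A_V=3.1E(B-V)`$:

```python
import math

# Distance modulus and extinction for the adopted cluster parameters.
d_pc = 405.0                          # adopted distance (Robichon et al. 1999)
EBV = 0.04                            # adopted reddening
dm0 = 5 * math.log10(d_pc / 10.0)     # true distance modulus
A_V = 3.1 * EBV                       # assumed Galactic extinction law
print(f"(m-M)_0 = {dm0:.2f}, A_V = {A_V:.2f}, (m-M)_V = {dm0 + A_V:.2f}")
# -> (m-M)_0 = 8.04, A_V = 0.12, (m-M)_V = 8.16
```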
## 2 Optical catalog
The first detailed study of NGC 3532 was carried out by Koelbloed (1959), who obtained photoelectric or photographic photometry and proper motions for 255 stars down to a limiting magnitude $`V\sim 11.7`$. A new proper motion survey of these stars was later performed by King (1978). The most extensive study of this cluster is the photometric study by Fernandez & Salgado (1980), who obtained photoelectric and photographic photometry for 700 stars (including nearly all Koelbloed’s stars) down to a limiting magnitude $`V=13.5`$. Photoelectric photometry for another 24 stars down to $`V=18.3`$ was obtained by Butler (1977). We mention in passing that only 15 G–type and 7 K–type dwarf cluster members are present in the total sample of 724 stars. Additional photometric studies of these stars have been performed by Johansson (1981; UBV, 16 stars), Eggen (1981; Strömgren, 33 stars), Wizinowich & Garrison (1982; UBVRI, 68 stars), Schneider (1987; Strömgren, 164 stars) and Clariá & Lapasset (1988; UBV and DDO, 12 stars). Radial velocities are available for about a hundred stars from the studies by Harris (1976) and by Gieseking (1980, 1981). Gieseking (1981) derived a mean cluster radial velocity $`v_\mathrm{r}=4.6\pm 2`$ km/s.
Our input catalog is based on the lists of stars by Fernandez & Salgado (fs80 (1980)) and Butler (butler77 (1977)). From these lists, we selected as probable members those stars with radial velocity, when available, within 4 km/s (i.e. $`2\sigma `$) of the cluster mean $`v_\mathrm{r}`$, or with membership probability from proper motions greater than 80%, or which were suggested as members in photometric studies. We rejected stars that would be considered members according to either radial velocity or proper motion, but with photometry inconsistent with cluster membership. For stars with no individual membership information, but with UBV photometry available, we accepted as possible members those falling in a band between $`0.2^m`$ below and $`0.7^m`$ above the cluster main sequence.
The resulting catalog contains 248 probable and possible members; 174 of them, including 4 giants, are located within 17 arcmin of the $`ROSAT`$ nominal pointing position. In Fig. 1 we show the $`V`$ vs. $`(B-V)`$ C–M diagram for the probable and possible members in our field of view. It is evident from the figure that the majority of the known members are early-type stars. Except for three very late possible cluster members, the cluster main sequence is truncated at $`V=13.5`$, corresponding to G–type stars; only 13 G–type and 5 K–type members (excluding giants) are present in our catalog, compared to 104 B–A and 48 F stars. We also mention that most of the stars with spectral type later than F5 were selected as members only on the basis of photometry.
## 3 Observations and data analysis
The X-ray data used in this study have been retrieved from the $`ROSAT`$ public archive (obs. IDs 202075h, 202075h-1, 202075h-2). NGC 3532 was observed with the HRI during three separate pointings on January 21, 1996, July 28, 1996, and June 19, 1997. The net exposure times were respectively 30.5 ksec, 37 ksec, and 34 ksec. The nominal pointing position for all observations was RA $`=11^\mathrm{h}5^\mathrm{m}43.2^\mathrm{s}`$, DEC $`=-58°43′12″`$ (J2000).
The analysis was performed using EXSAS routines within MIDAS. We first checked the alignment of the three single images by comparing the positions of common sources; since the shifts between the images are very small (less than 1 image pixel), we did not apply any correction to the data. The three Photon Events Tables (PET) were then merged into a single PET, from which an image with a total exposure time of 101.5 ksec was generated. We then followed the standard steps for data reduction. A background map was created from the global image by removing outstanding sources previously detected with the local detection algorithm and then smoothing with a spline filter. Source detection was performed using the Maximum Likelihood (ML) algorithm. The ML algorithm was first run on a provisional list of sources obtained from the Local and Map Detection, resulting in the detection of 47 sources with ML $`>10`$ (corresponding to a significance of 4$`\sigma `$), lying within 17 arcmin from the image center; two additional sources (nos. 48 and 49) were detected above the same threshold by running the ML on the input optical catalog. Of these sources, 15 have at least one cluster member counterpart within 10 arcsec, 13 have an optical counterpart which is probably a cluster non-member, and 21 do not have any known optical counterpart (additional positions of non-member stars from the survey of Andersen & Reiz anders83 (1983) and from the Guide Star Catalog have also been considered). The X-ray and optical properties of the sources with an optical counterpart are listed in Tables 1 (cluster members) and 2 (non-members); the list of unidentified sources is given in Table 3. For the cluster members without associated X-ray sources we estimated 3$`\sigma `$ upper limits from the background count rates at the optical position.
We note that sources no. 27 in Table 1 and nos. 24 and 31 in Table 2 are barely visible above the background on the X–ray image (as indicated also by their low ML) and therefore may not be real. However, since two of them are identified with cluster non-members (nos. 24 and 31) and the other with an A-type cluster member (no. 27), including or excluding them from our source list would not change our main results/conclusions.
We estimated the number of spurious identifications due to chance coincidences, following Randich et al. (randich95a (1995)). Such a number ($`N_\mathrm{s}`$) is given by:
$$N_\mathrm{s}=D_\mathrm{c}\times N_\mathrm{X}\times A_{\mathrm{id}.}$$
(1)
where $`D_\mathrm{c}`$ is the density of cluster candidates within the surveyed area (i.e., the number of clusters candidates divided by the HRI field of view), $`N_\mathrm{X}`$ is the number of X–ray sources, and $`A_{\mathrm{id}.}`$ is the area of our identification circle. Considering $`D_\mathrm{c}=174/(289\times \pi )`$ arcmin<sup>-2</sup>, $`N_\mathrm{X}=49`$, and $`A_{\mathrm{id}.}=0.028\times \pi `$ arcmin<sup>2</sup>, we obtain $`N_\mathrm{s}=0.83`$, i.e., less than one spurious identification.
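As a quick sanity check, Eq. (1) can be evaluated directly with the numbers quoted above; the snippet below is a minimal sketch of that arithmetic (variable names are ours).

```python
# Expected number of chance coincidences, Eq. (1): N_s = D_c * N_X * A_id.
import math

n_candidates = 174                     # optical candidates within 17 arcmin
fov_area = math.pi * 17.0 ** 2         # surveyed area, arcmin^2
D_c = n_candidates / fov_area          # candidate surface density, arcmin^-2

N_X = 49                               # number of X-ray sources
A_id = math.pi * (10.0 / 60.0) ** 2    # 10 arcsec identification circle, arcmin^2

N_s = D_c * N_X * A_id
print(f"N_s = {N_s:.2f}")              # ~0.8, i.e. less than one spurious match
```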
X–ray luminosities were derived as follows. We assumed a conversion factor (CF) of $`2.6\times 10^{-11}`$ erg cm<sup>-2</sup> sec<sup>-1</sup> per HRI count sec<sup>-1</sup>, estimated using PIMMS (version 2.7) assuming a Raymond-Smith plasma with $`T=10^6`$ K and a column density $`\mathrm{log}N_H=20.3`$ (with $`N_H`$ in cm<sup>-2</sup>); higher temperatures do not significantly affect the value of the conversion factor, and the same is true if a two-temperature model is assumed. X-ray luminosities for both detections and upper limits were then computed assuming a cluster distance of 405 pc. The resulting sensitivity in the center of the field is $`L_\mathrm{x}\simeq 3.6\times 10^{28}`$ erg sec<sup>-1</sup>, a factor $`\sim `$ 2 higher than the limiting sensitivity of the X-ray studies of the coeval cluster NGC 6475 (Prosser et al. prosser95 (1995); James & Jeffries james97 (1997)). Had we assumed a 10% larger distance to the cluster ($`d=450`$ pc), the X-ray luminosities and upper limits would have been $`\sim `$20% larger, which would not introduce any significant change in our results. Note that, due to the relatively short exposure times of the three individual images, we are not able to put stringent constraints on source variability. We just mention that for the few X-ray sources that were detected in the single images we obtained count rates very similar to the ones that we inferred from the global image.
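For orientation, the conversion just described can be written out explicitly. The sketch below assumes the quoted conversion factor and distance; the input count rate is an illustrative value chosen to roughly reproduce the quoted sensitivity, not a measured one.

```python
# Count rate -> X-ray luminosity: L_x = CF * rate * 4*pi*d^2.
import math

CF = 2.6e-11                    # erg cm^-2 per HRI count (PIMMS, R-S plasma)
d_cm = 405.0 * 3.086e18         # adopted distance of 405 pc, in cm

def l_x(count_rate):
    """X-ray luminosity (erg/s) for an HRI count rate (counts/s)."""
    return CF * count_rate * 4.0 * math.pi * d_cm ** 2

# An assumed limiting rate of ~7e-5 counts/s gives L_x ~ 3.6e28 erg/s,
# consistent with the sensitivity quoted in the text.
print(f"{l_x(7e-5):.2e} erg/s")
```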
## 4 Results
As mentioned in the previous section, 15 sources have been identified with cluster members. For two sources (nos. 22 and 40) two cluster members are found within the identification radius. Our analysis resulted in the detection of 11 F–type cluster stars out of 48 (detection rate 23%), one G–type dwarf out of 13 (detection rate 8%), and one of the four giants. None of the five K dwarfs in our field has been detected. Four A–type stars were also detected. The detected stars are indicated as filled symbols in Fig. 1. The issue of X-ray emission from early-type (i.e., earlier than F0) stars, which, due to the lack of a convective zone, cannot generate magnetic fields (and thus magnetic activity) via the dynamo process, has been discussed at length in several papers (e.g., Micela et al. micela96 (1996) and references therein); the most likely possibility is that their X-ray emission is due to unseen binary companions. Therefore, we focus the following discussion on solar-type (namely, F and G-type) stars only.
As to the X-ray sources identified with non-members, they do not warrant much further discussion. Most of them, as indicated by their position on the C–M diagram, are most likely G/early–K type foreground stars. Given that the cluster is basically located on the galactic plane, it is not surprising to find such a large contamination from cluster non-members among X-ray sources.
### 4.1 Comparison with other clusters
In Figs. 2a–2b we compare the $`\mathrm{log}L_\mathrm{x}`$ vs. $`(B-V)_0`$ distribution of NGC 3532 with those of the supposedly coeval NGC 6475 cluster and the older Hyades. The comparison with NGC 6475 (Fig. 2a) suggests that the bulk of NGC 3532 F and G-type stars may be less X-ray luminous than their NGC 6475 counterparts. The few detections have X-ray luminosities comparable to the luminosities of similar stars in NGC 6475, but the majority of NGC 3532 solar-type stars were not detected; most importantly, the upper limits we derived for a very large fraction of the late–F and G–type stars in NGC 3532 are as low as or even below the luminosities of the least X-ray luminous stars of NGC 6475.
Given the low number of detections, a direct comparison of the X-ray luminosity distribution function (XLDF) of NGC 3532 with the XLDF of the coeval cluster NGC 6475 would not be of much help. In Fig. 3 we show instead the XLDF for G–type stars with $`0.59\le (B-V)_0<0.81`$ in NGC 6475, with vertical bars indicating the upper limits and the one detection in this spectral range for NGC 3532. The figure seems to confirm that the population of solar-type stars in NGC 3532 is less X-ray active than that of NGC 6475. Such a conclusion is supported by a statistical comparison of the X-ray properties of G dwarfs in the two clusters, carried out using various two-sample tests as implemented in the Astronomy SURVival Analysis (asurv) Ver. 1.2 software package (see Feigelson & Nelson feig85 (1985); Isobe et al. isobe86 (1986)); the tests indicate that the hypothesis that NGC 3532 and NGC 6475 solar-type stars are drawn from the same parent population can be rejected with a confidence level higher than 99.9 %. In addition, considering the XLDF of NGC 6475 and using the method described by Randich et al. (randich98 (1998)) for IC 4756, we estimate that the probability of getting the observed distribution of upper limits for NGC 3532 if its XLDF were the same as that of NGC 6475 is virtually 0.
Several possibilities can explain our results: a) first, and most obviously, the reddening to the cluster could be significantly wrong; a higher reddening would mean a higher column density of absorbing material and would eventually imply that our upper limits (as well as the X–ray luminosities of the detected stars) are underestimated. However, all the sources in the literature, using different methods, agree in deriving a reddening to the cluster $`E(B-V)\lesssim 0.1`$, with the most quoted value being in fact $`E(B-V)=0.04`$. If we assume a reddening as high as $`E(B-V)=0.1`$ (Johansson johan81 (1981)), we get a factor 1.5 higher CF for $`T=10^6`$ K (CF $`=4.0\times 10^{-11}`$ instead of $`2.6\times 10^{-11}`$ erg cm<sup>-2</sup> sec<sup>-1</sup> per HRI count sec<sup>-1</sup>) and the same CF for higher temperatures; similar results are found using two-temperature models. Therefore, it seems rather unlikely that the use of an incorrect value for the reddening is the major cause of the discrepancy between NGC 6475 and NGC 3532; b) second, NGC 6475 is an X-ray selected sample, i.e. most of its solar–type and lower mass members were not known until X–ray surveys of the cluster were carried out and they were detected in X-rays. Therefore, we cannot exclude that a low-activity population (with X-ray luminosities below 10<sup>29</sup> erg sec<sup>-1</sup> – see Fig. 2a) exists that was not detected in the two $`ROSAT`$ surveys of this cluster. The comparison of the XLDF of NGC 6475 with that of the Pleiades or other young clusters indeed suggests that this is a very likely possibility. Such a population would contribute to the low luminosity tail of the NGC 6475 distribution function; nevertheless, Fig. 2a indicates that, as a matter of fact, NGC 3532 also lacks the high luminosity population that is present in NGC 6475. We conclude that, although the presence of an X-ray faint population in NGC 6475 would partly reduce the inconsistency between the two clusters, it could not completely cancel it, unless one assumes that the low X-ray luminosity population of NGC 6475 is 5–10 times more numerous than the high luminosity one; c) the NGC 3532 sample is incomplete and the membership for most of the late-type cluster members is based on photometry only. Therefore, on the one hand, our optical sample could be highly contaminated by non-members and, on the other hand, several other optically unknown members could exist. If all or most of the 21 X-ray sources without a known optical counterpart turn out to be solar-type (or later) cluster members and, at the same time, part of the optically selected members turn out to be non-members, the discrepancy between NGC 6475 and NGC 3532 would possibly be solved. The 21 unidentified X-ray sources, if located at the cluster distance, would have X-ray luminosities in the range $`1.1\times 10^{29}-1.0\times 10^{30}`$ erg sec<sup>-1</sup>; if all these sources were G–type cluster members, the XLDF for NGC 3532 would indeed have a median $`\mathrm{log}L_\mathrm{x}=29.3`$, slightly lower than the median for NGC 6475 (29.4).
Therefore we cannot exclude that the results presented here are due, at least in part, to the incompleteness of the presently known optical cluster sample; nevertheless, if this were true, it would be difficult in any case to explain why virtually all the currently known solar-type cluster members are X-ray faint; d) if neither point b) nor c) (nor both together) were proven to explain entirely why NGC 3532 is less X-ray luminous than NGC 6475, then the conclusion could be drawn that there is a real difference between the X-ray properties of the two clusters. In this case, two hypotheses could be made: i) NGC 3532 is actually older than NGC 6475; ii) NGC 6475 and NGC 3532 are about coeval, and our result represents an additional piece of evidence that the age–activity relationship is not unique. Fig. 2b indeed indicates that the X-ray properties of NGC 3532 may be more similar to those of the Hyades than to those of NGC 6475. Using again the two-sample tests, we find that the hypothesis that NGC 3532 and Hyades solar-type stars are drawn from the same parent population can be excluded with a confidence level ranging between 95 and 98 %, depending on the adopted test. We mention that the age of NGC 3532 has been generally estimated using C–M diagram fitting or, in two cases, from the magnitude of the turn-off. As mentioned in the introduction, different methods result in an age between 200 Myr (Fernandez & Salgado fs80 (1980); Johansson johan81 (1981)) and 350 Myr (Eggen eggen81 (1981)); the most recent determinations give $`\sim 300`$ Myr (Koester & Reimers koester93 (1993); Meynet et al. meynet93 (1993)). Note that Meynet et al. (meynet93 (1993)) using the same method/isochrones derived an age of $`\sim 220`$ Myr for NGC 6475; it seems, therefore, that NGC 3532 might be slightly older than NGC 6475, but not as old as the X-ray data would suggest.
## 5 Conclusions
We have analyzed $`ROSAT`$ archive data of the open cluster NGC 3532. The comparison of the X-ray properties of solar-type stars in the cluster with those of the supposedly coeval NGC 6475 cluster indicates that NGC 3532 is considerably X-ray underluminous with respect to NGC 6475. If this result is not due to selection effects and biases in the two cluster samples, it would provide an additional piece of evidence that the X-ray activity–age relationship is not unique and that other parameters, in addition to rotation, determine the level of coronal emission. However, before such a conclusion can be accepted, additional X-ray and optical observations should be performed. Namely, I. an additional X-ray survey of NGC 6475 should be carried out; the survey should be deeper than the $`ROSAT`$ ones so that, if present, an X-ray faint population of cluster members could be detected; II. additional photometric and spectroscopic studies of NGC 3532 should be carried out in order to confirm cluster membership for the optical candidates known at present and to detect still unidentified solar-type and lower mass stars in the cluster. These studies would also provide information on rotation for cluster members; III. if possible, an effort should also be made, once more low-mass cluster members are known, to provide a definitive estimate of the cluster age, also using low main-sequence fitting.
Besides the 15 cluster members, the X-ray survey resulted in the detection of 13 foreground/background stars – which is not surprising given the low cluster galactic latitude – and of 21 objects without any known optical counterparts. Priority should be given to optical observations aimed at determining the nature of these sources, and, in particular, at ascertaining whether they are cluster members or not.
###### Acknowledgements.
We thank the referee, Dr. F. Verbunt, for useful comments and suggestions.
# New paradoxical games based on Brownian ratchets
## Abstract
Based on Brownian ratchets, a counter-intuitive phenomenon has recently emerged – namely, that two losing games can yield, when combined, a paradoxical tendency to win. A restriction of this phenomenon is that the rules depend on the current capital of the player. Here we present new games where all the rules depend only on the history of the game and not on the capital. This new history-dependent structure significantly increases the parameter space for which the effect operates.
In the early 1990’s it was shown that a Brownian particle in a periodic and asymmetric potential moves to the right (say) in a systematic way when the potential is switched on and off, either periodically or randomly . This so-called flashing ratchet is in the class of phenomena known as Brownian ratchets . The flashing ratchet can be viewed as the combination of two dynamics: Brownian motion in an asymmetric potential and Brownian motion on a flat potential. In each of these two cases, the particle does not exhibit any systematic motion. However, when they are alternated the particle moves to the right. The effect persists even if we add a uniform external force pointing to the left. In that case, the two dynamics discussed above yield motion to the left but when they are combined, the particle moves to the right.
It has recently been shown, in the seminal papers , that a discrete-time version of the flashing ratchet can be interpreted as simple gambling games. Here we have two losing games which become winning when combined. These games are the simplest instance of a paradoxical mechanism which, we believe, can be present in many situations of interest. The apparent paradox points out that if one combines two dynamics in which a given variable decreases, the same variable can increase in the resulting dynamics. Examples of related phenomena include enzyme transport analyzed by a four-state rate model , finance models where capital grows by investing in an asset with negative typical growth rate , stability produced by combining unstable systems , counter-intuitive drift in the physics of granular flow , the combination of declining branching processes producing an increase and counter-intuitive drift in switched diffusion processes in random media .
The games originally described in are expressed in terms of tossing biased coins. The games rely on a state-dependent rule based on the player’s capital and two losing games can surprisingly combine to win. This effect was shown to be essentially a discrete-time Brownian ratchet . This is of interest to information theorists who have long studied the problem of producing a fair game from biased coins and winning games from fair games , inspired by the work of von Neumann – the games we are discussing go a step further, demonstrating a winning expectation produced from losing games and have recently been analyzed from the point of view of information theory . Seigman has reinterpreted the capital of the games in terms of electron occupancies in energy levels, recasting the problem in terms of rate equations. Similarly, Van den Broeck et al have likened the analysis of the transition probabilities of the games to Onsager’s treatment of reaction rates in circular chemical reactions . It has been suggested in that an area of interest to quantum information theory would be to recast the games in term of quantum probability amplitudes along the lines of . Quantum ratchets have now been experimentally realised and thus quantum game theory based on ratchets is of interest.
However, one of the limitations of the game paradox and its applicability to further situations is that it relies on a modulo rule based on the capital of the player. The modulo arithmetic rule is quite natural for an interpretation of the paradox in terms of energy levels, say; however, for processes in biology and biophysics it is unnatural. Applicability of the paradox to population genetics, evolution and economics has been suggested and thus a desirable version of the paradox would be one with rules independent of capital.
In this letter we present a new interpretation of the paradox in terms of good and bad biased coins which are played more or less often when the two games are combined. This interpretation allows us to introduce an important modification to the original games, namely, games which do not depend on the capital but only on the recent history of wins and losses.
The two original games are as follows. The player has some capital $`X(t)`$, $`t=0,1,2,\ldots `$ In game A the capital is increased by one with probability $`p`$ and decreased by one with probability $`1-p`$. In game B, the rules are:
| Capital | Prob. of win | Prob. of loss |
| --- | --- | --- |
| $`X(t)`$ a multiple of 3 | $`p_1`$ | $`1-p_1`$ |
| $`X(t)`$ not a multiple of 3 | $`p_2`$ | $`1-p_2`$ |
Here “win” means increasing the capital by one and “loss” decreasing it by one. For the choice $`p=1/2-ϵ`$, $`p_1=1/10-ϵ`$, and $`p_2=3/4-ϵ`$, with $`ϵ>0`$, the two games have a tendency to lose. More precisely, $`\langle X(t)\rangle `$ is a decreasing function of the number of runs $`t`$. However, if in each run we randomly choose the game we play, then, for $`ϵ`$ small enough, $`\langle X(t)\rangle `$ is an increasing function of $`t`$.
An explanation of this paradox is as follows. First, let us imagine the above rules as implemented by three biased coins, $`A`$, $`B_1`$ and $`B_2`$, with probability for tails $`p`$, $`p_1`$ and $`p_2`$, respectively. We see that $`A`$ and $`B_1`$ are “bad coins,” whereas $`B_2`$ is a “good coin” for the player. When game B is played alone, at first sight one would say that $`B_1`$ is used one third of the time. However, this is not the case. When the capital is a multiple of three, $`X(t)=3n`$, there is a high probability of losing, i.e., $`X(t+1)=3n-1`$ is the most likely value for the capital at $`t+1`$. If this is the case, we have to use coin $`B_2`$ in the $`t+1`$ run and the most likely outcome is now a win. Therefore, the most likely capital at $`t+2`$ is again $`X(t+2)=3n`$. We see that the probability of $`X(t)`$ being a multiple of three is bigger than $`1/3`$, due to the very rules of game B. The precise value of the equilibrium probability can be calculated by defining the Markov process $`Y(t)\equiv X(t)\mathrm{mod}3`$, which only takes on three values, $`Y(t)=0,1,2`$. The stationary distribution for $`Y(t)`$, when $`ϵ=0`$, is given by: $`\pi _0=\frac{5}{13};\pi _1=\frac{2}{13};\pi _2=\frac{6}{13}`$. The fairness of the game is indicated by $`\pi _0p_1+(\pi _1+\pi _2)p_2=1/2`$.
When coin $`A`$ comes into play, the stationary distribution changes. For instance, if games A and B are switched at random, one has: $`\pi _0^{\prime}=\frac{245}{709};\pi _1^{\prime}=\frac{180}{709};\pi _2^{\prime}=\frac{284}{709}`$. The game is no longer fair because $`\pi _0^{\prime}=245/709=0.346`$ is closer to $`1/3`$ than $`\pi _0=0.385`$: the “bad coin,” $`B_1`$, is played less often and the “good coin,” $`B_2`$, is played more often than before. The effect persists even if coin $`A`$ is bad, leading to the paradox.
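These stationary fractions are easy to verify numerically; the following sketch (assuming NumPy; the function name is ours) rebuilds the three-state chain from the rules of game B and recovers the numbers quoted above.

```python
# Stationary distribution of Y(t) = X(t) mod 3 at epsilon = 0.
import numpy as np

def stationary(p1, p2):
    # Column-stochastic matrix: state 0 uses the "bad" coin (p1),
    # states 1 and 2 use the "good" coin (p2); win -> +1, loss -> -1 (mod 3).
    T = np.array([[0.0,     1 - p2, p2    ],
                  [p1,      0.0,    1 - p2],
                  [1 - p1,  p2,     0.0   ]])
    w, v = np.linalg.eig(T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

pi_B = stationary(1/10, 3/4)                        # game B alone
pi_AB = stationary((1/10 + 1/2)/2, (3/4 + 1/2)/2)   # random A/B mixture
print(pi_B)    # [0.3846 0.1538 0.4615] = [5/13, 2/13, 6/13]
print(pi_AB)   # [0.3456 0.2539 0.4006] = [245/709, 180/709, 284/709]
print(pi_B[0]/10 + (1 - pi_B[0])*3/4)   # 0.5: game B alone is fair
```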
This interpretation helps us to find a new version of the paradox with capital-independent games. Game A is the same as before and we introduce a new game B, which is played with four coins: $`B_1^{\prime}`$, $`B_2^{\prime}`$, $`B_3^{\prime}`$, and $`B_4^{\prime}`$. Which coin is used now depends on the history of the game:
| Before last | Last | Coin | Prob. of win | Prob. of loss |
| --- | --- | --- | --- | --- |
| $`t-2`$ | $`t-1`$ | | at $`t`$ | at $`t`$ |
| loss | loss | $`B_1^{\prime}`$ | $`p_1`$ | $`1-p_1`$ |
| loss | win | $`B_2^{\prime}`$ | $`p_2`$ | $`1-p_2`$ |
| win | loss | $`B_3^{\prime}`$ | $`p_3`$ | $`1-p_3`$ |
| win | win | $`B_4^{\prime}`$ | $`p_4`$ | $`1-p_4`$ |
This is in fact the most general game depending on the outcome of the two last runs. The paradox can be reproduced with this type of game provided the “bad” coins in game B are played more often than would be expected in a completely random game, i.e., more often than one quarter of the time.
Notice that the capital $`X(t)`$ in game B is not a Markovian process. However, one can define the vector
$$Y(t)=\left(\begin{array}{c}X(t)-X(t-1)\\ X(t-1)-X(t-2)\end{array}\right)$$
(1)
which can take four values $`(\pm 1,\pm 1)`$, and does form a Markov chain. The transition probabilities are easily obtained from the rules of game B. Let $`\pi _1(t)`$, $`\pi _2(t)`$, $`\pi _3(t)`$ and $`\pi _4(t)`$ be the probabilities that $`Y(t)`$ is $`(-1,-1)`$, $`(1,-1)`$, $`(-1,1)`$, and $`(1,1)`$, respectively. The probability distribution $`\vec{\pi }(t)`$ verifies the evolution equation: $`\vec{\pi }(t+1)=𝐀\vec{\pi }(t)`$, where the matrix $`𝐀`$ is given by the transition probabilities and reads:
$$𝐀=\left(\begin{array}{cccc}1-p_1& 0& 1-p_3& 0\\ p_1& 0& p_3& 0\\ 0& 1-p_2& 0& 1-p_4\\ 0& p_2& 0& p_4\end{array}\right).$$
(2)
The stationary distribution $`\vec{\pi }_{\mathrm{st}}`$ of this Markov chain is by definition invariant under the action of the matrix $`𝐀`$, i.e., $`𝐀\vec{\pi }_{\mathrm{st}}=\vec{\pi }_{\mathrm{st}}`$. This distribution reads:
$$\vec{\pi }_{\mathrm{st}}=\frac{1}{N}\left(\begin{array}{c}(1-p_3)(1-p_4)\\ (1-p_4)p_1\\ (1-p_4)p_1\\ p_1p_2\end{array}\right)$$
(3)
where $`N`$ is a normalization constant.
In the stationary regime, the probability to win in a generic run is:
$$p_{\mathrm{win}}=\underset{i=1}{\overset{4}{\sum }}\pi _{\mathrm{st},i}p_i=\frac{p_1(p_2+1-p_4)}{(1-p_4)(2p_1+1-p_3)+p_1p_2}$$
(4)
which can be rewritten as $`p_{\mathrm{win}}=1/(2+c/s)`$, with $`s=p_1(p_2+1-p_4)>0`$ for any choice of the rules, and $`c=(1-p_4)(1-p_3)-p_1p_2.`$
Therefore, the tendency of game B obeys the following rule: if $`c<0`$, B is winning; if $`c=0`$, B is fair; and if $`c>0`$, B is losing. Again, here losing, winning and fair mean that $`\langle X(t)\rangle `$ is, respectively, a decreasing, increasing or constant function of $`t`$.
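A minimal sketch of this criterion (the function name is ours, purely illustrative):

```python
# Winning probability and tendency of game B from Eqs. (3)-(4).
def game_b(p1, p2, p3, p4):
    s = p1 * (p2 + 1 - p4)                # always > 0
    c = (1 - p4) * (1 - p3) - p1 * p2     # sign decides the tendency
    p_win = 1.0 / (2.0 + c / s)
    tendency = "winning" if c < 0 else ("fair" if c == 0 else "losing")
    return p_win, tendency

eps = 0.003
print(game_b(9/10 - eps, 1/4 - eps, 1/4 - eps, 7/10 - eps))
# (0.4967..., 'losing'): game B alone loses for eps > 0
```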
Since when game B is combined with game A the vector $`Y(t)`$ as defined in Eq. (1) is still a Markov chain, the same procedure applies. The probabilities of winning are now replaced by $`p_i^{\prime}=(p_i+p)/2`$. Summarizing, to reproduce the paradox with capital-independent games we have to find a set of five numbers, $`p`$ and $`p_i`$ ($`i=1,2,3,4`$), such that
$`1-p`$ $`>`$ $`p`$ (5)
$`(1-p_4)(1-p_3)`$ $`>`$ $`p_1p_2`$ (6)
$`(2-p_4-p)(2-p_3-p)`$ $`<`$ $`(p_1+p)(p_2+p),`$ (7)
where the third equation is just the second with $`p_i^{\prime}`$ and the inequality reversed (to make the combined game winning instead of losing).
One of the coins in game B must be “bad” and used more often than one quarter of the time. It cannot be either $`B_1^{\prime}`$ or $`B_4^{\prime}`$ because the probability of using these coins depends on whether the game is losing or winning (if $`B_1^{\prime}`$ is played more often than $`B_4^{\prime}`$, it is obvious that the game is losing). The bad coins should be $`B_2^{\prime}`$ and $`B_3^{\prime}`$. Let us set $`p=1/2-ϵ`$, $`p_1=9/10-ϵ`$, $`p_2=p_3=1/4-ϵ`$, and $`p_4=7/10-ϵ`$. With these numbers, one can see that the first two inequalities, Eqs. (5) and (6), are always satisfied if $`ϵ>0`$, whereas the third, Eq. (7), is satisfied if $`ϵ<1/168=0.00595`$ – i.e. the paradox occurs when $`0<ϵ<1/168`$, for our chosen parameter set in this example.
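These conditions are simple enough to scan directly; the short check below (function name ours) recovers the quoted window.

```python
# Check Eqs. (5)-(7) for the parameter set above as epsilon varies.
def paradox(eps):
    p = 1/2 - eps
    p1, p2, p3, p4 = 9/10 - eps, 1/4 - eps, 1/4 - eps, 7/10 - eps
    a_loses = (1 - p) > p                                        # Eq. (5)
    b_loses = (1 - p4) * (1 - p3) > p1 * p2                      # Eq. (6)
    ab_wins = (2 - p4 - p) * (2 - p3 - p) < (p1 + p) * (p2 + p)  # Eq. (7)
    return a_loses and b_loses and ab_wins

for eps in (0.001, 0.005, 0.007, 0.02):
    print(eps, paradox(eps))   # True below 1/168 ~ 0.00595, False above
```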
The simulation in Fig. 1 shows that as games A and B evolve individually the capital declines, as expected (i.e., they are losing games). On the same graph we see the remarkable result that when A and B are alternated either randomly or periodically, the capital now increases. This reproduces the paradoxical behavior first observed in the original games, but now without state dependence on capital. The slopes of the curves corresponding to game B and to the random combination can easily be calculated as $`\langle X(t+1)\rangle -\langle X(t)\rangle =2p_{\mathrm{win}}-1`$, with $`p_{\mathrm{win}}`$ given by Eq. (4). The old and new games have a fundamental difference in that the old ones can be interpreted in terms of a random walk in a periodic environment (RWPE) or a Brownian particle in a periodic potential, whereas the rules of the present games are homogeneous. We could say that the periodic structure of the original games has been transferred to the memory of the rules in the new games. Therefore, the paradox needs at least one of these two ingredients: inhomogeneity or non-markovianity.
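A minimal Monte Carlo along these lines (the step count and the initial history are arbitrary choices) reproduces the qualitative behavior of Fig. 1:

```python
# Games A and B lose on their own; their random mixture wins.
import random

def run(mode, steps=1_000_000, eps=0.003):
    p = 1/2 - eps
    # Coin table keyed by (before-last, last) outcome; -1 = loss, +1 = win.
    pb = {(-1, -1): 9/10 - eps, (-1, +1): 1/4 - eps,
          (+1, -1): 1/4 - eps, (+1, +1): 7/10 - eps}
    x, hist = 0, (+1, -1)                       # arbitrary initial history
    for _ in range(steps):
        game = mode if mode in ("A", "B") else random.choice("AB")
        prob = p if game == "A" else pb[hist]
        dx = 1 if random.random() < prob else -1
        x += dx
        hist = (hist[1], dx)
    return x / steps                            # mean gain per run

for mode in ("A", "B", "random"):
    print(mode, run(mode))
# A and B drift slightly negative (~ -0.006 per run); the mixture drifts
# positive (~ +0.006), up to statistical noise of order 1e-3.
```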
Consider now a periodic combination of games A and B. Fig. 2 shows the capital after 500 games – where game A is played $`a`$ times and game B is played $`b`$ times. We can observe that the resulting capital is greater when the games are switched more frequently. This behavior agrees with that of the original games . Note that in Fig. 2 changing the value of $`ϵ`$ only affects the vertical capital displacement, thus setting $`ϵ=0`$ pushes the graph into the positive region.
For the randomized games, we can now observe the volume of parameter space for which the paradox takes effect, by plotting the surfaces that represent the boundaries of the inequalities in Eqs. (5)–(7). This is shown in Fig. 3, where for convenience we have set $`p_2=p_3`$ to produce the graph in three variables. The volumes enclosed by the surfaces marked $`Q_1,\ldots ,Q_4`$ are the regions of parameter space for which the paradox takes effect. Regions $`Q_1`$ and $`Q_3`$ are where two losing games combine to win. On the other hand, $`Q_2`$ and $`Q_4`$ represent the reverse effect where two winning games combine to lose. This conjugate region can be simply thought of in terms of changing the sign of the capital, so that the perspective of the concepts ‘win’ and ‘lose’ reverse. This was observed in the original capital-dependent games ; however, there the conjugate regions were symmetrical. What is now interesting is that the new history-dependent games have asymmetrical conjugate regions, as can be seen in Fig. 3.
Another important comparison between the new history-dependent games and the original capital-dependent games is that the volume of parameter space is now bigger. A numerical mesh analysis on Fig. 3 revealed that the new games have a parameter space about 50 times larger than the original games reported in . For applications such as in biophysics, it is important to find such gaming models with large and hence robust parameter spaces. Although it appears that the rates of winning from the slopes of Fig. 1 are about a factor of 2 lower than in the original games, this is only the case for the particular chosen parameters. The fifty-fold increase in parameter space is favorable for applications in modeling evolutionary processes in biology, for example, where a weak pay-off can gradually accumulate over a long period of time.
In summary, we have shown that the apparently paradoxical effect where two losing games can cooperate to win does work with a history-based state-dependent rule rather than the original restriction of a modulo capital-based state dependence. This, together with an increased parameter space, opens up the phenomenon to a wider range of possible application areas. This suggests that future investigation of further types of history-based rules and other types of state dependencies may be fruitful.
This work was supported by the Dirección General de Enseñanza Superior e Investigación Científica (Spain) Project No. PB97-0076-C02, GTECH (USA), The Sir Ross and Sir Keith Smith Fund (Australia) and the Australian Research Council (ARC).
# Signatures of granular microstructure in dense shear flows
## Acknowledgments
We thank Eiichi Fukushima, James Jenkins, Christophe Josserand, Dov Levine, Milica Medved, Vachtang Putkaradze, Mark Rivers, and Alexei Tkachenko for helpful discussion, and Doris Stockwell from Spiceland for the donation of mustard seeds for the experiment. This work was supported by an NSF research grant and by the MRSEC Program of the NSF.
# Renormalization of the Inverse Square Potential
## Abstract
The quantum-mechanical $`D`$-dimensional inverse square potential is analyzed using field-theoretic renormalization techniques. A solution is presented for both the bound-state and scattering sectors of the theory using cutoff and dimensional regularization. In the renormalized version of the theory, there is a strong-coupling regime where quantum-mechanical breaking of scale symmetry takes place through dimensional transmutation, with the creation of a single bound state and of an energy-dependent $`s`$-wave scattering matrix element.
The quantum-mechanical inverse square potential is a singular problem that has generated controversy for decades. For instance, the solution proposed in Ref. failed to give a Hamiltonian bounded from below, and this led to a number of alternative regularization techniques based on appropriate parametrizations of the potential—including the replacement of self-adjointness by an interpretation of the “fall of the particle to the center” . However, it is generally recognized that the singular nature of this problem lies in that its Hamiltonian, being symmetric but not self-adjoint, admits self-adjoint extensions . Recently, a renormalized solution was presented using field-theoretic techniques , but it was just limited to the one-dimensional case and cutoff renormalization.
In this Letter (i) we generalize the results of Ref. to $`D`$ dimensions (including the all-important $`D=3`$ case) using cutoff regularization in configuration space; (ii) present a complete picture of the renormalized theory; and (iii) confirm the same conclusions using dimensional regularization . This problem is crucial for the analysis and interpretation of the point dipole interaction of molecular physics , and may be relevant in polymer physics . In addition (i) it displays remarkable similarities with the two-dimensional $`\delta `$-function potential ; (ii) it provides another example of dimensional transmutation in a system with a finite number of degrees of freedom; and (iii) it illustrates the relevance of field-theoretic concepts in quantum mechanics .
This problem is ideally suited for implementation in configuration space , where the radial Schrödinger equation for a particle subject to the $`r^{-2}`$ potential in $`D`$ dimensions reads (with $`\hbar =1`$ and $`2m=1`$)
$$\left[\frac{1}{r^{D-1}}\frac{d}{dr}\left(r^{D-1}\frac{d}{dr}\right)+E-\frac{l(l+D-2)-\lambda }{r^2}\right]R_l(r)=0,$$
(1)
which is explicitly scale-invariant because $`\lambda `$ is dimensionless . In Eq. (1), $`l`$ is the angular momentum quantum number and $`\lambda >0`$ corresponds to an attractive potential; with the transformation $`R_l(r)=r^{-(D-1)/2}u_l(r)`$, Eq. (1) is recognized to have solutions $`R_l(r)=r^{-(D/2-1)}Z_{s_l}(\sqrt{E}r)`$, where $`Z_{s_l}(z)`$ represents an appropriate linear combination of Bessel functions of order $`s_l=[\lambda _l^{(\ast )}-\lambda ]^{1/2}`$, with
$$\lambda _l^{(\ast )}=(l+D/2-1)^2.$$
(2)
If $`\lambda `$ were allowed to vary, one would see that the nature of the solutions changes around the critical value $`\lambda _l^{(\ast )}`$, for each angular momentum state. For $`\lambda <\lambda _l^{(\ast )}`$ (including repulsive potentials), the order $`s_l`$ of the Bessel functions is real, so that the solution regular at the origin is proportional to the Bessel function of the first kind $`J_{s_l}\left(\sqrt{E}r\right)`$. However, the same solution fails to satisfy the required behavior at infinity for bound states ($`E<0`$); in other words, in the weak-coupling regime, the potential cannot sustain bound states. Moreover, the scattering solutions are scale-invariant , with $`D`$-dimensional phase shifts $`\delta _l^{(D)}=\{[\lambda _l^{(\ast )}]^{1/2}-[\lambda _l^{(\ast )}-\lambda ]^{1/2}\}\pi /2`$. Nothing is surprising here: the potential $`r^{-2}`$ is explicitly scale-invariant and no additional scale arises at the level of the solutions, which are well-behaved—one could say that the potential looks like a regular “repulsive” one. However, this picture changes dramatically for $`\lambda >\lambda _l^{(\ast )}`$: all the Bessel functions acquire an uncontrollable oscillatory character through the imaginary order $`s_l=i\mathrm{\Theta }_l`$, where $`\mathrm{\Theta }_l=[\lambda -\lambda _l^{(\ast )}]^{1/2}`$, as we shall see next.
For the remainder of this Letter, we will mainly analyze the strong-coupling regime $`\lambda >\lambda _l^{(\ast )}`$. First, for the bound-state sector, from Eq. (1), $`u_l(r)\propto \sqrt{r}K_{i\mathrm{\Theta }_l}(\sqrt{|E|}r)`$, with $`K_{s_l}(z)`$ being the modified Bessel function of the second kind , whose behavior near the origin is of the form
$$K_{i\mathrm{\Theta }_l}(z)\stackrel{(z\to 0)}{\longrightarrow }-\sqrt{\frac{\pi }{\mathrm{\Theta }_l\mathrm{sinh}\left(\pi \mathrm{\Theta }_l\right)}}\mathrm{sin}\left[\mathrm{\Theta }_l\mathrm{ln}\left(\frac{z}{2}\right)-\delta _{\mathrm{\Theta }_l}\right]\left[1+O\left(z^2\right)\right],$$
(3)
where $`\delta _{\mathrm{\Theta }_l}`$ is the phase of $`\mathrm{\Gamma }(1+i\mathrm{\Theta }_l)`$. In Eq. (3), the wave function oscillates with a monotonically increasing frequency as $`r\to 0`$. As a result, there is no criterion for the selection of a particular subset of states and the bound-state spectrum is continuous and not bounded from below. Clearly, the problem should be renormalized in such a way that the Hamiltonian recovers its self-adjoint character .
A first attempt is to use Eq. (3) and recognize that the orthogonality condition for the eigenstates restores the discrete nature of the spectrum; unfortunately, in this approach, the Hamiltonian is not bounded from below. However, as was proposed in Ref. for the particular simple case $`D=1`$, Eq. (3) can be regularized by introducing a short-distance cutoff $`a`$, with $`a\ll |E|^{-1/2}`$, so that the regular boundary condition $`u_l(a)=0`$ is implemented in lieu of the undefined behavior at $`r=0`$. Then, Eq. (3) gives the zeros of the modified Bessel function of the second kind with imaginary order, $`z_n=2e^{(\delta _{\mathrm{\Theta }_l}-n\pi )/\mathrm{\Theta }_l}`$ \[up to a correction factor $`1+O(z_n^2/\mathrm{\Theta }_l)`$\], where $`n`$ is an integer; moreover, the assumption that $`z_n\ll 1`$, with $`\mathrm{\Theta }_l>0`$, implies that $`(-n)<0`$, with the conclusion that $`n=1,2,3,\ldots `$. Parenthetically, $`z_n\ll 1`$ only if $`\mathrm{\Theta }_l\ll 1`$, so that $`\delta _{\mathrm{\Theta }_l}=-\gamma \mathrm{\Theta }_l+O(\mathrm{\Theta }_l^2)`$ (with $`\gamma `$ being the Euler-Mascheroni constant) and the energy levels become
$$E_{n_rl}=-\left(\frac{2e^{-\gamma }}{a}\right)^2\mathrm{exp}\left(-\frac{2\pi n_r}{\mathrm{\Theta }_l}\right),$$
(4)
where $`n=n_r`$ stands for the radial quantum number.
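To make the structure of Eq. (4) concrete, here is a short numerical sketch; the values of $`a`$ and $`\mathrm{\Theta }_l`$ are arbitrary illustrative choices (units with $`\hbar =2m=1`$).

```python
# Regularized l = 0 spectrum of Eq. (4): a geometric tower of levels
# accumulating at E = 0, each a factor exp(-2*pi/Theta) below the last.
import math

EULER_GAMMA = 0.5772156649015329

def E(n_r, theta, a):
    return -(2.0 * math.exp(-EULER_GAMMA) / a) ** 2 \
           * math.exp(-2.0 * math.pi * n_r / theta)

theta, a = 0.5, 1e-3
for n_r in (1, 2, 3):
    print(n_r, E(n_r, theta, a))
# Successive ratios are exp(-2*pi/0.5) ~ 3.5e-6; as theta -> 0+ the
# excited levels collapse to zero, leaving a single bound state.
```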
Equation (4) should now be renormalized by requiring that $`\mathrm{\Theta }_l=\mathrm{\Theta }_l(a)`$ in the limit $`a\to 0`$. More precisely, in order for the ground state \[characterized by the quantum numbers $`(\mathrm{gs})\equiv \left(n_r=1,l=0\right)`$\] to “survive” the renormalization prescription with a finite energy, it is required that $`\mathrm{\Theta }_{_{(\mathrm{gs})}}(a)\stackrel{(a\to 0)}{\longrightarrow }0^+`$. This condition amounts to a “critically strong” coupling, $`\lambda (a)\stackrel{(a\to 0)}{\longrightarrow }\lambda _{_{(\mathrm{gs})}}^{(\ast )}+0^+`$ (where the notation $`\mathrm{\Theta }_0=\mathrm{\Theta }_{_{(\mathrm{gs})}}`$ and $`\lambda _0^{(\ast )}=\lambda _{_{(\mathrm{gs})}}^{(\ast )}`$ is understood for the ground state). In particular, with this ground-state renormalization, the required relation between $`\mathrm{\Theta }_{_{(\mathrm{gs})}}(a)`$ and $`a`$, for $`a`$ small, is
$$g^{(0)}=\frac{2\pi }{\mathrm{\Theta }_{_{(\mathrm{gs})}}(a)}+2\mathrm{ln}\left(\frac{\mu a}{2}\right)+2\gamma ,$$
(5)
where $`\mu `$ is an arbitrary renormalization scale with dimensions of inverse length and $`g^{(0)}`$ is an arbitrary finite part associated with the coupling, such that
$$E_{_{(\mathrm{gs})}}=-\mu ^2\mathrm{exp}\left[-g^{(0)}\right]\sim -\mu ^2.$$
(6)
In Eq. (6), it is understood that, due to the arbitrariness of both $`g^{(0)}`$ and $`\mu `$, the simple choice $`g^{(0)}=0`$ can be made. Finally, the ground-state wave function is obtained in the limit $`\mathrm{\Theta }_{_{(\mathrm{gs})}}(a)\stackrel{(a\to 0)}{\longrightarrow }0^+`$, which yields
$$\mathrm{\Psi }_{_{(\mathrm{gs})}}(𝐫)=\sqrt{\mathrm{\Gamma }\left(\frac{D}{2}\right)\left(\frac{\mu ^2}{\pi }\right)^{D/2}}\frac{K_0(\mu r)}{\left(\mu r\right)^{D/2-1}},$$
(7)
whose functional form, up to a factor $`r^{-(D/2-1)}`$, is dimensionally invariant .
The existence of a ground state with a dimensional scale $`\mu \propto \left|E_{_{(\mathrm{gs})}}\right|^{1/2}`$ violates the manifest scale invariance of the theory defined by Eq. (1), but its magnitude is totally arbitrary and spontaneously generated by renormalization. Here we recognize the fingerprints of dimensional transmutation .
The next question refers to the possible existence of excited states in the renormalized theory. For any hypothetical state with angular momentum quantum number $`l>0`$, this question can be straightforwardly answered from the ground-state renormalization condition $`\mathrm{\Theta }_{_{(\mathrm{gs})}}(a)\stackrel{(a\to 0)}{\longrightarrow }0^+`$, which, together with Eq. (2), provides the inequality $`\lambda =\lambda _{_{(\mathrm{gs})}}^{(\ast )}=(D/2-1)^2<\lambda _l^{(\ast )}`$. Then, if such a state existed, it would automatically be pushed into the weak-coupling regime, with the implication that it could not survive the renormalization process. This means that there are no excited states with $`l>0`$. Next, the question arises as to the possible existence of bound states with $`l=0`$ and $`n_r\ne 1`$. The fact that these hypothetical bound states also cease to exist in the renormalized theory follows from the exponential suppression
$$\left|\frac{E_{n_r0}}{E_{_{(\mathrm{gs})}}}\right|=\mathrm{exp}\left[-\frac{2\pi \left(n_r-1\right)}{\mathrm{\Theta }_{_{(\mathrm{gs})}}}\right]\stackrel{(\mathrm{\Theta }_{_{(\mathrm{gs})}}\to 0,n_r>1)}{\longrightarrow }0.$$
(8)
Moreover, it is easy to see that, for these hypothetical states, the corresponding limit of the wave function becomes ill defined, so that they effectively vanish. In conclusion, the renormalization process annihilates all candidates for a renormalized bound state, with the only exception of the ground state of the regularized theory, which acquires the finite energy value (6) and the normalized wave function (7).
Similarly, the scattering solutions can be studied by going back to Eq. (1), which implies that $`u_l(r)/\sqrt{r}`$ is a linear combination of the Hankel functions $`H_{i\mathrm{\Theta }_l}^{(1,2)}(kr)`$ , whose asymptotic behavior ($`r\to \mathrm{\infty }`$), combined with the regularized boundary condition $`u_l(a)=0`$, provides the scattering matrix elements $`S_l^{(D)}(k;a)`$ and phase shifts $`\delta _l^{(D)}(k;a)`$. For example, the phase function $`\varphi _l^{(D)}(k;a)=\delta _l^{(D)}(k;a)-(l+D/2-1)\pi /2`$ is given by
$$\mathrm{tan}\left[\varphi _l^{(D)}(k;a)\right]=\mathrm{tanh}\left(\frac{\pi \mathrm{\Theta }_l}{2}\right)\frac{1-𝒯_l(k;a)\varrho _l}{𝒯_l(k;a)+\varrho _l},$$
(9)
where $`𝒯_l(k;a)=\mathrm{tan}\left[\mathrm{\Theta }_l\mathrm{ln}\left(ka/2\right)\right]`$ and $`\varrho _l=v_{-,l}/iv_{+,l}`$, with $`v_{\pm ,l}=\mathrm{\Gamma }(1-i\mathrm{\Theta }_l)\pm \mathrm{\Gamma }(1+i\mathrm{\Theta }_l)`$. Equation (9) is ill defined in the limit $`a\to 0`$; in effect, the variable $`𝒯_l(k;a)`$ oscillates wildly between $`-\mathrm{\infty }`$ and $`\mathrm{\infty }`$, unless $`\mathrm{\Theta }_l\to 0`$, just as for the bound-state sector. From Eqs. (5)–(6), when $`a\to 0`$, the renormalized $`s`$-wave phase shift becomes
$$\mathrm{tan}\left[\delta _0^{(D)}(k)-(D/2-1)\pi /2\right]=\frac{\pi }{\mathrm{ln}\left(k^2/|E_{_{(\mathrm{gs})}}|\right)}.$$
(10)
Equation (10) explicitly displays the scattering behavior of $`s`$ states, as well as its relation with the bound-state sector of the theory. Both the functional form of Eq. (10) and the existence of a unique bound state in the renormalized theory are properties shared by the two-dimensional $`\delta `$-function potential .
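For orientation, Eq. (10) is easy to evaluate; the sketch below measures $`k^2`$ in units of $`|E_{_{(\mathrm{gs})}}|`$ and takes $`D=3`$. Phase shifts are defined modulo $`\pi `$, so the branch chosen here is the naive one.

```python
# Renormalized s-wave phase shift of Eq. (10) for D = 3.
import math

def delta_0(k, D=3, E_gs=1.0):
    return (D / 2 - 1) * math.pi / 2 + math.atan(math.pi / math.log(k**2 / E_gs))

for k in (0.1, 0.9, 1.1, 10.0):
    print(k, delta_0(k))
# The arctangent flips sign as k^2 crosses |E_gs|, reflecting the
# single bound state underlying the scattering amplitude.
```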
The analysis leading to Eq. (10) refers to $`l=0`$. For all other angular momenta ($`l>0`$), the coupling will be weak, so that the phase shifts will be given by their unregularized values, with the condition that $`\lambda =\lambda _{_{(\mathrm{gs})}}^{(\ast )}=\left(D/2-1\right)^2`$; then,
$$\delta _l^{(D)}|_{l\ne 0}=\left[(l+D/2-1)-\sqrt{l\left(l+D-2\right)}\right]\frac{\pi }{2},$$
(11)
which is a scale-invariant expression.
We now turn to an outline of a similar analysis using dimensional renormalization . In particular, we will focus on the bound-state sector of the theory, to illustrate and emphasize the fact that proper renormalization using different regularizations yields the same physics. In this alternative regularization scheme, we define the dimensionally-regularized potential in $`D^{\prime}`$ dimensions in terms of its momentum-space expression, according to
$`V^{(D^{\prime})}(r^{\prime})`$ $`=`$ $`\lambda _B{\displaystyle \int \frac{d^{D^{\prime}}k^{\prime}}{(2\pi )^{D^{\prime}}}e^{i𝐤^{\prime}𝐫^{\prime}}\left[\int d^Dr\,e^{-i𝐤𝐫}\frac{1}{r^2}\right]_{𝐤=𝐤^{\prime}}}`$ (12)
$`=`$ $`\lambda _B\pi ^{ϵ/2}\mathrm{\Gamma }\left(1-ϵ/2\right)/\left(r^{\prime}\right)^{(2-ϵ)},`$ (13)
where $`ϵ=D-D^{\prime}`$ and $`\lambda _B`$ is the dimensional bare coupling, which will be rewritten as $`\lambda _B=\lambda \mu ^ϵ`$, with $`[\lambda ]=1`$ and $`\mu `$ being the floating renormalization scale. The corresponding $`D^{\prime}`$-dimensional Schrödinger equation can be converted, by means of a duality transformation
$$\{\begin{array}{c}\left|E\right|^{1/2}r=z^{2/ϵ}\hfill \\ |E|^{D^{\prime}/4}u_l(r)=w_{l,ϵ}(z)z^{1/ϵ-1/2}\hfill \end{array},$$
(14)
into
$$\left\{\frac{d^2}{dz^2}+\stackrel{~}{\eta }-\stackrel{~}{𝒱}_ϵ(z)-\frac{p^2-1/4}{z^2}\right\}w_{l,ϵ}(z)=0,$$
(15)
where $`\stackrel{~}{𝒱}_ϵ(z)=-4\mathrm{sgn}(E)z^{4/ϵ-2}/ϵ^2`$. In Eq. (15), the new parameters are
$$\stackrel{~}{\eta }=\frac{4\lambda \pi ^{ϵ/2}\mathrm{\Gamma }\left(1-ϵ/2\right)}{ϵ^2}\left(\frac{|E|}{\mu ^2}\right)^{-ϵ/2},$$
(16)
and $`p=2\left(l+D^{\prime}/2-1\right)/ϵ`$. The key to solving Eq. (15) is that (i) the parameter $`p`$ is asymptotically infinite; and (ii) the term $`\stackrel{~}{𝒱}_ϵ(z)`$ in Eq. (15) behaves as an infinite hyperspherical potential well in the limit $`ϵ\to 0`$. Then, for bound states, as a first approximation, the particle is trapped in a well with a smooth left boundary proportional to $`1/z^2`$ and an infinite-well boundary at $`z_2\approx 1`$; as the left turning point is $`z_1\approx p/\stackrel{~}{\eta }^{1/2}`$, the WKB quantization condition—which we expect to be asymptotically correct for $`p\to \mathrm{\infty }`$—becomes
$$\int _{p/\stackrel{~}{\eta }^{1/2}}^1\sqrt{\stackrel{~}{\eta }-\frac{p^2-1/4}{z^2}}𝑑z\approx \left(n_r-\frac{1}{4}\right)\pi ,$$
(17)
so that $`\stackrel{~}{\eta }^{1/2}=p+C_{n_r}p^{1/3}`$, where $`C_{n_r}=[3\pi (n_r-1/4)]^{2/3}/2`$. Therefore, the regularized energies are
$$|E_{n_rl}|=\mu ^2\left[\frac{\lambda }{\lambda _l^{(\ast )}}\right]^{2/ϵ}\mathrm{exp}\left[𝒢_{n_rl}(ϵ)\right],$$
(18)
where
$$𝒢_{n_rl}(ϵ)=-2^{4/3}C_{n_r}\left(\lambda _l^{(\ast )}\right)^{-1/3}ϵ^{-1/3}+\left[\mathrm{ln}\pi +\gamma +2\left(\lambda _l^{(\ast )}\right)^{-1/2}\right].$$
(19)
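The WKB estimate underlying Eqs. (17)–(19) can be checked numerically; the following sketch (assuming SciPy is available) solves the quantization condition for $`\stackrel{~}{\eta }`$ at large $`p`$ and compares with $`\stackrel{~}{\eta }^{1/2}=p+C_{n_r}p^{1/3}`$.

```python
# Numerical check of the WKB condition, Eq. (17), for n_r = 1.
# The -1/4 in Eq. (17) is dropped, as appropriate in the large-p limit.
import math
from scipy.integrate import quad
from scipy.optimize import brentq

def action(eta, p):
    z1 = p / math.sqrt(eta)                      # left turning point
    f = lambda z: math.sqrt(max(eta - (p / z) ** 2, 0.0))
    return quad(f, z1, 1.0)[0]

p, n_r = 200.0, 1
C = (3 * math.pi * (n_r - 0.25)) ** (2 / 3) / 2
eta_num = brentq(lambda e: action(e, p) - (n_r - 0.25) * math.pi,
                 p**2 * (1 + 1e-6), 4 * p**2)
print(math.sqrt(eta_num), p + C * p ** (1 / 3))  # agree to ~0.1%
```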
Equation (18) can be renormalized by demanding that it be finite for the ground state and by letting $`\lambda =\lambda (ϵ)`$; explicitly,
$$\lambda (ϵ)=\lambda _{_{(\mathrm{gs})}}^{(\ast )}\left\{1+\frac{ϵ}{2}\left[g^{(0)}-𝒢_{_{(\mathrm{gs})}}(ϵ)\right]\right\}+o(ϵ),$$
(20)
with an arbitrary finite part $`g^{(0)}`$. In particular, $`\lambda (ϵ)\stackrel{(ϵ\to 0)}{\longrightarrow }\lambda _{_{(\mathrm{gs})}}^{(\ast )}+0^+`$, i.e., upon renormalization, the coupling becomes critically strong with respect to $`s`$ states. Just as for cutoff regularization, it follows that only bound states with $`l=0`$ survive the renormalization process. As for the excited states with $`l=0`$ in Eq. (18), they are exponentially suppressed according to
$$\left|\frac{E_{n_r0}}{E_{_{(\mathrm{gs})}}}\right|=\mathrm{exp}\left[-2^{4/3}\left(C_{n_r}-C_1\right)\left(\lambda _{_{(\mathrm{gs})}}^{(\ast )}\right)^{-1/3}ϵ^{-1/3}\right]\stackrel{(ϵ\to 0,n_r>1)}{\longrightarrow }0.$$
(21)
Parenthetically, the regularized energies of Eqs. (4) and (18), for finite $`a`$ and $`ϵ`$, are noticeably different; nonetheless, as expected, their renormalized counterparts carry exactly the same informational content.
In short, we have analyzed the inverse square potential and found that: (i) a critical coupling divides the possible behaviors into two regimes; (ii) in the strong-coupling regime, the theory is ill defined and requires renormalization; and (iii) upon renormalization of the strong-coupling regime, only one bound state survives and $`s`$-wave scattering breaks scale invariance with a characteristic logarithmic dependence. The existence and order of magnitude of a critical coupling $`\lambda _{_{(\mathrm{gs})}}^{(\ast )}=1/4`$ for $`D=3`$ is in agreement with recent experimental results for a wide range of polar molecules .
A final remark is in order. Strictly, even though a more careful treatment with dimensional regularization changes Eq. (18), the difference appears only at the level of the finite parts (linear in $`ϵ`$) and is immaterial to the arguments presented here. These corrections, as well as a detailed analysis of the scattering sector of the theory, will be presented elsewhere.
This research was supported in part by CONICET and ANPCyT, Argentina (L.N.E., H.F., and C.A.G.C.) and by the University of San Francisco Faculty Development Fund (H.E.C.). The hospitality of the University of Houston and instructive discussions with Prof. Carlos R. Ordóñez are gratefully acknowledged by H.E.C.
# Alignment in hadronic interactions
## I Introduction
The Pamir experiment comprises an X-ray film chamber which contains events with genuinely correlated dark spots, produced by very energetic cosmic ray shower particles, distributed implausibly asymmetrically. This effect, called “alignment”, occurs, according to Ref. , at a primary particle energy of $`8\times 10^{15}`$eV, above which the rate increases rapidly with interaction energy. For the energies of interest the cosmic ray flux is very low, so the statistics in the Pamir experiment are very limited; specifically, only 62 events with visible electromagnetic energy in the range from 700 to 2000 TeV. Problems with the statistics and the quite complicated methodology of the large area X-ray film calorimeter make the measurements very difficult. Thus, independent confirmations by the Chacaltaya group, the Tien-Shan extensive air shower experiment, and the one event of high quality recorded in the Concorde French-Japanese experiment, are indispensable. All this implies rather strongly that there is an azimuthal asymmetry of particle production at energies above about $`10^{16}`$eV.
Some explanations exist in the literature already. The most conservative one is the fluctuation explanation by J.N. Capdevielle. Calculations show that with conventional fluctuations of the elementary act the probability of producing the Concorde event could exceed the “5$`\sigma `$” level, the commonly accepted limit for a “new physics” discovery. However, the increase of the fraction of aligned events to $`0.27\pm 0.09`$ at the highest energies seen makes this explanation less probable. Even if it could be responsible for the one stratospheric event, which is, in fact, slightly different from the rest of the “X-ray aligned” events, the events taken all together require a different solution.
The rotating nuclei fragmentation hypothesis of Erlykin and Wolfendale is naturally connected with the postulated increase of the fraction of heavy nuclei in the primary cosmic ray flux around $`10^{16}`$eV (up to 50%). However, due to the existence of the few TeV threshold in the X-ray film technique, the experiments noticeably favour primaries of higher energy per nucleon (for the same energy per particle), and the existence of heavy nuclei alone is not enough to account for the 30% alignment that is needed. The problem was discussed in detail in Ref. . The mechanism of delayed fragmentation of fast rotating nuclei needs to be established theoretically, as does the possibility of a reduction of its cross section for interaction with air nuclei.
Additionally, the observation that aligned events are more abundant in the vertical component supports the concept that the origin of the phenomenon is a deeply penetrating cosmic ray particle (most likely a proton).
The original explanation by the Pamir group is supported by extensive calculations made by Mukhamedshin. He shows that the Pamir data require a significant change of the particle production act. Particle creation with a few GeV transverse momentum in a one-plane cascade-like process is needed, but even this is not enough to explain the data. Perhaps there is, additionally, a heavy, long lived and relatively weakly interacting particle? Some theoretical ideas of the “new physics” were discussed in, e.g., Ref. .
## II The proposed concept of alignment in the hadronization mechanism
The inelastic collision of two elementary particles is conventionally treated as a scattering of particle constituents, i.e. partons, associated with the creation of excited, intermediate objects (jets, strings, chains, or fireballs), followed by their hadronization. Such a picture is confirmed by Bose-Einstein correlation studies, where the time and spatial extensions of the newly created particle source are seen and measured. Both the collision and excitation processes cannot be fully and quantitatively described by QCD. Specifically, the hadronization is, by definition, a low momentum transfer phenomenon and at present it can be taken into account only using models. Most of the effects seen in cosmic ray interaction physics are low $`p_t`$ effects. Alignment seems to be an exception, but it will be shown later that it is probably not.
It is well known that the scattering can be described in the impact parameter representation. This point is used to maintain unitarity via the eikonal formalism, both in the dual parton model and in relativistic string models. For our objective, however, the details are not crucial. Differences between particular model realizations such as, for example, parton scattering cross sections (at given impact parameter $`b`$) or the character of the intermediate object created, do not change the essence of the concept of this paper, and they will not be discussed here. The implications of the impact parameter picture, considered strictly, are valid for most of the models of multiparticle production.
The important characteristic of the string (this name will be used hereafter to label the intermediate excited objects in spite of their fireball, chain, or jet nature) is its mass. It is certainly proportional to the interaction energy \[available in the center of mass system (c.m.s.) – $`\sqrt{s}`$\], but it could also depend on other collision parameters such as the impact parameter $`b`$.
On the other hand, the string mass could be a random variable, fluctuating from collision to collision according to the respective probability distribution. The most familiar way of considering the chain mass behaviour is to use the parton distribution functions $`F(x,Q^2)`$, which describe the probability density for a colliding parton to carry a fraction $`x`$ of the total hadron momentum ($`Q^2`$ defines the scale). The particular shape of $`F`$ (with its $`s`$ dependence) cannot be obtained from QCD. However, recent progress in lepton scattering at HERA has expanded our experimental knowledge of $`F`$ significantly .
If we denote by $`b`$ the impact parameter of the inelastic collision of two hadrons (protons) and by $`M`$ the masses of two strings created (for simplicity we can take the masses to be equal, but this is not necessary) then, due to the conservation laws, the strings carry an angular momentum of
$$J_3\approx bc\sqrt{s}\left(\frac{M}{\sqrt{s}}\right)^2.$$
(1)
Here, $`M`$ is, to be precise, the energy of the (rotating) string in its c.m.s. It is of the same order as the energy (mass) of the string in the co-rotating frame.
The angular momentum of the string is related to its (end) rotation velocity, $`\omega `$. In the case of a particular model of the string (Nambu-Goto-Polyakov, NGP) , which will be used later as a numerical example, it is given by
$$J_3=\frac{aR}{2\omega }\left[\frac{\mathrm{arcsin}\left(\omega R/c\right)}{\omega R/c}-\sqrt{1-\left(\frac{\omega R}{c}\right)^2}\right]\approx \frac{Mc^2}{2\omega }.$$
(2)
Comparing Eqs.(1) and (2), the angular velocity of the relativistic string is of the order of
$$\omega \sim \frac{c}{b}\frac{\sqrt{s}}{M}.$$
(3)
The above relations certainly do not hold for central collisions, and in all other cases they should be treated as approximate; they serve mainly to illustrate the importance of relativistic rotation of fragmenting strings.
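As an order-of-magnitude illustration of Eq. (3), the sketch below evaluates $`\omega `$ and the corresponding rotation period for a few assumed values of $`M/\sqrt{s}`$ at an impact parameter of 1 fm; all inputs are illustrative assumptions, not values extracted from data.

```python
# Order-of-magnitude estimate of the string angular velocity, Eq. (3):
# omega ~ (c / b) * (sqrt(s) / M).  All inputs are illustrative assumptions.
import math

C = 3.0e8      # speed of light [m/s]
FM = 1.0e-15   # one femtometre [m]

def omega_string(b_fm, mass_fraction):
    """Angular velocity [rad/s] for impact parameter b [fm] and M/sqrt(s)."""
    return (C / (b_fm * FM)) / mass_fraction

for m_frac in (0.1, 0.3, 0.5):   # assumed values of M / sqrt(s)
    w = omega_string(b_fm=1.0, mass_fraction=m_frac)
    period_fm_per_c = (2 * math.pi / w) * C / FM   # rotation period in fm/c
    print(f"M/sqrt(s) = {m_frac}: omega ~ {w:.1e} rad/s, "
          f"period ~ {period_fm_per_c:.1f} fm/c")
```

The resulting periods come out comparable to the roughly 1 fm/c freeze-out time discussed below, which is why the rotation cannot be neglected during fragmentation.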
## III The angular momentum problem of string fragmentation.
If we have a (rotating) string of mass $`M`$, the next step to be considered is its fragmentation. The commonly used hadronization models are either cluster-like (HERWIG) or string-fragmentation-like (LUND). Nevertheless, both deal with a one-dimensional coloured field structure, and this one-dimensionality is an important part of the models. The LUND picture possesses a well described space–time particle production scheme. It is shown in Fig. 1. Details and particular model parameter values are adjusted to the measurements. The most recent and accurate tests of hadronization models are made with the precise data at the Z<sup>0</sup> energy from LEP. The important point is that, in the case of $`e^+`$$`e^{}`$ annihilation, the linear structure of the chain created in Z<sup>0</sup> decay is (can be) very well justified by the vanishing of the string angular momentum. The fact that the same procedures can be used for the hadronization of fast rotating strings from hadronic inelastic collisions is rather astonishing. However, a closer look at particular Monte Carlo realizations exhibits many modifications which make this less surprising.
The problem with the angular momentum, or rather the lack of it, has a very long history. The famous Fermi statistical model was published just fifty years ago. It is interesting that the importance of the problem was clear to Fermi. He described the exact way of treating it by using the impact parameter formalism. He even made some calculations, concluding that “It was found in most cases that the results so obtained differ only by small numerical factors from those obtained by neglecting the conservation of angular momentum. This has been done as a rule in order to simplify the mathematics.” This is not very surprising taking into account the low energies which Fermi had to deal with (in the main part of his paper); these were of the order of hundreds of MeV to a GeV or so. However, an interesting remark can be found on the second-to-last page, where he discussed the “collisions of extremely high energy” (10<sup>12÷13</sup>eV). He found that the conservation of angular momentum reduces the produced pion multiplicity, and also ”…has the effect that the angular distributions of particles produced is no longer isotropical…”.
Ten years after Fermi’s paper, when it became clear that the products of high energy collisions are strongly collimated along the interaction axis, Hagedorn published a paper concerning the statistical treatment of the not-so-isotropic angular (c.m.s.) distribution of collision products. In this paper interesting statements appeared: “This whole question \[the angular momentum conservation problem\], though of practical importance, seems to be still not understood. At least angular momentum conservation does not play an important quantitative role. … So at present it seems most reasonable to disregard angular momentum at all,…”. Although the model discussed in that paper was intended for central collisions, everything was entirely correct. The problem of the most frequent, peripheral collisions, however, remains. Hagedorn himself, again ten years later, gave the solution in his paper called “The Thermodynamical Model”. The solution was transient rather than fundamental. He introduced the velocity weight functions which describe a part of strong interaction physics (unknown from first principles) and “To this one should add over-all conservation of angular momentum”. And this temporary solution has survived thirty years! It can be found, in more or less sophisticated transcriptions, in many contemporary string hadronization models.
Coming back to the LUND hadronization and its space–time structure shown in Fig. 1, an important remark has to be made concerning the time sequence of the chain breakups. The dashed hyperbolic curve in the figure represents the string points of the same proper time in their local co-moving frames. This can be associated with the hadronization time of the string but, as is seen, in the string center of mass system hadrons occur at different times. It is clear in Fig. 1 that the slowest hadrons appear first, while those with high velocities (in the string c.m.s.), especially those containing the initial string-creating partons, materialize last. This somewhat puzzling statement has, in fact, been known from the very beginning of relativistic string theory (see e.g., Refs. ). For relatively low energy collisions with only a few particles created it may cause problems . To be precise, one has to note that, of course, in some cases, due to the random nature of the process (which is slightly more complicated than what is shown in the discussed simple graph), some exceptions can be expected.
The main point is that the central part of the string fragments after some “freeze-out” time (about 1 fm/c) while the very ends need a longer time (in the string c.m.s.), and we can expect that the rotation speed could be large enough to bend the particle production direction away from the interaction axis in the intervening time. Of course all particles in between will tend to lie in the one plane defined by the impact parameter vector, and in this way the particles created are apparently aligned in the laboratory frame of reference. What should be noticed here is that the production of particles with relatively high transverse momentum (with respect to the interaction axis) is not due to any special high momentum transfer process (new physics) but is simply a result of kinematics combined with the usual non-perturbative hadronization.
## IV The curved string fragmentation.
The consequences of the string rotation presented above could, however, be quite wrong, because of the well known non-existence of “rigid” rotation in highly relativistic systems. The one-dimensional fragmentation structure of the LUND model has to be extended, making the problem definitely much more complex when we deal with a fast rotating QCD string.
The main question here is the shape of the string. Detailed calculations (see, e.g., Ref. ) assuming a particular model of action for the string system show that there are some deviations from the straight string shape. In Fig. 2 the solution is presented. The curvature of the string is in this case proportional to, e.g., the rate of longitudinal expansion, so the vertical scale in the figure is in this sense arbitrary. The solution was obtained (in an analytical form) assuming small string deflections, by perturbation of the straight string solution, thus dropping some terms in the general string evolution equation which need not be small in our case. The lack of an exact solution makes further examination somewhat uncertain, but we do expect that the general behaviour of the real string shape is similar to the one shown in Fig. 2.
The main difference between the straight rotating string and the “real” (Fig. 2) string is that the end quark of the string (leading one) still moves almost along the interaction axis direction while the inner part of the string bends.
The problem of the hadronization of the curved string, and especially of its space-time structure (similar to that presented in Fig. 1), is, in general, unsolved. Inertial forces acting along the string could have an effect on the string area law, thereby changing the string “decay” constant. Additionally, the problem of clock synchronization becomes non-trivial for rotating frames. Thus, while we can expect that the rotating string should decay later, the meaning of the word later is not entirely and indisputably defined. This makes our further analysis more uncertain, but nevertheless we will try to obtain some qualitative results.
The time evolution of the curved NGP string (such as that shown in Fig. 2) can be simply described by the symmetrical expansion in both $`x`$ and $`y`$ directions as the string length grows. If, at some instant, the string begins to break (starting from its central part), the created particles will conserve the expansion speed of the particular piece of the string. From the figure it is clear that some of the particles created at the beginning will follow the direction given by the string deviation. The momentum transverse to the interaction axis of subsequent particles will get larger (it is proportional to their longitudinal momentum) up to the moment when the fragmentation of the curved part of the string begins. Further emission angles will be smaller and smaller, and in the end the rest of the created particles will follow the leading quark direction.
The relatively slow growth of multiplicity (a power law in $`s`$ with a power index of 0.3 to 0.1, or quadratic in the logarithm of $`s`$) leads to a specific scaling of the arrangement of particle creation points on the curved string with the interaction energy.
As has been mentioned, the central part of the string gives, in the laboratory system, one collimated jet. Together with the very forward produced particles forming another jet, this leads to the clear binocular events reported in Ref. by the Chacaltaya experiment. The central and the forward jets are formed by many constituent hadrons, thus they carry enough energy (and particles) to be visible in the X-ray chamber as two-core events at energies smaller than those of the alignment phenomenon. As the energy increases, the number of particles in the central part of the string, as well as in its forward end, grows. When it is high enough, some particles appear at the transitional angles. Not only does the probability of producing a few high energy particles at intermediate angles increase, but their energies also grow, making them less sensitive to the cascading processes in the later cascade development in the atmosphere. Further calculations are needed in order to settle the details, but the fast rise of the rate of aligned events seems plausible taking into account the generic character of the curving of the relativistically rotating string.
Additionally, it is worthwhile mentioning that the discussed mechanism leaves the question of the correlation of energies and the positions of the observed energetic cores open. The general absence of any clear correlation in experimental data is difficult to explain by other models of alignment.
## V Summary.
We have proposed a mechanism which can be responsible for the alignment of the very high energy interaction products observed in high altitude cosmic ray experiments.
We postulate no “new physics”. The unusual alignment of the creation process, which had previously been attributed to extraordinarily high momentum transfer processes or to new, exotic particles, could be a strictly kinematical effect due to the conservation laws. The conservation of angular momentum in the creation of fast rotating strings leads to their co-planar decay. The problem of a quantitative description of the hadronization of such an object needs detailed knowledge of the nature of the string – chain, fireball or jet. Each of these words has its individual connotation and it has not yet been decided which (if any) describes the high energy particle production process.
The qualitative description of the relativistic rotating string could help to explain the phenomena of binocular and aligned events seen in some cosmic ray experiments.
###### Acknowledgements.
It is a pleasure to thank Prof. A.W. Wolfendale for a careful reading of the manuscript.
# Relativistic predictions of quasielastic proton-nucleus spin observables based on a complete Lorentz invariant representation of the NN scattering matrix
## I Introduction
In a previous paper we developed a relativistic plane wave model for studying medium modifications of the nucleon-nucleon (NN) interaction via complete sets of spin observables for quasielastic $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ and $`(\stackrel{}{p},\stackrel{}{n})`$ scattering . A systematic survey of the predictive power of the latter model compared to experimental data will be presented in this paper.
The main aspect of our model is the use of a general Lorentz invariant representation of the NN scattering matrix referred to as the IA2 representation. This complete expansion of the interaction matrix contains 44 independent invariant amplitudes consistent with parity and time-reversal invariance as well as charge symmetry together with the on-mass-shell condition for the external nucleons . Five of the 44 amplitudes are determined from free NN scattering data and are therefore identical to the amplitudes employed in the previously-used five-term parameterization of the NN scattering matrix referred to as the IA1 representation. The remaining 39 amplitudes may be obtained via solution of the Bethe-Salpeter equation employing a one-boson exchange model (with pseudovector pion-nucleon coupling) for the NN interaction . The use of a complete set of NN amplitudes eliminates ambiguities inherent in the IA1 representation. The effect of the nuclear medium on the scattering wave functions is incorporated by replacing free nucleon masses in the Dirac spinors with smaller effective projectile and target nucleon masses within the context of the relativistic mean field approximation of Serot and Walecka . Experimental data on quasielastic spin observables suggest that nuclear shell effects are unimportant, and hence the target nucleus is treated as a non-interacting Fermi gas.
One of the great triumphs of Dirac phenomenology has been the successful prediction of the analyzing power for quasielastic $`{}_{}{}^{40}\text{Ca}(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ scattering at 500 MeV based on the IA1 representation of the NN interaction within the framework of a simple relativistic plane wave model . The latter success is achieved by replacing free nucleon masses with effective nucleon masses in the Dirac spinors, thus enhancing the lower components of the Dirac spinors and resulting in a reduction of the analyzing power relative to the value for free scattering: this reduction has been called a ”relativistic signature” since no mechanism has been found for its explanation within the framework of the conventional nonrelativistic Schrödinger equation. Despite the successful prediction of the analyzing power, however, the relativistic IA1-based model yields inconsistencies in the sense that quasielastic $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ and $`(\stackrel{}{p},\stackrel{}{n})`$ spin observables prefer different five-term representations of the NN scattering matrix. As already explained, a more rigorous and unambiguous approach must be based on the IA2 representation of the scattering matrix within the relativistic plane wave impulse approximation. In Ref. we showed that the inclusion of effective masses within the IA2 representation fails to reproduce the large quenching effect predicted by the IA1 representation for the $`{}_{}{}^{40}\text{Ca}(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ analyzing power at 500 MeV. Hence, we concluded that any large deviations of spin observables relative to the corresponding free values are merely artifacts of using an incorrect IA1 representation of the NN scattering matrix, and consequently other effects, such as distortions and multiple scattering, should be considered as possible candidates for reproducing the 500 MeV analyzing power within the IA2 representation.
The question now arises as to how IA2-based predictions compare to data at energies lower than 500 MeV for a range of scattering angles, and how they compare to the corresponding IA1-based predictions. In principle all calculations should be based on the more rigorous IA2 representation; however, for comparison to previous predictions, the IA1-based calculations are included. In addressing the above questions, we attempt to fully understand the role of effective-mass-type medium effects on spin observables before attempting to incorporate additional effects into our relativistic model. The aim of this paper, therefore, is to perform a systematic study of the predictive power of the IA2-based model compared to the published quasielastic polarization data listed in Table I. The following questions will also be addressed:
* How successful is the effective mass concept, inherent to Dirac phenomenology, in describing quasielastic $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ and $`(\stackrel{}{p},\stackrel{}{n})`$ scattering data?
* How do numerical results based on the IA2 representation of the NN scattering matrix compare to those utilizing the incomplete (and therefore ambiguous) IA1 representation?
In Sec. II the sensitivity of complete sets of quasielastic $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ and $`(\stackrel{}{p},\stackrel{}{n})`$ spin observables is investigated with respect to a range of different effective projectile and target nucleon masses for both IA1- and IA2-based models. In addition, calculations based on optimal combinations of effective projectile and target nucleon masses are also compared to spin observable data at the centroid of the quasielastic peak. Our main conclusions are presented in Sec. III.
## II Sensitivity of spin observables to effective masses
In Ref. it was shown that an IA2-based prediction fails to reproduce the <sup>40</sup>Ca$`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ analyzing power at an incident energy of 500 MeV. In order to give an initial feeling for the predictive power of our model, the latter reference employed values of the effective nucleon masses which were theoretically extracted by Hillhouse and De Kock . However, the question arises as to whether other combinations of physically acceptable effective projectile and target nucleon masses exist, which provide a better description of the analyzing power. Furthermore, one can also ask whether the latter combination still provides a good description of all the other spin observables, and if not, whether one can find a combination of physically acceptable effective masses which reproduce a complete set of spin observables.
Table I lists all the reactions for which calculations are done. In this paper we only present the results for the <sup>40</sup>Ca target since this is representative of the results which were obtained for all the other target nuclei. Results for the last four reactions can be found in Ref. . Complete sets of spin observable data exist for all the energies and targets used, except <sup>40</sup>Ca$`(\stackrel{}{p},\stackrel{}{n})`$ at $`T_{lab}=\mathrm{\hspace{0.33em}495}`$ MeV for which no analyzing power data are available and <sup>40</sup>Ca $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ at $`T_{lab}=\mathrm{\hspace{0.33em}200}`$ MeV for which only $`A_y`$ and $`D_{nn}`$ data are available. The reaction <sup>40</sup>Ca$`(\stackrel{}{p},\stackrel{}{n})`$ at $`T_{lab}=\mathrm{\hspace{0.33em}495}`$ MeV is included since data exist at two different laboratory scattering angles and furthermore it is complementary to the reaction <sup>40</sup>Ca$`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ at $`T_{lab}=\mathrm{\hspace{0.33em}500}`$ MeV. The $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ data at $`T_{lab}=\mathrm{\hspace{0.33em}200}`$ MeV are complementary to the $`(\stackrel{}{p},\stackrel{}{n})`$ data at $`T_{lab}=\mathrm{\hspace{0.33em}200}`$ MeV and are therefore also included.
### A Effective mass bands
To answer the above questions, we introduce in this section the concept of an effective mass band, which serves to demonstrate the sensitivity of spin observables to different combinations of effective masses for projectile and target nucleons for both IA1- and IA2-based models. In principle the effective masses can be calculated theoretically following a procedure similar to that outlined in Ref. ; however, the effective masses are now considered as free parameters which are varied, in step sizes of 0.01, over the following range of physically acceptable values:
$`(0.50;0.50)`$ $`\le `$ $`({\displaystyle \frac{M_1}{M}};{\displaystyle \frac{M_2}{M}})`$ $`\le `$ $`(1.0;1.0).`$ (1)
$`M`$ denotes the free nucleon mass, and $`M_1`$ and $`M_2`$ the effective projectile and target nucleon masses respectively. The lower limit of 0.50 corresponds to the effective nucleon mass in infinite nuclear matter . For the purpose of this exercise we focus on values of the spin observables at an excitation energy corresponding to the centroid of the quasielastic peak in the unpolarized inclusive excitation spectrum. For different laboratory scattering angles empirical data for quasielastic spin observables are relatively constant as a function of nuclear excitation energy at the momentum transfers of interest ($`|\stackrel{}{q}|>\mathrm{\hspace{0.17em}0.5}`$ fm <sup>-1</sup>). Hence the trends displayed by observables at the quasielastic peak will be representative of the behavior of spin observables as a function of the energy transferred to the nucleus.
We now introduce the concept of an effective mass band for a particular reaction at a fixed incident energy as a function of laboratory scattering angle. Let $`D_{i^{}j}(\omega ,\theta _{lab},\frac{M_1}{M},\frac{M_2}{M})`$ denote a particular spin observable from the complete set $`\{A_y,D_{\mathrm{}^{}\mathrm{}},D_{s^{}s},D_\mathrm{}^{}s,D_s^{}\mathrm{},D_{nn}\}`$ with $`D_{0n}A_y`$, where $`\omega `$ is the energy transferred to the nucleus and $`\theta _{lab}`$ is the laboratory scattering angle. For the IA2-based model, the procedure for calculating quasielastic spin observables is outlined in Ref. . For the IA1 representation of the NN scattering matrix we employ the phenomenological Horowitz-Love-Franey model with pseudovector pion-nucleon coupling as explained in Ref. . In order to do the IA1 calculations, new Horowitz-Love-Franey parameters were generated for the energy range of 80 to 195 MeV in steps of 5 MeV , and for laboratory energies higher than 200 MeV we employed the Maxwell parameterization of the NN amplitudes .
In order to generate the effective mass bands, the spin observables are first calculated as a function of $`\omega `$ (for fixed $`\theta _{lab}`$), and then the value of the particular spin observable is extracted at the quasielastic peak, i.e.
$`D_{i^{}j}^{(peak)}(\theta _{lab},{\displaystyle \frac{M_1}{M}},{\displaystyle \frac{M_2}{M}})`$ $`=`$ $`D_{i^{}j}(\omega =\omega _{peak},\theta _{lab},{\displaystyle \frac{M_1}{M}},{\displaystyle \frac{M_2}{M}}).`$ (2)
where $`\omega _{peak}`$ is the experimental value of the energy transfer associated with the centroid of the quasielastic peak. For a fixed $`\theta _{lab}`$, each spin observable is calculated successively for each of the different effective mass combinations in Eq. (1). This is repeated for $`10^{\circ }\le \theta _{lab}\le 60^{\circ }`$ and therefore each effective mass combination generates a curve as a function of $`\theta _{lab}`$. Instead of plotting all the different curves on one graph, we calculate, for a fixed $`\theta _{lab}`$, the minimum and maximum values for a particular spin observable:
$`\left(D_{i^{}j}^{(peak)}\right)_{min}(\theta _{lab})`$ $`=`$ $`\mathrm{Min}[D_{i^{}j}^{(peak)}(\theta _{lab},1.0;0.5);D_{i^{}j}^{(peak)}(\theta _{lab},1.0;0.6);\mathrm{}D_{i^{}j}^{(peak)}(\theta _{lab},1.0;1.0)]`$ (3)
$`\left(D_{i^{}j}^{(peak)}\right)_{max}(\theta _{lab})`$ $`=`$ $`\mathrm{Max}[D_{i^{}j}^{(peak)}(\theta _{lab},1.0;0.5);D_{i^{}j}^{(peak)}(\theta _{lab},1.0;0.6);\mathrm{}D_{i^{}j}^{(peak)}(\theta _{lab},1.0;1.0)].`$ (4)
As $`\theta _{lab}`$ varies between $`10^{\circ }`$ and $`60^{\circ }`$, $`\left(D_{i^{}j}^{(peak)}\right)_{min}(\theta _{lab})`$ traces out a lower curve and $`\left(D_{i^{}j}^{(peak)}\right)_{max}(\theta _{lab})`$ traces out an upper curve on the graph. All effective mass combinations given by Eq. (1) lie between these limits, and this (as a function of scattering angle) forms an effective mass band for each spin observable. Effective mass bands for both IA1 and IA2 representations of the relativistic NN scattering matrix are presented in Figs. 1 to 4 for $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ and $`(\stackrel{}{p},\stackrel{}{n})`$ scattering from a <sup>40</sup>Ca nucleus at incident energies of 200 and 500 MeV. Similar figures for the other reactions listed in Table I can be found in Ref. . The energy range is chosen to correspond to polarized proton energies of interest to experimental programs at facilities such as the National Accelerator Centre (Faure, South Africa) and The Research Center for Nuclear Physics (Osaka, Japan). The IA1- and IA2-based effective mass bands are denoted by the straight-line-hatch and dotted-hatch patterns respectively. The solid circles represent the experimental values extracted at the quasielastic peak for a specific laboratory scattering angle: the data are taken from references cited in Table I.
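The band construction of Eqs. (2)–(4) is simply a brute-force sweep over the effective mass grid. A minimal sketch of the bookkeeping is given below, assuming a placeholder routine `spin_observable_at_peak` that stands in for the full IA1- or IA2-based plane wave calculation (the placeholder name and interface are ours, not part of the original model):

```python
import numpy as np

def spin_observable_at_peak(theta_lab, m1, m2):
    """Placeholder for D_{i'j}^{(peak)}(theta_lab, M1/M, M2/M) of Eq. (2);
    in practice this is the full RPWIA calculation, not reproduced here."""
    raise NotImplementedError

def effective_mass_band(theta_grid, step=0.01):
    """Lower and upper envelopes, Eqs. (3) and (4), over all effective
    mass combinations 0.50 <= M1/M, M2/M <= 1.0 of Eq. (1)."""
    masses = np.arange(0.50, 1.0 + step / 2, step)
    lower, upper = [], []
    for theta in theta_grid:
        values = [spin_observable_at_peak(theta, m1, m2)
                  for m1 in masses for m2 in masses]
        lower.append(min(values))   # Eq. (3)
        upper.append(max(values))   # Eq. (4)
    return np.array(lower), np.array(upper)

# e.g. lower, upper = effective_mass_band(np.arange(10.0, 61.0, 1.0))
```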
The effective mass bands for the different reactions in Figs. 1 to 4 are self-explanatory: if a data point falls outside a band, it means that no effective mass combination can describe that particular point; rather, one must consider other effects such as distortions, multiple scattering or recoil effects in an attempt to reproduce the data. The width of a band also gives an indication of the expected medium effect on a particular spin observable: if the band is wide, then this spin observable is sensitive to a variation in effective masses and it may exhibit a large deviation from the free mass calculation, i.e. a large medium effect; vice versa if the band is very narrow. The advantage of the effective mass band plots is that they immediately give an indication of whether a particular spin observable can be described via the concept of an effective mass.
Although Figs. 1 to 4 speak for themselves, we briefly highlight the main results. For both $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ and $`(\stackrel{}{p},\stackrel{}{n})`$ scattering the IA1 bands are broader than the IA2 bands, indicating that the IA1 representation severely overestimates the role of effective-mass-type medium effects for quasielastic scattering. In addition, as the energy is lowered, the IA1 bands become broader for $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ scattering. For $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ scattering at 200 MeV (Fig. 2) both representations fail to describe $`A_y`$ and $`D_{n^{}n}`$, indicating that other effects (other than effective-mass-type effects) may play a more important role at low incident energies. Note that for $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ scattering at both 200 and 500 MeV (Figs. 1 and 2) the IA2-based model fails to reproduce the $`A_y`$ and $`D_{nn}`$ data. Fig. 4 for $`(\stackrel{}{p},\stackrel{}{n})`$ scattering at 200 MeV clearly illustrates the danger of interpreting medium effects within the IA1 representation: the band for the ambiguous IA1 representation includes the data points for both $`D_{s^{}l}`$ and $`D_{l^{}s}`$ spin observables, whereas the more rigorous IA2-based band excludes these data points.
### B Optimal effective mass combinations
Next we extract that combination of effective projectile and target nucleon masses which best describes a complete set of spin observables for a range of scattering angles at a fixed incident energy. The systematics of these so-called optimal effective masses is studied for both IA1- and IA2-based models and also compared to values calculated from empirical scalar potentials in an eikonal approximation .
We start by defining:
$`\mathrm{\Delta }({\displaystyle \frac{M_1}{M}},{\displaystyle \frac{M_2}{M}})`$ $`=`$ $`{\displaystyle \sum _{i=1}^{n_1}}{\displaystyle \sum _{j=1}^{n_2}}\left(w_{theory}^{(j)}(\theta _i)-w_{expr}^{(j)}(\theta _i)\right)^2`$ (5)
where $`w_{theory}^{(j)}(\theta _i)`$ is the theoretical value of the spin observable evaluated at the laboratory scattering angle $`\theta _i`$ at which the experimental data are available. Similarly $`w_{expr}^{(j)}(\theta _i)`$ is the experimental value of the spin observable. $`n_1`$ and $`n_2`$ denote the number of laboratory scattering angles at which data exist and the number of spin observables which were measured, respectively. For example, for the reaction <sup>40</sup>Ca$`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ at $`T_{lab}=\mathrm{\hspace{0.33em}500}`$ MeV, $`n_1=\mathrm{\hspace{0.33em}1}`$ (data measured only at one angle) , $`n_2=\mathrm{\hspace{0.33em}6}`$ ($`A_y`$, $`D_\mathrm{}^{},\mathrm{}`$, $`D_{s^{}s}`$, $`D_{\mathrm{}^{},s}`$, $`D_{s^{},\mathrm{}}`$ and $`D_{nn}`$) and $`\theta _i=\mathrm{\hspace{0.33em}19}^{\circ }`$. Formulae for the calculation of $`w_{theory}^{(j)}(\theta _i)`$ can be found in Ref. .
The optimal set for a particular reaction is defined as that combination of effective masses for which $`\mathrm{\Delta }`$ is a minimum, i.e. it is that combination of effective masses which best describes all the spin observable data for a particular reaction at a particular energy. Table II displays the optimal effective mass combinations for the various reactions in Table I. For the second reaction in Table I (<sup>40</sup>Ca$`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ at $`T_{lab}=\mathrm{\hspace{0.33em}200}`$ MeV ) no optimal masses are listed in Table II as there were no data on complete sets of observables from which to extract them. For comparison Table II also displays the effective mass values calculated in Ref. . Generally one sees that, for both IA1 and IA2-based models, the values of the optimal effective masses agree to within 20% with the corresponding theoretical values. In addition the optimal effective masses do not exhibit a systematic behavior with respect to target mass and incident energy indicating that one cannot impose a pure plane wave model on quasielastic scattering. Additional effects must be included in a more sophisticated model.
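Extracting the optimal combination then amounts to a grid search for the minimum of $`\mathrm{\Delta }`$ in Eq. (5). A sketch of this search is shown below; `spin_observable` is again a hypothetical placeholder for the model calculation of $`w_{theory}^{(j)}(\theta _i)`$:

```python
import numpy as np

def spin_observable(label, theta_lab, m1, m2):
    """Placeholder for w_theory^(j)(theta_i) at given M1/M, M2/M."""
    raise NotImplementedError

def delta(m1, m2, data):
    """Eq. (5): data is a list of (observable_label, theta_i, w_expr)."""
    return sum((spin_observable(label, theta, m1, m2) - w_expr) ** 2
               for label, theta, w_expr in data)

def optimal_masses(data, step=0.01):
    """Effective mass pair minimizing Delta over the grid of Eq. (1)."""
    masses = np.arange(0.50, 1.0 + step / 2, step)
    return min(((m1, m2) for m1 in masses for m2 in masses),
               key=lambda pair: delta(pair[0], pair[1], data))
```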
In Figs. 1, 3 and 4 we also compare IA1- and IA2-based predictions of spin observables based on the optimal effective masses listed in Table II. The solid and dashed lines denote the IA2 and IA1 predictions respectively. Deviations of the spin observables from the free mass values (long-dash-short-dash) serve as an indication of the importance of effective-mass-type nuclear medium effects for quasielastic scattering. Generally one sees that both optimal IA1 and IA2 predictions are very close to the free mass calculations indicating the insensitivity of quasielastic spin observables to effective-mass-type medium effects.
It is convenient to consider the spin observables in three different groups. Firstly, the spin observables $`D_{\mathrm{}^{}\mathrm{}},D_{s^{}s},D_s^{}\mathrm{}`$ and $`D_\mathrm{}^{}s`$. For the whole energy range between 200 and 500 MeV both IA1 and IA2 optimal effective masses provide an adequate description at the quasielastic peak. For the $`(\stackrel{}{p},\stackrel{}{n})`$ observables the description is not as good as for the $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ observables.
Next we focus on $`D_{nn}`$. The description of $`D_{nn}`$ becomes problematic for both $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ and $`(\stackrel{}{p},\stackrel{}{n})`$ scattering as the energy is lowered. For the $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ reaction the data point shifts away from the effective mass band as the energy is lowered, while for the $`(\stackrel{}{p},\stackrel{}{n})`$ reaction the theoretical calculation exhibits an oscillatory motion at 495 MeV which causes it to miss the data. At 200 MeV there is still a variation with respect to laboratory scattering angle in the theoretical calculation whereas the data are quite flat. A possible explanation for the latter discrepancy is the exclusion of distortions and recoil effects in our model.
Lastly, the analyzing power $`A_y`$ is considered. In the IA2 representation of the NN scattering matrix the optimal effective mass set does not provide a good description of the $`A_y`$ data at the quasielastic peak for the reaction $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ at 500 MeV. (It may even be better described by some other specially chosen, but realistic pair of effective masses.) Furthermore, as the energy is lowered, the $`A_y`$ data point shifts away from the effective mass band. The $`(\stackrel{}{p},\stackrel{}{n})`$ data for $`A_y`$ are, however, much better described by the optimal IA2 set.
The failure of the IA2-based model to predict $`A_y`$ and $`D_{nn}`$ for $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ scattering at 200 MeV calls for a more sophisticated treatment of nuclear distortions and recoil effects. To this end we have developed a relativistic distorted wave model for quasielastic scattering ; numerical results will be presented in a future paper. Furthermore, since distortions play a more prominent role at low energies, the measurement of a complete set of $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ spin observables at 200 MeV will be extremely useful for checking the validity of our distorted wave model. The latter measurements will also complement the existing $`(\stackrel{}{p},\stackrel{}{n})`$ data measured at the Indiana University Cyclotron Facility .
Calculations have been performed for all the reactions listed in Table I as a function of energy transferred to the nucleus: these results are available from the authors on request. Conclusions based on the latter are consistent with the present investigation at the centroid of the quasielastic peak.
## III Conclusion
In this investigation effective projectile and target nucleon masses were treated as free parameters and it was found that no effective mass combination could describe both $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ and $`(\stackrel{}{p},\stackrel{}{n})`$ scattering observables. Even though the IA2 treatment of medium effects (within the RPWIA framework) is the most advanced to date, it still fails to describe all observables; the glaring example being the prediction of $`A_y`$ for $`(\stackrel{}{p},\stackrel{}{p}^{^{}})`$ scattering as the energy is lowered from 500 MeV to 200 MeV. In general it is seen that IA2-based effective-mass predictions are close to the corresponding free values, whereas the ambiguous IA1 representation severely overestimates the importance of effective-mass-type medium effects. Despite the successes of the Walecka model effective mass concept within the relativistic plane wave impulse approximation, the theoretical work should now start to include additional effects like multiple scattering, recoil effects and distortions of the projectile. A relativistic distorted wave model (initially employing the IA1 representation of the NN scattering matrix) has been presented in Ref. , but still needs to be implemented numerically.
###### Acknowledgements.
The authors wish to thank Professor S.J. Wallace (University of Maryland, USA) for providing the IA2 invariant amplitudes used in the present calculations. The financial assistance to B.I.S.v.d.V by the Harry Crossley Foundation, the South African FRD and the National Accelerator Centre is gratefully acknowledged.
## 1 Introduction
The possibility of radiation from superluminal sources was first considered by Heaviside in 1888 . He considered this topic many times over the next 20 years, deriving most of the formalism of what is now called Čerenkov radiation. However, despite being an early proponent of the concept of a velocity-dependent electromagnetic mass, Heaviside never acknowledged the limitation that massive particles must have velocities less than that of light. Consequently many of his pioneering efforts (and those of his immediate followers, Des Coudres and Sommerfeld ), were largely ignored, and the realizable case of radiation from a charge with velocity greater than the speed of light in a dielectric medium was discovered independently in an experiment in 1934 .
In an insightful discussion of the theory of Čerenkov radiation, Tamm revealed its close connection with what is now called transition radiation, i.e., radiation emitted by a charge in uniform motion that crosses a boundary between metallic or dielectric media. The present paper was inspired by a work of Bolotovskii and Ginzburg on how aggregates of particles can act to produce motion that has superluminal aspects and that there should be corresponding Čerenkov-like radiation in the case of charged particles. The classic example of aggregate superluminal motion is the velocity of the point of intersection of a pair of scissors whose tips approach one another at a velocity close to that of light.
Here we consider the example of a “sweeping” electron beam in a high-speed analog oscilloscope such as the Tektronix 7104 . In this device the “writing speed”, the velocity of the beam spot across the faceplate of the oscilloscope, can exceed the speed of light. The transition radiation emitted by the beam electrons just before they disappear into the faceplate has the character of Čerenkov radiation from the superluminal beam spot, according to the inverse of the argument of Tamm.
## 2 Model Calculation
As a simple model suppose a line of charge moves in the $`y`$ direction with velocity $`u\ll c`$, where $`c`$ is the speed of light, but has a slope such that the intercept with the $`x`$ axis moves with velocity $`v>c`$. See Figure 1a. If the region $`y<0`$ is occupied by, say, a metal the charges will emit transition radiation as they disappear into the metal’s surface. Interference among the radiation from the various charges then leads to a strong peak in the radiation pattern at angle $`\mathrm{cos}\theta =c/v`$, which is the Čerenkov effect of the superluminal source.
To calculate the radiation spectrum we use equation (14.70) from the textbook of Jackson :
$$\frac{dU}{d\omega d\mathrm{\Omega }}=\frac{\omega ^2}{4\pi ^2c^3}\left|\int dt\int d^3r\,\widehat{𝐧}\times 𝐣(𝐫,t)\,e^{i\omega (t-\widehat{𝐧}\cdot 𝐫/c)}\right|^2,$$
(1)
where $`dU`$ is the radiated energy in angular frequency interval $`d\omega `$ emitting into solid angle $`d\mathrm{\Omega }`$, $`𝐣`$ is the source current density, and $`\widehat{𝐧}`$ is a unit vector towards the observer.
The line of charge has equation
$$y=\frac{u}{v}x-ut,\qquad z=0,$$
(2)
so the current density is
$$𝐣=-\widehat{𝐲}Ne\,\delta (z)\delta \left(t-\frac{x}{v}+\frac{y}{u}\right),$$
(3)
where $`N`$ is the number of electrons per unit length intercepting the $`x`$ axis, and $`e<0`$ is the electron’s charge.
We also consider the effect of the image current,
$$𝐣_{\mathrm{image}}=+\widehat{𝐲}(-Ne)\,\delta (z)\delta \left(t-\frac{x}{v}-\frac{y}{u}\right).$$
(4)
We will find that to a good approximation the image current just doubles the amplitude of the radiation. For $`u\sim c`$ the image current would be related to the retarded fields of the electron beam, but we avoid this complication when $`u\ll c`$. Note that the true current exists only for $`y>0`$, while the image current applies only for $`y<0`$.
We integrate using rectangular coordinates, with components of the unit vector $`𝐧`$ given by
$$n_x=\mathrm{cos}\theta ,\qquad n_y=\mathrm{sin}\theta \mathrm{cos}\varphi ,\qquad \mathrm{and}\qquad n_z=\mathrm{sin}\theta \mathrm{sin}\varphi ,$$
(5)
as indicated in Fig. 1b. The current impinges only on a length $`L`$ along the $`x`$ axis. The integrals are elementary and we find, noting $`\omega /c=2\pi /\lambda `$,
$$\frac{dU}{d\omega d\mathrm{\Omega }}=\frac{e^2N^2L^2}{\pi ^2c}\frac{u^2}{c^2}\frac{\mathrm{cos}^2\theta +\mathrm{sin}^2\theta \mathrm{sin}^2\varphi }{\left(1-\frac{u^2}{c^2}\mathrm{sin}^2\theta \mathrm{cos}^2\varphi \right)^2}\left(\frac{\mathrm{sin}\left[\frac{\pi L}{\lambda }\left(\frac{c}{v}-\mathrm{cos}\theta \right)\right]}{\frac{\pi L}{\lambda }\left(\frac{c}{v}-\mathrm{cos}\theta \right)}\right)^2.$$
(6)
The factor of form $`\mathrm{sin}^2\chi /\chi ^2`$ appears from the $`x`$ integration, and indicates that this leads to a single-slit interference pattern.
We will only consider the case $`u\ll c`$, so from now on we approximate the factor $`1-\frac{u^2}{c^2}\mathrm{sin}^2\theta \mathrm{cos}^2\varphi `$ by 1.
Upon integration over the azimuthal angle $`\varphi `$ from $`-\pi /2`$ to $`\pi /2`$ the factor $`\mathrm{cos}^2\theta +\mathrm{sin}^2\theta \mathrm{sin}^2\varphi `$ becomes $`\frac{\pi }{2}(1+\mathrm{cos}^2\theta )`$.
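This azimuthal integral is easy to verify symbolically; a quick check (a convenience sketch, not part of the original derivation):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
integrand = sp.cos(theta)**2 + sp.sin(theta)**2 * sp.sin(phi)**2
result = sp.integrate(integrand, (phi, -sp.pi/2, sp.pi/2))
# difference from (pi/2)(1 + cos^2(theta)) simplifies to zero
print(sp.simplify(result - sp.pi/2 * (1 + sp.cos(theta)**2)))  # -> 0
```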
It is instructive to replace the radiated energy by the number of radiated photons: $`dU=\mathrm{\hbar }\omega \,dN_\omega `$. Thus
$$\frac{dN_\omega }{d\mathrm{cos}\theta }=\frac{\alpha }{2\pi }\frac{d\omega }{\omega }N^2L^2\frac{u^2}{c^2}(1+\mathrm{cos}^2\theta )\left(\frac{\mathrm{sin}\left[\frac{\pi L}{\lambda }\left(\frac{c}{v}-\mathrm{cos}\theta \right)\right]}{\frac{\pi L}{\lambda }\left(\frac{c}{v}-\mathrm{cos}\theta \right)}\right)^2,$$
(7)
where $`\alpha =e^2/\mathrm{\hbar }c\approx 1/137`$. This result applies whether $`v<c`$ or $`v>c`$. But for $`v<c`$, the argument $`\chi =\frac{\pi L}{\lambda }\left(\frac{c}{v}-\mathrm{cos}\theta \right)`$ can never become zero, and the diffraction pattern never achieves a principal maximum. The radiation pattern remains a slightly skewed type of transition radiation. However, for $`v>c`$ we can have $`\chi =0`$, and the radiation pattern has a large spike at angle $`\theta _{\stackrel{ˇ}{\mathrm{C}}}`$ such that
$$\mathrm{cos}\theta _{\stackrel{ˇ}{\mathrm{C}}}=\frac{c}{v},$$
which we identify with Čerenkov radiation. Of course the side lobes are still present, but not very prominent.
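To make the contrast between $`v<c`$ and $`v>c`$ concrete, the angular shape of Eq. (7) can be evaluated numerically. The sketch below drops the overall normalization (only the shape matters here); $`L/\lambda =10`$ and $`v=1.33c`$ are the values used in the numerical example later on, while the subluminal value $`v=0.75c`$ is an arbitrary choice for comparison:

```python
import numpy as np

def angular_shape(cos_theta, v_over_c, L_over_lambda):
    """Angular dependence of Eq. (7), up to an overall constant."""
    chi = np.pi * L_over_lambda * (1.0 / v_over_c - cos_theta)
    return (1 + cos_theta**2) * np.sinc(chi / np.pi) ** 2  # sinc(x) = sin(pi x)/(pi x)

cth = np.linspace(-1.0, 1.0, 2001)
for v in (0.75, 1.33):   # subluminal (arbitrary) vs superluminal spot
    shape = angular_shape(cth, v, L_over_lambda=10)
    print(f"v/c = {v}: pattern peaks at cos(theta) = {cth[np.argmax(shape)]:.3f}")
# for v/c = 1.33 the peak lies at cos(theta) ~ c/v = 0.75, the Cherenkov angle;
# for v/c = 0.75 chi never vanishes and no principal maximum forms
```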
## 3 Discussion
The present analysis suggests that Čerenkov radiation is not really distinct from transition radiation, but is rather a special feature of the transition radiation pattern which emerges under certain circumstances. This viewpoint actually is relevant to Čerenkov radiation in any real device which has a finite path length for the radiating charge. The walls which define the path length are sources of transition radiation which is always present even when the Čerenkov condition is not satisfied. When the Čerenkov condition is satisfied, the so-called formation length for transition radiation becomes longer than the device, and the Čerenkov radiation can be thought of as an interference effect.
If $`L/\lambda \gg 1`$, then the radiation pattern is very sharply peaked about the Čerenkov angle, and we may integrate over $`\theta `$ noting
$$d\chi =-\frac{\pi L}{\lambda }d\mathrm{cos}\theta \qquad \mathrm{and}\qquad \int _{-\mathrm{\infty }}^{\mathrm{\infty }}d\chi \frac{\mathrm{sin}^2\chi }{\chi ^2}=\pi $$
(8)
to find
$$dN_\omega \approx \frac{\alpha }{2\pi }(N\lambda )^2\frac{d\omega }{\omega }\frac{L}{\lambda }\frac{u^2}{c^2}\left(1+\frac{c^2}{v^2}\right).$$
(9)
In this we have replaced $`\mathrm{cos}^2\theta `$ by $`c^2/v^2`$ in the vicinity of the Čerenkov angle. We have also extended the limits of integration on $`\chi `$ to $`[-\mathrm{\infty },\mathrm{\infty }]`$. This is not a good approximation for $`v<c`$, in which case $`\chi >0`$ always and $`dN_\omega `$ is much less than stated. For $`v=c`$ the radiation rate is still about one half of the above estimate.
For comparison, the expression for the number of photons radiated in the ordinary Čerenkov effect is
$$dN_\omega \approx 2\pi \alpha \frac{d\omega }{\omega }\frac{L}{\lambda }\mathrm{sin}^2\theta _{\stackrel{ˇ}{\mathrm{C}}}.$$
(10)
The ordinary Čerenkov effect vanishes as $`\theta _{\stackrel{ˇ}{\mathrm{C}}}^2`$ near the threshold, but the superluminal effect does not. This is related to the fact that at threshold ordinary Čerenkov radiation is emitted at small angles to the electron’s direction, while in the superluminal case the radiation is at right angles to the electron’s motion. In this respect the moving spot on an oscilloscope is not fully equivalent to a single charge as the source of the Čerenkov radiation.
In the discussion thus far we have assumed that the electron beam is well described by a uniform line of charge. In practice the beam is discrete, with fluctuations in the spacing and energy of the electrons. If these fluctuations are too large we cannot expect the transition radiation from the various electrons to superimpose coherently to produce the Čerenkov radiation. Roughly, there will be almost no coherence for wavelengths smaller than the actual spot size of the electron beam at the metal surface. Thus there will be a cutoff at high frequencies which serves to limit the total radiated energy to a finite amount, whereas the expression derived above is formally divergent. Similarly the effect will be quite weak unless the beam current is large enough that $`N\lambda \gg 1`$.
We close with a numerical example inspired by a possible experiment. A realistic spot size for the beam is 0.3 mm, so we must detect radiation at longer wavelengths. A convenient choice is $`\lambda =3`$ mm, for which commercial microwave receivers exist. The bandwidth of a candidate receiver is $`d\omega /\omega =0.02`$ centered at 88 GHz. We take $`L=3`$ cm, so $`L/\lambda =10`$ and the Čerenkov ‘cone’ will actually be about $`5^{\circ }`$ wide, which happens to match the angular resolution of the microwave receiver. Supposing the electron beam energy to be 2.5 keV, we would have $`u^2/c^2=0.01`$. The velocity of the moving spot is taken as $`v=1.33c=4\times 10^{10}`$ cm/sec, so the observation angle is $`41^{\circ }`$. If the electron beam current is 1 $`\mu `$A then the number of electrons deposited per cm along the metal surface is $`N\approx 150`$, and $`N\lambda \approx 45`$.
Inserting these parameters into the rate formula we expect about $`7\times 10^{-3}`$ detected photons from a single sweep of the electron beam. This supposes we can collect over all azimuth $`\varphi `$, which would require some suitable optics. The electron beam will actually be swept at about 1 GHz, so we can collect about $`7\times 10^6`$ photons per second. The corresponding signal power is $`2.6\times 10^{-25}`$ Watts/Hz, whose equivalent noise temperature is about 20 mK. This must be distinguished from the background of thermal radiation, the main source of which is in the receiver itself, whose noise temperature is about 100 K . A lock-in amplifier could be used to extract the weak periodic signal; an integration time of a few minutes of the 1-GHz-repetition-rate signal would suffice, assuming 100% collection efficiency.
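A short numerical check of this estimate, evaluating Eq. (9) with the parameters quoted above (all input values are those stated in the text):

```python
import numpy as np

alpha = 1 / 137.0
N_per_cm   = 150     # electrons per cm along the surface (1 uA beam)
lam_cm     = 0.3     # 3 mm observation wavelength
L_cm       = 3.0     # length of the swept region
dw_over_w  = 0.02    # receiver bandwidth
u2_over_c2 = 0.01    # 2.5 keV beam electrons
v_over_c   = 1.33    # spot velocity

N_lambda = N_per_cm * lam_cm                    # ~45, coherence parameter
dN = (alpha / (2 * np.pi)) * N_lambda**2 * dw_over_w \
     * (L_cm / lam_cm) * u2_over_c2 * (1 + 1 / v_over_c**2)
print(f"photons per sweep ~ {dN:.1e}")          # ~7e-3

sweep_rate = 1e9                                # 1 GHz sweep repetition rate
print(f"photons per second ~ {dN * sweep_rate:.1e}")   # ~7e6
```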
Realization of such an experiment with a Tektronix 7104 oscilloscope would require a custom cathode ray tube that permits collection of microwave radiation through a portion of the wall not coated with the usual metallic shielding layer .
## 4 Appendix: Bremsstrahlung
Early reports of observation of transition radiation were considered by sceptics to be due to bremsstrahlung instead. The distinction in principle is that transition radiation is due to acceleration of charges in a medium in response to the far field of a uniformly moving charge, while bremsstrahlung is due to the acceleration of the moving charge in the near field of atomic nuclei. In practice both effects exist and can be separated by careful experiment.
Is bremsstrahlung stronger than transition radiation in the example considered here? As shown below the answer is no, but even if it were we would then expect a Čerenkov-like effect arising from the coherent bremsstrahlung of the electron beam as it hits the oscilloscope faceplate.
The angular distribution of bremsstrahlung from a nonrelativistic electron will be $`\mathrm{sin}^2\theta `$ with $`\theta `$ defined with respect to the direction of motion. The range of a 2.5-keV electron in, say, copper is about $`5\times 10^{-6}`$ cm while the skin depth at 88 GHz is about $`2.5\times 10^{-5}`$ cm. Hence the copper is essentially transparent to the backward hemisphere of bremsstrahlung radiation, which will emerge into the same half space as the transition radiation.
The amount of bremsstrahlung energy $`dU_B`$ emitted into energy interval $`dU`$ is just $`YdU`$ where $`Y`$ is the so-called bremsstrahlung yield factor. For 2.5-keV electrons in copper, $`Y=3\times 10^{-4}`$ . The number $`dN`$ of bremsstrahlung photons of energy $`\mathrm{\hbar }\omega `$ in a bandwidth $`d\omega /\omega `$ is then $`dN=dU_B/\mathrm{\hbar }\omega =Yd\omega /\omega `$. For the 2% bandwidth of our example, $`dN=6\times 10^{-6}`$ per beam electron. For a 3-cm-long target region there will be 500 beam electrons per sweep of the oscilloscope, for a total of $`3\times 10^{-3}`$ bremsstrahlung photons into a 2% bandwidth about 88 GHz. Half of these emerge from the faceplate as a background to $`7\times 10^{-3}`$ transition-radiation photons per sweep. Altogether, the bremsstrahlung contribution would be about 1/50 of the transition-radiation signal in the proposed experiment.
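The arithmetic of this appendix is summarized below; the final comparison against the transition-radiation signal within the detector's narrow polar acceptance is left aside, since it depends on the collection geometry:

```python
Y = 3e-4            # bremsstrahlung yield, 2.5 keV electrons in copper
dw_over_w = 0.02    # 2% receiver bandwidth
n_electrons = 500   # beam electrons per sweep over the 3 cm target region

dN_per_electron = Y * dw_over_w                   # ~6e-6 photons per electron
total_per_sweep = n_electrons * dN_per_electron   # ~3e-3 photons per sweep
emerging = 0.5 * total_per_sweep                  # backward hemisphere only
print(f"{dN_per_electron:.0e} per electron, "
      f"{total_per_sweep:.0e} per sweep, {emerging:.1e} emerging")
```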
# Self-organization, Localization of Shear Bands and Aging in Loose Granular Materials
## Abstract
We introduce a mesoscopic model for the formation and evolution of shear bands in loose granular media. Numerical simulations reveal that the system undergoes a non-trivial self-organization process which is governed by the motion of the shear band and the consequent restructuring of the material along it. High density regions are built up, progressively confining the shear bands to localized regions. This results in an inhomogeneous aging of the material with a very slow increase in the mean density, displaying an unusual glassy-like system-size dependence.
PACS numbers: 45.70.-n, 45.70.Mg, 05.65.+b
A large class of materials is handled in the form of dispersed solid grains at some stage of their processing. Thus the description of the rheological properties of suspensions, pastes and dry granular media is a key question which controls the ability to mix, store, transport, etc., these disperse media . Granular systems constitute an intermediate state of matter between fluids and solids: they flow like fluids, but they also build piles, indicating that a non-vanishing static shear stress is present, which is characteristic of solids. From this point of view it is also of major interest to understand the shearing process in these systems. A number of experiments have been carried out on the shear process in granular materials . Most of these are triaxial tests to determine macroscopic properties such as the shear stress or the volumetric strain, as a function of the shear strain.
The intimate interplay between the geometrical arrangements and the frictional properties of the grains determines the precise form of the rheological behavior to be used at a continuum level. The underlying question is the identification of the relevant internal variables. The most obvious one is the density of the sample, which can be made to vary over a wide range by the method of preparation. Compared to other parameters describing the texture (e.g. fabric tensors accounting for the distribution of contact orientations), the density has the most drastic impact on the stress needed to shear the material as well as on the mode of shearing: from an apparently homogeneous strain for loose packings to a localized steady shear band for dense assemblies . The coupling of the density to the shear properties can be understood through the concept of dilatancy .
A related question is whether statistical fluctuations have an impact on macroscopic properties. Lately, there has been an upsurge of interest in trying to characterize the large stress fluctuations in silos, Couette flow or slider block geometries, or to understand the statistics of interparticle contact forces . Recently, spectacular experiments in two-dimensional Couette shear cells were carried out where the movement and stress of individual particles were monitored in order to describe the inner structure and the force network in the sheared granular material. It was demonstrated that stationary motion is accompanied by large stress fluctuations due to the formation and breakdown of arches. Large fluctuations were also found in three dimensional steady state shear cells .
This issue has also been raised by the results of recent numerical simulations of rigid grain assemblies , where, even at low densities, the shearing, which appears homogeneous over long times, in fact consists of a succession of sudden changes of quasi-instantaneous and localized strain fields. This observation suggests that the transition from the particle based description to the continuum one requires a detailed understanding of the statistical features associated with these sudden changes.
In this Letter we present a simple model for the shearing of a granular medium in loose samples. We describe the strain field at every instant as a shear band, chosen through a global optimization procedure, which is equivalent, as we shall see later, to searching for the ground state of a directed polymer in a random potential . However, this potential is not a priori frozen in but has a self-organized development due to our procedure of choosing and changing the shear band. Though very simple and with only a minimum of ingredients, the model shows that the density of the medium increases anomalously slowly. Further, we are also able to predict, on the basis of this model, that large scale inhomogeneities build up in a system subject to a steady shear. This could be an interesting feature to compare with experiments.
Let us consider a shear process, assumed to be invariant along the shear direction ($`z`$ in Fig. 1). This geometry is appropriate, for instance, in an annular shear cell of large radius . We consider moreover a continuum description, valid on scales much larger than that of individual grains. We now introduce a fundamental assumption of our model: We assume that the instantaneous strain field is always localized on a single shear band . Experimentally, it is known that shear bands have a typical width of about ten grain diameters. Thus, at a continuum level, the velocity field is indeed discontinuous across the shear band. From the geometry of our set-up, the shear band must be a continuous surface due to topological constraints (Fig. 1). Further, we assume that because of the translational invariance along the $`z`$ axis, the system can be reduced to a two dimensional one in the $`x`$-$`y`$ (cross-section) plane, through an averaging over the $`z`$ direction.
The basic hypothesis of the localization of the shear on the shear band at all times, is not as restrictive as it may appear. We only refer here to instantaneous shear rates, and provided the shear band changes rapidly enough, coarse-graining the strain field in time may produce a uniform shear rate. Experimentally, though it is very difficult to have direct access to the instantaneous shear rate, large fluctuations found in the shear stress may indicate that the shear is never quite uniform, even at early times. As mentioned earlier, this seems indicated also by numerics .
Initially we consider a loose-packed sample. At a suitably coarse-grained scale the medium can be described as a continuum, where the density is a random function displaying fluctuations around a mean value. Under a constant normal load, a threshold shear force (or torque for an annular shear cell) has to be applied to impose a non-zero strain. Locally, after integration along the $`z`$ axis, the density controls the threshold shear force. Although this is inessential, for simplicity we assume that the ratio of shear to normal stress, i.e. the friction coefficient, increases linearly with density. As mentioned earlier, the texture of the medium also contributes to the friction coefficient. However, since we consider only shear in a fixed orientation, a single scalar parameter combining density and texture should suffice. This parameter is called “density” for short and is denoted by $`\varrho (x,y)`$. Thus at any time the state of the medium is characterized by this field.
We determine the shear band (path in the ($`x,y`$) plane) by the following three conditions: a) it is continuous, b) it spans the sample in the $`x`$ direction without overhangs and c) the sum of the density along it is minimal among all possible paths satisfying a) and b). One can recognize that this is the well known problem of finding the ground state of a directed polymer in a random potential .
Relative motion of the particles takes place within the shear band while the rest of the sample remains still. Small movements can totally rearrange the local structure and thus may induce large changes in the local density. We simplify this complex behavior by renewing the density $`\varrho `$ only along the shear band, by independent random values taken from a fixed distribution. After this, a new shear band is again located as described above. Thus the shear process consists of a succession of localized slips occurring at very small time scales. We note that in characterizing this process, in the spirit of a continuum modeling, we ignore potential stress inhomogeneities in the medium. It is a simplifying assumption of the model to relate the shear band localization only to the density, and not to the full solution of the local stress distribution.
In order to be able to simulate the above model we discretized it on a square lattice, either with principal axes parallel to $`x`$ and $`y`$ and considering first and second nearest neighbours, or tilted by $`45^\mathrm{o}`$ considering only nearest neighbours. Periodic boundary conditions are imposed in the $`y`$ direction. Simulations with site and bond versions were also carried out, leading essentially to the same results. We consider here square samples of size $`N\times N`$ with $`N`$ varying from $`32`$ to $`512`$. Initially a density $`\varrho _i`$ (a random number uniformly distributed between 0 and 1) is assigned to every bond $`i`$. We define the instantaneous shear band as the spanning directed path along which the sum of the $`\varrho _i`$ is minimal (applying the usual transfer matrix method). Once the shear band is found, the bonds belonging to it are assigned new values taken from the same uniform distribution as used initially. We repeat this process and monitor different properties of the system.
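The full update cycle is compact enough to state as code. The following is a minimal sketch of ours (Python; a site version in which the path advances one column in $`x`$ per step and may move to the three neighbouring rows, periodic in $`y`$; lattice size, seed, and number of slip events are illustrative choices):

```python
import numpy as np

def minimal_directed_path(rho):
    """Transfer-matrix (dynamic-programming) search for the directed path
    of minimal summed density spanning the sample in the x direction."""
    N = rho.shape[0]
    cost = rho[0].copy()                 # minimal accumulated density per row
    back = np.zeros((N, N), dtype=int)   # back-pointers for reconstruction
    for x in range(1, N):
        # candidate predecessors: rows y-1, y, y+1 (periodic in y)
        cand = np.stack([np.roll(cost, 1), cost, np.roll(cost, -1)])
        choice = cand.argmin(axis=0)
        back[x] = (np.arange(N) + choice - 1) % N
        cost = cand.min(axis=0) + rho[x]
    path = [int(cost.argmin())]          # end point of the minimal path
    for x in range(N - 1, 0, -1):
        path.append(int(back[x, path[-1]]))
    return path[::-1]                    # row index of the band, column by column

def shear_step(rho, rng):
    """One slip event: locate the shear band and renew its densities."""
    path = minimal_directed_path(rho)
    for x, y in enumerate(path):
        rho[x, y] = rng.random()         # fresh value from the same uniform law

rng = np.random.default_rng(0)
N = 64
rho = rng.random((N, N))
for _ in range(20 * N):                  # run for many slip events
    shear_step(rho, rng)
print("average density after shearing:", rho.mean())
```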
We define the average density $`\varrho `$ as the mean value of the density of the sites not belonging to the shear band. This definition, together with our procedure of always selecting the least dense path and refreshing it, guarantees that the average density is a monotonically increasing function of time.
The monotonic behavior and the bounded nature of the average density ($`\varrho \le 1`$) ensure that it has an asymptotic value. In finite samples this is equal to 1. In Fig. 2 we have plotted the deviation of the average density from this asymptotic value. At early times ($`t/N\lesssim 2`$) the rescaled curves collapse independently of the system size; later, non-trivial system-size effects can be observed. The relaxation to the asymptotic value gets slower as the system size increases.
Since the system evolves entirely through the process of choosing and changing the shear band, we have monitored the following two important quantities related to the shear band: the Hamming distance $`d`$ (the number of sites by which successive shear bands differ) (Fig. 3a) and the average density $`\varrho _{SB}`$ of the sites along the shear band before the change (Fig. 3b). It is apparent from the figure that there is a characteristic time $`t_{c1}\sim N`$, below which the distance is essentially constant and equal to the system size and the density of the shear band is roughly constant. This can be understood qualitatively from the following considerations. Since the very first shear band is equivalent to the ground-state conformation of a directed polymer in a random potential, we know from this analogy that the mean density along this shear band is much less than $`0.5`$. Once the path is refreshed, its mean density increases to $`0.5`$. The next shear band tends to be repelled by the previous one since there still exist many spanning paths with a lower density. Thus at early times, two successive shear bands differ completely (Fig. 3a) and the density of the shear band remains more or less the same (Fig. 3b). This initial phase should last until, on average, all sites have been refreshed a few times, a number of time steps of the order of $`N`$.
The absence of overlap between successive shear bands in this early-time regime reflects the fact that no well-defined shear band can be observed in loose granular media. Experimentally this is connected to the difficulty of quantifying fluctuations when the mean shear strain is of small magnitude. What is observed is thus a seemingly homogeneous shear.
There is a transition regime up to $`t_{c2}\sim 20N`$ where we still have a good quality data collapse. In this regime both curves, $`d`$ and $`r\equiv 0.5-\varrho _{SB}`$, start to fall off. The decreasing distance indicates an increasing persistence of the shear band. As the average density of the system increases (Fig. 2) the density of the minimal path also grows, and thus the repulsive interaction between two consecutive shear bands progressively fades away. Finally, by the end of the transition regime, the interaction becomes attractive and a much slower relaxation process takes place.
The above measurements point to a localization of the shear band, induced by the imposed dynamics. In order to understand better how this comes about we present density snapshots of the system at four different times (Fig. 4) varying from $`t/N\simeq 4`$ to $`4000`$. We observe that initially (Fig. 4a) the density appears homogeneously distributed. Then high-density regions progressively become apparent. The mechanism for the formation of these regions is the following: As the average density increases, the interaction between successive shear bands becomes attractive and the path gets restricted in space. Small fluctuations of the shear band then lead to a density increase in this region. The presence of these surrounding areas of high density increases the attraction of successive shear bands, thus leading to a positive feedback process resulting in regions of finite width and very high density where the shear band is trapped in the middle, in a “canyon-like” structure (black lines surrounded by white in Figs. 4c and d).
The escape from the above-described trap is only possible via a jump to another local minimum. The probability of such a jump decreases faster than exponentially with increasing density. Thus as time grows, the average jump size decreases even though large regions with relatively small densities remain. The progressive self-quenching of the shear band in the system is responsible for the anomalously slow increase in the average density. This inhomogeneous aging and extremely slow dynamics are reminiscent of glassy behavior.
In order to get some more insight into the slow dynamics of the system we have studied the same model on a hierarchical lattice. The simple geometry allows for a detailed analytic treatment of the model. This study will be reported elsewhere. Here we only summarize the main features of this analysis. The slow density increase and strong system-size dependence seen on the square lattice are also seen in the hierarchical one. Here we can show that $`1-\varrho `$ decreases as a sum of power laws with a vanishing exponent depending on the lattice size, i.e., the number of generations of the hierarchical lattice. Further, the early-time regime is a single function of $`t/N`$ as for the square lattice, while the late-time regime scales instead as $`t/N^\alpha `$, where $`\alpha =1/\mathrm{log}(2)`$.
In spite of its simplicity, the model we have introduced displays some interesting consequences of the collective organization of density fluctuations in a granular assembly. Although only time-independent rules are introduced, the simulations reveal a slow densification which occurs together with a non-trivial patterning of the density in the sample. Simultaneously, the shear strain is localized on shear bands which progressively acquire longer and longer persistence. The occurrence of high-density regions confining the shear band is a feature which should be observable using X-ray tomography, as recently performed in triaxial tests by Desrues et al.
Acknowledgment: This work was partially supported by OTKA T024004 and T029985.
# A quantum computer only needs one universe
## 1 Seven remarks
Background to remark 1. For the purpose of this first remark, by “computations” we mean elementary processing operations which achieve some given degree of transformation of a body of information, such as evolving it from one state to an orthogonal state. It is certainly not self-evident that a quantum computer does exponentially more computations than a classical computer of similar size calculating for a similar time, since there are not exponentially more computational results available. This follows immediately from Holevo’s theorem on the capacity of a quantum channel to transmit classical information. We may deduce that whatever else may be said about a quantum computer, it does not constitute many classical information processors. (It is self-evident that it does constitute one quantum information processor). No one, to my knowledge, has seriously argued that a quantum computer does constitute many classical information processors, but informal statements implying this have been quite common (and I have not been totally innocent of them).
Furthermore, when a classical computer simulates the action of a quantum computer, it may need exponentially more time steps or physical components, but then it also yields exponentially more information about the final state. Therefore:
> Remark 1. Quantum computers cannot manipulate classical information more efficiently than classical ones, and the total information about the dynamics of a quantum system which can be obtained by classical computing cannot be obtained more efficiently by quantum computing.
In this sense, the two types of computing are equally efficient. Nevertheless, the ability of a quantum computer to be focused onto specific desired results remains highly significant and useful, for the same reason efficient classical algorithms are significant compared to inefficient ones.
Background to remark 2. Some insight into computational efficiency can be obtained by examining the difference between an efficient and an inefficient classical algorithm for the same problem. Take as an example problem that of finding an item in an ordered list (for example, a name in an alphabetically ordered list). In order to make the problem capable of being efficient both in space and time, we assume that elements of the list can be generated by some fast algorithm $`f`$. The problem is then equivalent to that of finding the root of a monotonic function $`f(x)`$, where $`x`$ is an integer between zero and $`N-1`$ (where for an $`n`$-bit problem, $`N=2^n`$). An efficient algorithm is the binary search (examine $`f(N/2)`$, and then according as it is less than or greater than zero, discard the first half or the second half of the list, and repeat). An inefficient algorithm is the exhaustive search (examine every element in turn, until the root is found). If we make a direct comparison between the binary search and the exhaustive search, forgetting for a moment our understanding of number theory, then each step of the binary search appears to accomplish an exponentially large number, of order $`N/2`$, steps of exhaustive search. For example if $`f(N/2)<0`$ then in one step of the binary search we have apparently accomplished the $`N/2`$ ‘computations’ $`f(0)<0,f(1)<0,f(2)<0,\ldots ,f(N/2)<0`$. Actually, of course, only one of these computations has been carried out: the rest follow by a process of reasoning, drawing on the definition of number and the statement that $`f(x)`$ is monotonic.
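To make the comparison concrete, here is a minimal sketch of the two classical algorithms (Python; the bit count and the planted root are illustrative assumptions of ours):

```python
def binary_search_root(f, N):
    """Efficient algorithm: about log2(N) evaluations of the monotonic f."""
    lo, hi = 0, N - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if f(mid) < 0:
            lo = mid + 1      # root lies in the upper half; discard the lower
        else:
            hi = mid          # root lies at mid or below; discard the upper
    return lo

def exhaustive_root(f, N):
    """Inefficient algorithm: examine every element in turn."""
    for x in range(N):
        if f(x) == 0:
            return x

n = 20                        # an n-bit problem, so N = 2**n
N, root = 2**n, 314159        # hypothetical planted root
f = lambda x: x - root        # a fast, monotonic 'list generator'
assert binary_search_root(f, N) == exhaustive_root(f, N) == root
```

Each call to `binary_search_root` touches only 20 elements, yet from the misleading perspective described above its first step alone appears to accomplish half a million ‘computations’.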
> Remark 2. The quantity “amount of computation” is not correctly measured by counting the number of steps which would have had to be accomplished if the computation had been done another way.
Therefore, to measure the “amount of computation” carried out in a quantum algorithm such as Shor’s, it is inappropriate to count the steps which a classical computer would have needed. A laboratory demonstration of Shor’s algorithm does not constitute proof that huge amounts of computation have taken place in a small system in a small time—unless there is a proof to that effect which we have not yet considered.
Background to remark 3. The “proof” (or rather, evidence) usually offered is the presence of processes such as
$$\underset{x=0}{\overset{2^n-1}{\sum }}|x\rangle |0\rangle \rightarrow \underset{x=0}{\overset{2^n-1}{\sum }}|x\rangle |f(x)\rangle ,$$
(1)
in a quantum algorithm such as Shor’s. However, we know that such a process does not constitute “evaluation of $`N`$ values of the function” except in a highly qualified sense, since upon examining the computer, we will only be able to learn one value of the function. To that extent the situation is comparable to the classical binary search, where, although only a single function evaluation was carried out at each step, an appearance of vast numbers of parallel evaluations arose when the algorithm was viewed from a perspective which lacked insight. Therefore, it remains open whether the mathematical notation of (1) is giving a misleading appearance or a good insight into the quantity of computation.
Let us consider examples of notations which give an impression of many simultaneous computations, but where we can prove this to be a false impression. I will propose first an artificial classical example, and then a more powerful quantum one. Suppose we have a collection of $`n`$ compass needles. Each needle can indicate north, south, east or west, or any other direction. The direction is a two-component vector which we write with the notation $`[\psi ]`$. We will use our needles as a simple classical computing device, in which pointing north represents zero and pointing east represents one. The vector for north may therefore be conveniently written $`[0]`$, and the vector east can be written $`[1]`$. A state of $`n`$ needles such as $`[0][1][0][1]`$ is written $`[0101]`$. Suppose we begin with all needles pointing north, and then rotate each needle to point northeast. This requires $`n`$ elementary operations, and performs the process
$$[00\cdots 0]\rightarrow \frac{1}{2^{n/2}}\underset{x=0}{\overset{2^n-1}{\sum }}[x].$$
(2)
Our computer now “stores all the values of $`x`$ from $`x=0`$ to $`x=2^n-1`$ simultaneously”. It has also just “performed $`2^n`$ evaluations of the function $`f(x)=x`$” in only $`n`$ steps! Actually, of course, this computer only “stores” all those values in a highly qualified (and in this case almost useless) sense, and only “evaluates” all those function values in a highly qualified sense, despite the appearance of (2). (More complicated functions can be evaluated “in parallel”, by many methods: for example, by operating a sequence of base-3 logic gates on the needles, where the three logic states for each gate input and output are $`[0]`$, $`2^{-1/2}([0]+[1])`$ and $`[1]`$, and then interpreting the final state of the needles as a superposition of binary numbers. Of course this is of no practical value, and only a small family of functions can be treated). This is a good illustration of the fact that the essential element in quantum (as contrasted with classical) computing is not superposition but entanglement. It is the entanglement in the right hand side of (1) which makes the quantum state computationally useful, and it is entanglement which is hard to express succinctly in mathematical notation.
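The needle example is easy to verify directly. A short sketch (Python with NumPy; the needle count is arbitrary) shows that $`n`$ elementary rotations yield a state whose expanded description has $`2^n`$ equal components, even though the state remains an unentangled product and nothing useful has been computed:

```python
import numpy as np

n = 10
needle = np.array([1.0, 1.0]) / np.sqrt(2)   # one needle rotated to northeast
state = np.array([1.0])
for _ in range(n):                           # n elementary operations...
    state = np.kron(state, needle)           # ...build the product state
assert state.shape == (2**n,)                # expansion has 2**n terms,
assert np.allclose(state, 1 / 2**(n / 2))    # all equal: no entanglement
```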
A more powerful example is given by the Gottesman-Knill theorem (I take the statement from ):
> Gottesman-Knill theorem: Any quantum computer performing only: a) Clifford group gates, b) measurements of Pauli group operators, and c) Clifford group operations conditioned on classical bits, which may be the results of earlier measurements, can be perfectly simulated in polynomial time on a probabilistic classical computer.
When we recall that the Clifford group contains both Hadamard rotations and controlled-not gates, we see the strength of this statement. It means that there exist many quantum algorithms which, when written down in standard state-vector notation, have an appearance of multiple parallel computations just as strong as that of (1), and yet which can be classically simulated efficiently.
> Remark 3. In view of the fact that it is possible for mathematical notation to give a false impression of the quantity of computation represented by a given process, impressions such as the one contained in (1) do not give a reliable guide to quantity of computation.
Such an impression may merely reflect a weakness of the mathematical notation, not a profound insight into what is going on.
To conclude so far, when a quantum computer is evolved through a process such as (1) it is sometimes stated that the quantum computer ‘computes’ all the function evaluations $`f(x)`$. It is then asked, how can this be so, when a classical computer would need exponentially more time and/or space to compute all these things? However, this is a simple case of the same word ‘compute’ being used to mean two essentially different things, so there is no paradox. The quantum computer process (1) is being compared to the very different process
$$|0\rangle |1\rangle |2\rangle \cdots |2^n-1\rangle \rightarrow |f(0)\rangle |f(1)\rangle |f(2)\rangle \cdots |f(2^n-1)\rangle .$$
(3)
There is no reason why two such thoroughly different processes should require similar resources.
The argument so far has not produced an indisputable case, but in view of the remarks made, I would say the burden of proof lies with those who claim that quantum computation does really constitute a vast quantity of computation carried out in parallel. The remaining remarks argue directly against that claim. The aim of the discussion is not merely to say what quantum computation is not, however—I will also argue for an alternative, admittedly incomplete, view of what it is.
> Remark 4. An $`n`$-qubit quantum computer is only sensitive to decoherence to the level $`1/\text{Poly}(n)`$, not $`1/\mathrm{exp}(n)`$, in the case that different qubits have independent decoherence. If the quantum computer were really “doing $`2^n`$ computations”, and the result depended on getting a large proportion of them right, then we would expect it to be sensitive to errors at the level $`1/2^n`$, which it is not.
I feel this point is so strong that it suffices on its own to rule out the concept of “vast parallel computation”.
Background to remark 5. Quantum computing is now a field which has reached a modest degree of maturity, but there are still profound unresolved basic issues, chiefly the nature of entanglement involving more than two parties, and the general problem of constructing algorithms which take advantage of quantum physics. Almost certainly insights into each of these will contribute to understanding the other. Although most work on quantum algorithms uses the model of a quantum register with a network of logic gates, it is well known that other computing models are possible, for example cellular automata. Most such models are close cousins of the network model. Recently a new model was discovered which can be shown to reproduce the results of the network/register model, but which also can produce behaviour outside that model. This new model is the ‘cluster state’ computer, or ‘one-way computer’ discovered by Raussendorf and Briegel. The central elements are the preparation of a special entangled state of many qubits at the outset of the computation (the cluster state), followed by appropriately-chosen measurements of single qubits. No further elements are needed (in particular, no unitary ‘logic gates’, whether on one or more qubits, are needed, nor are joint measurements of two or more qubits needed). The choice of measurements at a given stage depends on the outcome of previous measurements. It can be shown that this model can be used to reproduce the action of any quantum network, with similar resources (qubits and time). However, it can also produce behaviour which has no natural interpretation in terms of networks of logic gates. For example, the number of steps (‘logical depth’) required to accomplish a desired transformation can be much smaller (e.g. a constant rather than a logarithm of the input size), and the temporal ordering of the measurements can be unrelated to the sequence of gates in a network designed to accomplish the same algorithm.
The cluster state prepared at the outset is a fixed state which does not depend on the computation to be performed. The measurements to be implemented at any time are determined from two pieces of classical information: a set of angles given by the algorithm, and the ‘information flow vector’ which is a classical bit-string of length $`2n`$ where $`n`$ is the size of the input information. This bit-string is updated depending on the outcomes of the measurements, and half of it gives the algorithm’s output when all measurements are complete.
> Remark 5. The evolution of the cluster-state computer is not readily or appropriately described as a set of exponentially many computations going on at once. It is readily described as a sequence of measurements whose outcomes exhibit correlations generated by entanglement.
In order to design an algorithm for this or any other computer, it is natural to think in terms of classical information in the first instance, simply because that is the only way we know well. For example one might start from a network model, analyzed in a computational basis, and make use of the “quantum parallelism” concept of eq. (1). This is certainly one good way to think about designing algorithms. However, the actual evolution of the cluster state computer has no ready mapping onto this analysis. The main features are instead the information flow vector, and the cluster state whose entanglement slowly disappears as more and more measurements are made on it. The information being processed must reside in these two, but the qubits play almost a passive role, in that they are prepared at the outset in a standard state, and thereafter simply measured one at a time. Rather than ‘performing computations in superposition’, the role of the quantum information is to provide a resource, namely entanglement, which permits the measurement outcomes to exhibit correlations of a different nature to those which would be possible with a set of classical bits.
Background to remark 6. I have argued that it is not true that a quantum computer accomplishes a vast number of computations all at once. A statement which, by contrast, has a clear meaning, and which I think is more useful, is that a quantum computer can compute a specific desired result, such as the period of a function, using much fewer resources than a classical computer would need. Now, when we examine how it is that some classical algorithms are more efficient than others, we find (as in the ordered search example considered above) that the efficient algorithms do not generate (either temporarily or permanently) unnecessary subsidiary results. It is natural, therefore, to ask whether quantum computers out-perform classical ones for the same reason. In view of the fact that, as I have already argued, the two types of computer are equally efficient, in terms of quantity of computations in a given time, this is probably the only available route for improved efficiency. When we examine an efficient quantum algorithm such as Shor’s, we find that it is indeed essential to the working of the algorithm that the evaluations of $`f(x)`$ in superposition do not individually have any subsequent influence on other parts of the universe. If they did, the resulting entanglement would prevent the algorithm from working. The algorithm only establishes the correlations, such as that between $`f(x)`$ and $`f(x+r)`$ where $`r`$ is the period, not the individual values themselves.
> Remark 6. Whenever one algorithm for a given problem is substantially more efficient than another, the more efficient algorithm generates much less extraneous classical information.
Both memory resources and time must be included when measuring efficiency. The value of this remark is that it applies uniformly to classical and to quantum computing, and to their comparison. It implies that we should understand a gain in computational efficiency as a given result achieved with less processing, not as a given result achieved with the same amount of processing but in parallel.
> Remark 7. The different “strands” or “paths” of a quantum computation, represented by the orthogonal states which at a given time form, in superposition, the state of the computer (expressed in some product basis) are not independent, because the whole evolution must be unitary.
This remark underlines the fact that in a quantum computer a single process is taking place, not many different ones. One practical result is that quantum computation cannot give an efficient algorithm for the unstructured search problem.
## 2 Entanglement, superposition and correlations
It is undisputed that entanglement plays an important role in quantum computing, though the elucidation of this role is an ongoing research area. By definition, an entangled state cannot be written as a product, so if we want to write it down we will have to write a sum of terms. Owing to the linearity of quantum mechanics, subsequent unitary operations cause these terms to evolve independently, and the attraction of the picture of multiple parallel computations comes from this. However, this feature is no different from what is observed in the Fourier analysis of a classical linear electronic circuit. Each Fourier component of a classical signal will there behave independently of the others, but it does not give any useful insight to talk of the different Fourier components as occupying ‘parallel universes’.
The Fourier example (and others that could be given) emphasizes that superposition is not in itself the essential ingredient in quantum computation. Entanglement is, on the other hand, the essential difference between the states on the right hand side of equation (1) and of equations (2) and (3), and every known efficiency separation between quantum and classical computation involves the exploitation of entanglement for computational purposes.
I will now put forward an interpretational view of quantum computing which is in accord with the seven remarks above, and with what is known about entanglement.
> Interpretational view. A quantum computer can be more efficient than a classical one at generating some specific computational results, because quantum entanglement offers a way to generate and manipulate a physical representation of the correlations between logical entities, without the need to completely represent the logical entities themselves.
The ‘logical entities’ will typically be integers. Thus, for example, in a set of qubits described by equation (1), the correlation between $`f(x)`$ and $`x`$ is fully represented, but the values of $`f(x)`$ are not. For, a measurement of the qubits in the computational basis will with certainty give a pair of results such that if one is $`x`$, the other is $`f(x)`$, for any $`x`$ in the superposition, but it will only with low probability give any particular $`x,f(x)`$. Furthermore, if the qubits are to be used in Shor’s period-finding algorithm, then the period of the function which the algorithm extracts is a property of the correlation between values of $`f(x)`$, not of any particular value, and when the algorithm finishes this correlation information is available, but no physical record remains of any value of $`x`$ for a given $`f(x)`$. This is not an insignificant side-effect, because the absence of a record of any $`x`$ arises from an interferometric cancellation which is essential to the success of the algorithm.
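This can be checked by brute-force simulation of measurements on the state (1) for a small register (a sketch; the toy function and register size are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
f = lambda x: (x + 2) % 2**n               # a hypothetical reversible toy function
# Amplitudes of eq. (1): uniform superposition of |x>|f(x)>.
amp = np.zeros((2**n, 2**n))
amp[np.arange(2**n), [f(x) for x in range(2**n)]] = 1 / 2**(n / 2)
# Sample computational-basis measurements of both registers.
p = amp.ravel()**2
for shot in rng.choice((2**n)**2, size=5, p=p):
    x, y = divmod(int(shot), 2**n)
    assert y == f(x)                       # the correlation is certain...
    print(x, y)                            # ...but the particular x is random
```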
Note also that the interferometric cancellation is only possible if the terms in the sum are parts of a single entity, i.e. the single, coherent, state of a system isolated in such a way that it does not leave ‘which path’ information through entanglement with other systems. In common with remark 7 above, this emphasizes that the terms in the superposition do not each have a separate existence, and therefore should not be described as if they did.
The EPR experiment, in the form as analyzed by Bell, emphasizes that entanglement leads to a degree of correlation beyond that which can be explained in terms of local hidden variables. In order that these correlations are consistent with special relativity (i.e. that they cannot be used for faster-than-light signaling) it is necessary that they appear ‘hidden’ in two sets of measurement results which are random when either set is examined without the other. This combination of correlation and randomness is a further example of what I mean by a physical state which can represent correlation without representing information about the correlated entities (except in so far as this is logically necessary to represent information about their correlation).
To conclude, the basic fact which quantum computers take advantage of, is that multi-partite entanglement offers a way to produce some computational results without the need to calculate a lot of ‘spectator’ results. For example, we can find the period of a function without calculating all the evaluations of the function; we can find a specific property of a quantum system (such as an energy level) without also finding the complete wavefunction; we can communicate some shared aspect of distributed information without transmitting as much of the information as we would otherwise need to.
The impression of vast parallel computation in (1) is a false impression engendered by an imperfect mathematical notation. It might be argued that the mathematical notation is the only one we have, and that it carries a lot of insight into what is going on in the algorithm. The latter is true, but since we know for a fact the idea of ‘vast computation’ could only be true in a highly qualified sense here, and since there is other evidence to suggest vast computations are not in fact going on, therefore this impression is merely an artifact of the notation. It is noteworthy that the very fact that we can write the state using a summation symbol, rather than writing out all the components laboriously, indicates that the algorithmic information content of the state is small.
Entanglement does mean the process is of a subtle type not available to any classical system. Therefore the computation process, though not exponentially large, is unavailable to classical computers.
The answer to the question ‘where does a quantum computer manage to perform its amazing computations?’ is, we conclude, ‘in the region of spacetime occupied by the quantum computer’. Nonetheless, the quantum computer’s evolution is a subtle and powerful process, and one might want to convey this fact by invoking the image of an ‘exploration of parallel universes’. However, since the concept of ‘parallel universes’ implies a computational power which is not in fact present in quantum computation, I feel such an image obscures more than it illuminates.
The right way to describe the efficiency of quantum computation is, I have argued, that entanglement provides a way to represent and manipulate correlations directly, rather than indirectly through a manipulation of the correlated entities.
Finally, if the state vector notation of (1) is imperfect, then can we think of a notation giving further insight? A more insightful perspective in many areas of physics is that of operators rather than states. For example, take the Heisenberg picture of quantum mechanics, the creation/destruction operator description of quantum optics, and the stabilizer description of quantum error correction. We have noted that quantum algorithms which cannot be efficiently simulated classically exploit entanglement. A notation which focused on this distinction, i.e. which treated operations on entanglement measures rather than state vectors, may give a useful insight.
I acknowledge helpful correspondence with David Deutsch, Christof Zalka, and Michael P. Frank. This work was supported by EPSRC and by the Research Training and Development and Human Potential Programs of the European Union.
gr-qc/0003043

SU–GP–00/02–1

# Indications of causal set cosmology <sup>*</sup><sup>*</sup>To appear in Int. J. Theor. Phys. as part of the proceedings of the Peyresq IV workshop on Quantum and Stochastic Gravity, String Cosmology and Inflation held June 28-July 3, 1999 in Peyresq, France.
Rafael D. Sorkin
Department of Physics, Syracuse University, Syracuse, NY 13244-1130, U.S.A.
internet address: sorkin@physics.syr.edu
Abstract
Within the context of a recently proposed family of stochastic dynamical laws for causal sets, one can ask whether the universe might have emerged from the quantum-gravity era with a large enough size and with sufficient homogeneity to explain its present-day large-scale structure. In general, such a scenario would be expected to require the introduction of very large or very small fundamental parameters into the theory. However, there are indications that such “fine tuning” is not necessary, and a large homogeneous and isotropic cosmos can emerge naturally, thanks to the action of a kind of renormalization group associated with cosmic cycles of expansion and re-contraction.
Until as recently as a year ago, it could have been said that we had no proven method by which to arrive at a dynamical law for causal sets. That is, the theory remained essentially in a kinematical stage, aside from some considerations of a very general nature about how a sum-over-histories might be formulated for causal sets. What has changed the situation is the discovery of a family of dynamical laws in which the “time-evolution” of the causal set appears as a process of stochastic growth. At a technical level, such a dynamics may be defined in terms of a Markov process with a time-varying state-space — a process that might be described as the law of motion of a “stochastic spacetime”. It turns out that relatively little freedom remains, once one postulates a dynamics of this kind: the picture of sequential growth leads almost uniquely to the dynamical family of Rideout and Sorkin, provided that one agrees to honor the discrete analogs of general covariance and (classical) causality. I will not try to summarize these developments in any detail here, or even to introduce the causal set idea itself. For that, the reader is referred to the references listed at the end. Rather, I wish to consider briefly the possible implications of some of these developments for cosmology.
It is true that the “sequential growth dynamics” found in are classical (non-quantum), and it is true also that one does not know at present whether any of them leads to something like the Einstein equations, or even to anything resembling a spacetime at all. On the other hand, directions in which one might seek their quantum generalization are not hard to discern, and — still at the classical level — there is available at least one plausible guess at a choice of growth parameters which might reproduce something like classical spacetime. In these circumstances, and given also the accumulation of mathematical knowledge concerning at least one special case of these dynamics, it does not seem out of place to look for indications of how the theory taking shape might offer its own solutions to some of the recognized puzzles of cosmology. Specifically, I am thinking of the unexplained “large numbers” in cosmology related to the large size of the universe and its high degree of homogeneity and isotropy. (Lurking behind these issues is the question of why the cosmological constant $`\mathrm{\Lambda }`$ is so small. Causal sets so far have provided at best vague hints of why this should be so, but they have led to a prediction of fluctuations about $`\mathrm{\Lambda }=0`$, and indeed, fluctuations of a time-dependent magnitude whose predicted value for the current universe is just that which seems to be indicated by the most recent observations.)
If we suppose that the cosmic microwave radiation we see today is descended directly from radiation which was present at the conclusion of the quantum-gravity era, <sup>*</sup><sup>*</sup>This assumption is denied in “inflationary” scenarios according to which all matter visible today was created much later, in a process of “reheating”. then we can straightforwardly evolve present conditions back to describe the universe (as much as we can see of it) as it was just after the “Planck time”, by which I mean the time when the Hubble parameter $`H=\dot{a}/a`$ was near $`1`$ in natural units. One finds (using the $`1/a^4`$ dependence of the energy density of radiation, and barring any conspiracies involving a time-varying cosmological constant) that the temperature at that epoch was also near to unity but the radius of curvature was some 28 orders of magnitude or more above the Planckian value. This “large number” (which corresponds to the large ratio of the present-day Hubble radius $`1/H`$ to the present-day wavelength of the microwave background) is one for which current theory has no convincing explanation.
Only two ways of obtaining such a large number have seemed appealing: either derive it from some other large number of the underlying theory (which then has to be explained in its turn) <sup>*</sup><sup>*</sup>for example, the ratio of the Planck mass to the Higgs mass or relate it to some conjunctural (i.e. historical) number of cosmology whose large size is not in need of explanation, such as the age of the universe or the number of cycles of contraction and re-expansion it has undergone to date. This second way of proceeding is the one to which some of the recent causal set results lend themselves.
To understand why, one must know that, despite being representable formally as a Markov process, a sequential growth dynamics exhibits a long memory, such that the present effective laws of motion are influenced by past behavior. (Indeed the process is formally Markovian only because one includes the entire past in the stochastically evolving “state”.) The passage of time, according to this dynamics, consists in a sequence of “births” of new elements of the causal set, each of which comes into being with a definite set of pre-existing “ancestor elements”. The dynamical law is specified by giving the relative probability of each possible choice of ancestor-set (called “the precursor”), and this, in turn, turns out to be given by a relatively simple expression depending only on the total size $`\varpi `$ of the precursor and the size $`m`$ of its maximal layer, <sup>\**</sup><sup>\**</sup>In other language, $`\varpi `$ is the number of all ancestors and $`m`$ is the number of “immediate ancestors” or “parents”. namely
$$\lambda (\varpi ,m)=\underset{k}{\sum }\left(\genfrac{}{}{0pt}{}{\varpi -m}{k-m}\right)t_k,$$
$`(1)`$
where $`t_0`$, $`t_1`$, $`t_2,\ldots `$ is a sequence of non-negative “coupling constants” that completely characterizes the dynamics (and where $`t_0\equiv 1`$). Notice in this formula how the behavior of the $`n^{th}`$ element is influenced not only by the “contemporaneous coupling constant” $`t_n`$, but by the entire history of $`t`$’s up to that “time”.
Now among the possible choices of the $`t_n`$, two may be singled out for special consideration. The first choice,
$$t_n=t^n$$
$`(2)`$
for some fixed $`t`$ ($`0<t<\mathrm{\infty }`$), is known as transitive percolation and describes a simplistic, time-reversal invariant dynamics in which the future of each element is independent of its past and of relatively “spacelike” regions. (See the references for a more complete definition of transitive percolation dynamics.) The second choice,
$$t_n=\frac{t^n}{n!},$$
$`(3)`$
has been suggested as a candidate which might yield spacetimes with genuine local degrees of freedom and a more realistic effective law of motion.
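As an aside, transitive percolation is simple enough to simulate directly. A minimal sketch of ours (Python; the number of elements and the link probability are illustrative):

```python
import numpy as np

def transitive_percolation(n, p, rng):
    """Sequential growth by transitive percolation: each new element links
    to every earlier element independently with probability p = t/(1+t),
    and the ancestor relation is then closed transitively."""
    past = [np.zeros(0, dtype=bool)]       # past[i][j]: j is an ancestor of i
    for i in range(1, n):
        links = rng.random(i) < p          # direct links to earlier elements
        anc = links.copy()
        for j in np.flatnonzero(links):
            anc[:j] |= past[j]             # inherit each parent's ancestors
        past.append(anc)
    return past

rng = np.random.default_rng(0)
past = transitive_percolation(200, p=0.05, rng=rng)
print("mean number of ancestors:", np.mean([a.sum() for a in past]))
```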
Let us consider transitive percolation first, since its properties are much better understood. One knows in particular that, with probability 1, the universe it describes undergoes an infinite succession of cycles of expansion, stasis and contraction punctuated by so-called posts, each of which serves as the progenitor of all the elements born in the next cycle. The region issuing from any such post is independent of what preceded it, and has for its effective dynamics that of originary percolation, which is the same as plain percolation, except that no element can be born without having the post among its ancestors. The size to which the region following a post re-expands is governed by the parameter $`t`$, or equivalently the probability $`p=t/(1+t)`$. For $`t\ll 1`$, the universe stops expanding at a “spatial volume” of not much more than $`1/t`$, whose value therefore would have to exceed (say) $`(10^{28})^3=10^{84}`$ in order to do justice to conditions at the time of the “big bang”, assuming, of course, that the dynamics of transitive percolation is at all relevant to the very early universe. <sup>*</sup><sup>*</sup>We will see in a moment why this might be the case. The number $`10^{84}`$ assumes that a spacelike hypersurface in the continuum corresponds to a maximal antichain in the causal set, meaning a maximal set of causally unrelated elements. It assumes also that the spatial volume of such a hypersurface is equal, up to a factor of order unity, to the cardinality of the corresponding antichain. The “fine tuning” or “large number” problem is then why $`t`$ should have such a small magnitude, rather than a value near unity.
It is here that the memory effects embodied in (1) enter. Let us suppose for definiteness that the true dynamics is given by $`t_n=t^n/n!`$, and let us also suppose, for the sake of argument, that an infinite number of posts will occur for this dynamics as well. What then will be the effective dynamics for the portion of the causal set following some given post? (I’ll call this portion the “current era”.) Let $`e_0`$ be the post and let it have $`N_0`$ elements to its past ($`N_0`$ ancestors). Then, by definition, an element $`x`$ born in the current era with $`\varpi `$ current ancestors (including $`e_0`$) will have in reality $`\varpi +N_0`$ ancestors in the full causal set. On the other hand, its number of parents (maximal elements of past($`x`$)) will be unaffected by the region preceding $`e_0`$, since the presence of $`e_0`$ prevents any element in that region from being an immediate ancestor of $`x`$. For the region, future($`e_0`$), we thus acquire an effective dynamics described by weights $`\widehat{\lambda }(\varpi ,m)`$ related to the fundamental weights $`\lambda (\varpi ,m)`$ by the simple equation
$$\widehat{\lambda }(\varpi ,m)=\lambda (\varpi +N_0,m).$$
$`(4)`$
Each cosmic cycle thus acts to renormalize the coupling constants for the next cycle, and the dynamics in any given cycle differs from the original or “bare” dynamics by the action of this cosmological “renormalization group”. It turns out that, when expressed as a transformation of the elementary coupling constants $`t_n`$, this action is very simple. For $`N_0=1`$ we have
$$\widehat{t}_n=t_n+t_{n+1}$$
$`(5)`$
and for $`N_0=2,3,4,\ldots `$ we just iterate this transformation $`N_0`$ times. (For defining the dynamics, only the ratios of the $`t_n`$ matter. Hence, the $`t_n`$ lie in a projective space, and (5), though it appears linear, is really a projective mapping). Equation (5) seems so simple that one could hope to analyze it fully, finding in particular all the attractors and their “basins of attraction”. Potentially such an analysis could pick out as favored dynamical laws those to which the universe tends to evolve under the action of the “cosmic renormalization group”. For now, we can note that the only fixed points of (5) are those of the percolation family, $`t_n=t^n`$. (proof: In order that ratios $`t_n:t_m`$ not be altered by (5), it is necessary and sufficient that $`\widehat{t}_n=ct_n`$ for some constant $`c`$. But this holds iff $`t_{n+1}=t_nt`$ with $`t=c-1`$.)
In his thesis, Djamel Dou has studied the action of this cosmic renormalization group on (3), as well as on some other choices of the $`t_n`$ which can be regarded as simple “deformations” of (2), like $`t_n=t^np!n!/(n+p)!`$. For the latter cases he finds that the “renormalization group flow” defined by (5) leads back to the fixed point set (2), indicating that percolation is to some degree an “attractor” in the space of all dynamics. For the former case, the story is more interesting. In the limit of large $`N_0`$, and for $`m^2\ll N_0t`$, $`\varpi \ll N_0`$, one finds that $`\widehat{\lambda }(\varpi ,m)`$ corresponds to percolation (2) with an $`N_0`$-dependent parameter $`t`$ given by
$$\widehat{t}=\sqrt{t/N_0}$$
$`(6)`$
The effective dynamics is thus once again transitive percolation, but only for a limited time,<sup>*</sup><sup>*</sup>The initial phase of effective percolation could not last forever. If it did, we could prove that another post would occur, whereafter, by (6), we’d have to have percolation with a smaller $`t`$, contradicting our original assumption. and with an effective parameter $`t`$ that diminishes from one cosmic cycle to the next.
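Relation (6) lends itself to a direct numerical check: iterating (5) $`N_0`$ times gives $`\widehat{t}_n=\sum _k\binom{N_0}{k}t_{n+k}`$, which for the choice (3) can be summed in log space. A sketch of ours (Python with SciPy; the values of $`N_0`$, $`t`$, and the summation cutoff are illustrative):

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def log_t_hat(n, N0, t=1.0, kmax=20000):
    """log of t_n after N0 iterations of eq. (5), starting from t_n = t^n/n!:
    t_hat_n = sum_k C(N0, k) t^(n+k) / (n+k)!, summed in log space."""
    k = np.arange(kmax)
    logterm = (gammaln(N0 + 1) - gammaln(k + 1) - gammaln(N0 - k + 1)
               - gammaln(n + k + 1) + (n + k) * np.log(t))
    return logsumexp(logterm)

N0 = 10**6
ratio = np.exp(log_t_hat(1, N0) - log_t_hat(0, N0))
print(ratio, np.sqrt(1.0 / N0))   # effective parameter close to sqrt(t/N0)
```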
Now, the germ of a resolution to our cosmological puzzles is contained in these results. Let us adopt the cosmology of (3) with its single free parameter taken to be a number of order unity (i.e. no “fine-tuning”), and let us assume that repeated posts occur. After each post, the ensuing cosmological cycle will begin with a stage governed by the dynamics (2) with a parameter $`t=\widehat{t}`$ which diminishes rapidly from cycle to cycle. During each such stage, the causal set will expand to a spatial volume of at least $`O(\widehat{t}^{-1})`$, a magnitude which increases rapidly from cycle to cycle. Moreover, it is not difficult to see that the earliest portion of this percolation stage (that for which $`\widehat{n}\ll \widehat{t}^{-1}`$) will be a phase of exponential tree-like growth (a tree being a poset in which every element but the first has precisely one parent.) <sup>*</sup><sup>*</sup>Computer simulations confirm this tree-like character, and also confirm the deduction that its “average branching number” is near to two (i.e. the number of children per element, averaged over all the elements is about two at any fixed stage of the growth process). At the conclusion of each tree-like phase, we will have a homogeneous <sup>\**</sup><sup>\**</sup>and also isotropic, to the extent that the causal set is sufficiently like a manifold that this term has meaning. universe with a “spatial volume” that grows larger with each successive cycle. In other words, by waiting long enough, we will automatically obtain conditions very like those needed for the “big bang” in whose aftermath we live. The “unnaturally” large size with which spacetime began in our particular phase of expansion would then reflect nothing more than the fact that a sufficiently great number of causal set elements had accumulated in previous cosmic cycles.
Before concluding, I would like to thank Chris Stephens and Alan Daughton for numerous early conversations about the cosmology of percolation dynamics. This research was partly supported by NSF grant PHY-9600620 and by a grant from the Office of Research and Computing of Syracuse University.
References
David P. Rideout and Rafael D. Sorkin, “A Classical Sequential Growth Dynamics for Causal Sets”, Phys. Rev. D61:024002 (2000), $``$e-print archive: gr-qc/9904062$``$.
L. Bombelli, J. Lee, D. Meyer and R.D. Sorkin, “Spacetime as a causal set”, Phys. Rev. Lett. 59:521-524 (1987);
R.D. Sorkin, “Spacetime and Causal Sets”, in J.C. D’Olivo, E. Nahmad-Achar, M. Rosenbaum, M.P. Ryan, L.F. Urrutia and F. Zertuche (eds.), Relativity and Gravitation: Classical and Quantum, (Proceedings of the SILARG VII Conference, held Cocoyoc, Mexico, December, 1990), pages 150-173, (World Scientific, Singapore, 1991);
David D. Reid, “Introduction to causal sets: an alternate view of spacetime structure” $``$e-print archive: gr-qc/9909075$``$.
R.D. Sorkin, “First Steps with Causal Sets”, in R. Cianci, R. de Ritis, M. Francaviglia, G. Marmo, C. Rubano, P. Scudellaro (eds.), General Relativity and Gravitational Physics, (Proceedings of the Ninth Italian Conference of the same name, held Capri, Italy, September, 1990), pp. 68-90 (World Scientific, Singapore, 1991);
R.D. Sorkin, “Spacetime and Causal Sets”, in J.C. D’Olivo, E. Nahmad-Achar, M. Rosenbaum, M.P. Ryan, L.F. Urrutia and F. Zertuche (eds.), Relativity and Gravitation: Classical and Quantum (Proceedings of the SILARG VII Conference, held Cocoyoc, Mexico, December, 1990), pages 150-173 (World Scientific, Singapore, 1991);
R.D. Sorkin, “Forks in the Road, on the Way to Quantum Gravity”, talk given at the conference entitled “Directions in General Relativity”, held at College Park, Maryland, May, 1993, Int. J. Th. Phys. 36: 2759–2781 (1997) $``$e-print archive: gr-qc/9706002$``$.
David P. Rideout and Rafael D. Sorkin, “Evidence for a continuum limit in causal set dynamics” (in preparation); see also reference .
Noga Alon, Béla Bollobás, Graham Brightwell, and Svante Janson, “Linear extensions of a random partial order”, Ann. Applied Prob. 4: 108-123 (1994).
Djamel Dou, “Causal Sets, a Possible Interpretation for the Black Hole Entropy, and Related Topics”, Ph. D. thesis (SISSA, Trieste, 1999).
Alan Daughton, Rafael D. Sorkin and C.R. Stephens, “Percolation and Causal Sets: A Toy Model of Quantum Gravity” (in preparation).
# Dynamic critical exponent of two-, three-, and four-dimensional $`XY`$ models with relaxational and resistively shunted junction dynamics
## I Introduction
Superconducting films, Josephson junction arrays, and superfluid <sup>4</sup>He are systems where topological defects play an important role close to the phase transition. This is particularly striking in two dimensions (2D) where a phase transition of the Kosterlitz-Thouless (KT) nature is driven by the unbinding of thermally created topological defects, vortex-antivortex pairs. In 3D such topological defects take the form of vortex loops and it has been argued that the physics close to the transition can be associated with these loops. The common feature in these systems is that they can be characterized by a complex order parameter. The $`XY`$ model can be viewed as a discretized version of such systems where only the phase of the complex order parameter plays a significant role. This model is believed to catch the essential features of the topological defects present in <sup>4</sup>He as well as in superconductors in the limit when the magnetic penetration length is much larger than the correlation length; high-$`T_c`$ superconductors fall into this category. All the systems which can be described by the $`XY`$ model belong to the same universality class for the thermodynamic critical properties of the phase transition.
In the present paper we have the connection between the $`XY`$ model and superfluid and superconducting systems in mind. However, the $`XY`$ model per se can equally well be viewed as a simple model of a ferromagnet where the phase angle corresponds to the direction of a 2D spin vector associated with each lattice site.
Our interest in the present paper is the dynamical properties associated with topological defects, which may of course depend on the explicit choice of the dynamics imposed on the model. We here investigate two types of dynamics: One is a simple relaxational dynamics (RD) and the other is the resistively shunted junction dynamics (RSJD). We calculate the dynamic critical exponent $`z`$ using various scaling relations both associated with equilibrium and with the approach to equilibrium when starting from a nonequilibrium configuration. Our main conclusion is that the dynamic critical exponents associated with the topological defects are the same for these two types of dynamics, RD and RSJD. However, this conclusion does depend on the precise treatment of the boundary. We demonstrate that various values of $`z`$ can be obtained by changing the treatment of the boundary, as well as by changing from scaling in equilibrium to scaling for the approach to equilibrium.
This paper is organized as follows: In Sec. II we briefly introduce the $`XY`$ model and explain how the dynamic equations are defined in RSJD and RD taking boundary conditions into account. Section III describes the various scaling relations used to obtain $`z`$. The results from our simulations are given in Sec. IV for spatial dimensions $`d=2`$, 3, and 4, whereas Sec. V contains discussions of the results. Finally Sec. VI gives a short summary of the main conclusions.
## II $`XY`$ Model and Dynamics
### A $`XY`$ model
The $`d`$-dimensional $`XY`$ Hamiltonian on a hypercubic lattice of the size $`\mathrm{\Omega }L^d`$ is defined by
$$H[\theta _𝐫]=-J\underset{\langle \mathrm{𝐫𝐫}^{}\rangle }{\sum }\mathrm{cos}(\varphi _{\mathrm{𝐫𝐫}^{}}\equiv \theta _𝐫-\theta _𝐫^{}),$$
(1)
where the summation is over nearest neighboring pairs, $`\theta _𝐫`$ is the phase of the complex order parameter at position $`𝐫`$, and $`J`$ is the coupling strength. The $`XY`$ Hamiltonian is appropriate not only to describe the overdamped Josephson junctions arrays without charging energy, but can also be viewed as a discretized form of the Ginzburg-Landau (GL) free energy
$$F_{GL}[\psi (𝐫)]=\int d𝐫\left(\alpha |\psi (𝐫)|^2+\frac{\beta }{2}|\psi (𝐫)|^4+\frac{1}{2}|\mathrm{\nabla }\psi (𝐫)|^2\right),$$
(2)
where the amplitude fluctuations of the complex order parameter $`\psi (𝐫)`$ are neglected: $`\psi (𝐫)=\psi _0e^{i\theta (𝐫)}`$ with $`\psi _0`$ fixed to a constant. When mapping the GL free energy functional onto the $`XY`$ Hamiltonian the coupling strength $`J`$ is found to be proportional to $`|\psi _0|^2`$.
The thermodynamic properties of the $`XY`$ model have been intensely studied for many years and it is well known that the important length scale in the critical region, the correlation length $`\xi `$, diverges at the critical temperature $`T_c`$. In 3D and 4D the divergence is of the standard form of the continuous second-order transition, i.e., $`\xi (T)\sim |T-T_c|^{-\nu }`$, whereas in 2D $`\mathrm{ln}\xi (T)\sim (T-T_c)^{-1/2}`$ as $`T_c`$ is approached from above and $`\xi =\mathrm{\infty }`$ in the whole low-temperature phase where quasi-long-range order exists in the absence of true long-range order. From the point of view of the finite-size scaling, this feature of the 2D KT transition turns the finite system size $`L`$ into the relevant length scale in the low-temperature phase.
### B Boundary condition
Experiments on superconductors and <sup>4</sup>He are usually done on samples with open boundaries. From this perspective it is preferable to use boundary conditions that reflect this experimental situation also in the simulations. However, simulations of the $`XY`$ model can usually only be well converged on relatively small lattice sizes, and since the surface to volume ratio is inversely proportional to the linear system size $`L`$, the open boundary gives rise to large surface effects, which decay very slowly as the system size is increased. The standard way of reducing these unwanted surface effects is to impose the periodic boundary condition (PBC): $`\theta _{𝐫+L\widehat{\mu }}=\theta _𝐫`$, where $`\widehat{\mu }`$ denotes the basis vectors of the lattice, e.g., $`\widehat{\mu }=\widehat{x},\widehat{y},\widehat{z}`$ in 3D. One drawback of this boundary condition is that it restricts the twist from $`𝐫`$ to $`𝐫+L\widehat{\mu }`$, defined as the sum of the phase differences along a direct path connecting the two positions, to an integer multiple of $`2\pi `$. On the other hand, this twist from one boundary to the opposite for an open system can have any value. It is thus preferable to relax the PBC so as to allow for a continuous twist by changing the boundary condition to a more generalized form: $`\theta _{𝐫+L\widehat{\mu }}=\theta _𝐫+L\mathrm{\Delta }_\mu `$, which has been used in various contexts. In particular the boundary condition where the twist variable $`\mathrm{\Delta }_\mu `$ is not fixed to a constant but allowed to fluctuate has been termed the fluctuating twist boundary condition (FTBC), which was originally introduced for static Monte Carlo (MC) simulations and then extended to Langevin-type dynamics at finite temperatures. Since the FTBC allows for any value of the twist, it is closer to the open boundary condition for a real system. Of course one does not expect the treatment of the boundary to affect the results in the thermodynamic limit. However, as we will show and discuss here, the dynamics at criticality can depend on the boundary condition, insofar as the dynamic critical exponent can be defined in terms of the finite-size scaling. It is worth mentioning that a similar observation, i.e., that an important exponent may depend on boundary conditions, has been made recently in the study of the stiffness exponent of vortex-glass models.
### C Dynamic models
Next we introduce two simple dynamic models widely used to describe the behavior of superfluids, superconducting films, regular Josephson junction arrays, and bulk high-$`T_c`$ superconductors close to the transition temperature.
#### 1 Resistively shunted junction dynamics
A $`d`$-dimensional hypercubic array of size $`\mathrm{\Omega }=L^d`$ ($`L=`$ linear size) of superconducting grains weakly coupled by resistively shunted Josephson junctions is effectively described by the $`XY`$ Hamiltonian (1) when it comes to the static properties. On the other hand, dynamic equations of motion for the corresponding overdamped RSJ model are generated from local conservation of the current on each grain. The total current $`I_{\mathrm{𝐫𝐫}^{}}`$ between neighboring grains ($`𝐫,𝐫^{}`$) is the sum of the supercurrent, the normal resistive current, and the thermal noise current: $`I_{\mathrm{𝐫𝐫}^{}}=I_{\mathrm{𝐫𝐫}^{}}^s+I_{\mathrm{𝐫𝐫}^{}}^n+I_{\mathrm{𝐫𝐫}^{}}^t`$. The supercurrent is given by the Josephson current-phase relation $`I_{\mathrm{𝐫𝐫}^{}}^s=I_c\mathrm{sin}(\varphi _{\mathrm{𝐫𝐫}^{}})`$, where $`I_c=2eJ/\mathrm{\hbar }`$ is the critical current for a single junction. The normal resistive current $`I_{\mathrm{𝐫𝐫}^{}}^n=V_{\mathrm{𝐫𝐫}^{}}/R_0`$, where the voltage difference $`V_{\mathrm{𝐫𝐫}^{}}`$ is related to the phase difference by $`V_{\mathrm{𝐫𝐫}^{}}=(\mathrm{\hbar }/2e)\dot{\varphi }_{\mathrm{𝐫𝐫}^{}}`$ and $`R_0`$ is the shunt resistance. Finally the thermal noise currents $`I_{\mathrm{𝐫𝐫}^{}}^t`$ in the shunts satisfy $`\langle I_{\mathrm{𝐫𝐫}^{}}^t\rangle =0`$ and
$$\langle I_{𝐫_1𝐫_2}^t(t)I_{𝐫_3𝐫_4}^t(0)\rangle =\frac{2k_BT}{R_0}\delta (t)\left(\delta _{𝐫_1𝐫_3}\delta _{𝐫_2𝐫_4}-\delta _{𝐫_1𝐫_4}\delta _{𝐫_2𝐫_3}\right),$$
(3)
where $`\langle \cdots \rangle `$ denotes the thermal average, and $`\delta (t)`$ and $`\delta _{\mathrm{𝐫𝐫}^{}}`$ are the Dirac and Kronecker deltas, respectively. From local current conservation we obtain
$$\sum _{\widehat{n}}I_{\mathrm{𝐫𝐫}+\widehat{n}}=I_𝐫^{\mathrm{ext}},$$
(4)
where the $`\widehat{n}`$ summation is over $`2d`$ nearest neighbors of site $`𝐫`$ on a hypercubic lattice in $`d`$ dimensions ($`\widehat{n}=\pm \widehat{\mu }`$), e.g., $`\widehat{n}=\pm \widehat{x},\pm \widehat{y},\pm \widehat{z}`$ in 3D, and $`I_𝐫^{\mathrm{ext}}`$ is an external current source at $`𝐫`$ (in the present work, we only consider the case without external driving: $`I_𝐫^{\mathrm{ext}}=0`$). Introducing the lattice Green’s function $`U_{\mathrm{𝐫𝐫}^{}}`$, which is the inverse of the discrete Laplacian, the RSJD equations of motion in the absence of external currents can be written in dimensionless form as
$$\frac{d\theta _𝐫}{dt}=-\sum _{𝐫^{}}\overline{U}_{\mathrm{𝐫𝐫}^{}}\sum _{\widehat{n}}\mathrm{sin}(\theta _{𝐫^{}}-\theta _{𝐫^{}+\widehat{n}})+\zeta _𝐫,$$
(5)
where $`\overline{U}_{\mathrm{𝐫𝐫}^{}}\equiv U_{\mathrm{𝐫𝐫}^{}}-U_{\mathrm{𝐫𝐫}}`$ and from here on we normalize the time, the current, the distance, the energy, and the temperature in units of $`\hbar /2eR_0I_c`$, $`I_c`$, the lattice spacing $`a`$, $`J`$, and $`J/k_B`$, respectively. The on-site noise term $`\zeta _𝐫(t)\equiv -\sum _{𝐫^{}}\overline{U}_{\mathrm{𝐫𝐫}^{}}\sum _{\widehat{n}}I_{𝐫^{}𝐫^{}+\widehat{n}}^t(t)`$ is spatially correlated, which is a consequence of the local current conservation, and satisfies $`\langle \zeta _𝐫(t)\zeta _𝐫^{}(0)\rangle =2T\overline{U}_{\mathrm{𝐫𝐫}^{}}\delta (t)`$. The RSJD equations (5) can be rewritten in a Langevin-type form
$$\frac{d\theta _𝐫}{dt}=-\sum _{𝐫^{}}\overline{U}_{\mathrm{𝐫𝐫}^{}}\frac{\delta H[\theta _𝐫]}{\delta \theta _{𝐫^{}}(t)}+\zeta _𝐫,$$
(6)
with the $`XY`$ Hamiltonian $`H`$ in Eq. (1) \[compare with Eq. (12) for relaxational dynamics\].
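As an illustration of how the nonlocal coupling through $`\overline{U}_{\mathrm{𝐫𝐫}^{}}`$ can be handled in practice, the following numpy sketch (our own illustration, not taken from the original code; the function name and array layout are hypothetical) evaluates the phase velocities implied by local current conservation, Eq. (4), i.e., the deterministic part of Eq. (5), by applying the inverse discrete Laplacian in Fourier space. The spatially correlated noise term is omitted for brevity.

```python
import numpy as np

def rsjd_drift(theta):
    """Deterministic part of the RSJD equations of motion, Eq. (5), on a
    hypercubic L^d lattice with the PBC.  Solves the current-conservation
    condition Eq. (4) for d(theta)/dt via FFT; noise omitted."""
    L, dim = theta.shape[0], theta.ndim
    # divergence of the supercurrents: sum_n sin(theta_r - theta_{r+n})
    div_s = np.zeros_like(theta)
    for ax in range(dim):
        bond = np.sin(theta - np.roll(theta, -1, axis=ax))
        div_s += bond - np.roll(bond, 1, axis=ax)
    # Fourier symbol of the (positive) discrete Laplacian
    k = 2.0 * np.pi * np.fft.fftfreq(L)
    grids = np.meshgrid(*(dim * [k]), indexing="ij")
    lam = sum(2.0 - 2.0 * np.cos(g) for g in grids)
    lam.flat[0] = 1.0                  # avoid dividing the zero mode
    out = np.fft.fftn(div_s) / lam
    out.flat[0] = 0.0                  # the zero mode carries no current
    return -np.real(np.fft.ifftn(out))
```

Inverting the Laplacian in Fourier space is what reduces the cost per time step to $`O(L^d\mathrm{log}_2L)`$, as quoted at the end of this section.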
We now introduce the FTBC for the RSJD. The global twist $`L\mathrm{\Delta }_\mu `$ in the $`\widehat{\mu }`$ direction across the whole system (see Sec. II B) is introduced through the local transformation $`\theta _𝐫\to \theta _𝐫+𝐫\cdot 𝚫`$, still keeping $`\theta _𝐫=\theta _{𝐫+L\widehat{\mu }}`$ as the periodic part of the phases. The Hamiltonian in terms of these variables is
$$H[\theta _𝐫,𝚫]=-\sum _{𝐫\widehat{\mu }}\mathrm{cos}(\theta _𝐫-\theta _{𝐫+\widehat{\mu }}-\widehat{\mu }\cdot 𝚫),$$
(7)
where the $`\widehat{\mu }`$ summation is over the $`d`$ nearest neighbors of $`𝐫`$ in the positive directions (e.g., $`\widehat{\mu }=\widehat{x},\widehat{y},\widehat{z}`$ in 3D). It is straightforward to show that the equations of motion for the phase variables $`\theta _𝐫`$ are given by Eq. (6) with $`H`$ replaced by the Hamiltonian of Eq. (7):
$$\frac{d\theta _𝐫}{dt}=-\sum _{𝐫^{}}\overline{U}_{\mathrm{𝐫𝐫}^{}}\frac{\delta H[\theta _𝐫,𝚫]}{\delta \theta _{𝐫^{}}(t)}+\zeta _𝐫.$$
(8)
In order to get a closed set of equations we further have to specify the dynamics of the twist variables $`\mathrm{\Delta }_\mu `$; the total twist $`L\mathrm{\Delta }_\mu `$ is simply the average phase difference between opposite faces of the $`d`$-dimensional hypercube. In the absence of external currents, the physical boundary condition, corresponding to an open boundary in real systems, should satisfy the requirement that no current crosses the boundary, which leads to
$$\frac{d\mathrm{\Delta }_\mu }{dt}=\mathrm{\Gamma }_\mathrm{\Delta }\sum _𝐫\mathrm{sin}(\theta _𝐫-\theta _{𝐫+\widehat{\mu }}-\mathrm{\Delta }_\mu )+\zeta _\mu ^\mathrm{\Delta }$$
(9)
or, equivalently,
$$\frac{d\mathrm{\Delta }_\mu }{dt}=-\mathrm{\Gamma }_\mathrm{\Delta }\frac{\delta H[\theta _𝐫,𝚫]}{\delta \mathrm{\Delta }_\mu }+\zeta _\mu ^\mathrm{\Delta },$$
(10)
where $`\mathrm{\Gamma }_\mathrm{\Delta }=1/L^d`$. As shown in Ref. the noise term satisfies $`\langle \zeta _\mu ^\mathrm{\Delta }(t)\rangle =\langle \zeta _\mu ^\mathrm{\Delta }(t)\zeta _𝐫(t^{})\rangle =0`$ and $`\langle \zeta _\mu ^\mathrm{\Delta }(t)\zeta _\nu ^\mathrm{\Delta }(0)\rangle =2T\mathrm{\Gamma }_\mathrm{\Delta }\delta _{\mu \nu }\delta (t)`$. We term the dynamics defined in this way \[by Eqs. (8) and (10)\] RSJD with the FTBC, whereas the RSJD with the PBC is given by Eq. (6) with $`H`$ in Eq. (1).
#### 2 Relaxational dynamics
Next we introduce the simpler phenomenological relaxational dynamics, the time-dependent Ginzburg-Landau-Langevin dynamics, which represents a nonconserved dynamics for the complex order parameter $`\psi _𝐫`$ on a discrete lattice:
$$\frac{d\psi _𝐫}{dt}=-\mathrm{\Gamma }\frac{\delta F_{GL}[\psi _𝐫]}{\delta \psi _𝐫(t)}+\zeta _𝐫,$$
(11)
where $`\mathrm{\Gamma }`$ is the diffusion constant, $`F_{GL}`$ is the discrete version of the GL free energy functional (2), and the white noise term satisfies $`\langle \zeta _𝐫(t)\rangle =0`$ and $`\langle \zeta _𝐫(t)\zeta _𝐫^{}(0)\rangle =2k_BT\mathrm{\Gamma }\delta _{\mathrm{𝐫𝐫}^{}}\delta (t)`$. The order parameter relaxes towards a configuration which locally minimizes the free energy, and the noises force the metastable states to decay. In the London limit the system can be described solely by the phase $`\theta _𝐫(t)`$ of the order parameter $`\psi _𝐫=\psi _0e^{i\theta _𝐫}`$ with $`\psi _0`$ fixed to a constant. Hence, by neglecting the amplitude fluctuations and discretizing the time-dependent Ginzburg-Landau equation of motion, we find the phase equations of motion for the RD defined by
$$\frac{d\theta _𝐫}{dt}=-\frac{\delta H[\theta _𝐫]}{\delta \theta _𝐫(t)}+\zeta _𝐫,$$
(12)
where $`H`$ is the $`XY`$ Hamiltonian (1) in units of $`J`$, the time unit is $`\hbar /\mathrm{\Gamma }J`$, and the dimensionless thermal noises satisfy $`\langle \zeta _𝐫(t)\rangle =0`$ and
$$\langle \zeta _𝐫(t)\zeta _𝐫^{}(0)\rangle =2T\delta (t)\delta _{\mathrm{𝐫𝐫}^{}},$$
(13)
with $`T`$ in units of $`J/k_B`$. From Eq. (12), the RD equations for the phases in the case of the PBC are given by
$$\frac{d\theta _𝐫}{dt}=-\sum _{\widehat{n}}\mathrm{sin}(\theta _𝐫-\theta _{𝐫+\widehat{n}})+\zeta _𝐫,$$
(14)
with periodicity on the phase variables: $`\theta _𝐫=\theta _{𝐫+L\widehat{\mu }}`$.
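For concreteness, a minimal Euler-Maruyama sketch of Eq. (14) is given below (our own illustration with hypothetical names; the production runs described in this paper use the second-order RKHG integrator instead, see below).

```python
import numpy as np

def rd_step_pbc(theta, dt, T, rng):
    """One explicit Euler-Maruyama step of the RD equations (14) with the
    PBC; theta is an L^d array of phases and rng a numpy Generator."""
    drift = np.zeros_like(theta)
    for ax in range(theta.ndim):
        bond = np.sin(theta - np.roll(theta, -1, axis=ax))
        drift -= bond - np.roll(bond, 1, axis=ax)
    # white noise with <zeta zeta> = 2T delta(t), discretized over dt
    noise = rng.normal(0.0, np.sqrt(2.0 * T * dt), size=theta.shape)
    return theta + dt * drift + noise
```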
We now proceed to the case of the FTBC for RD. In this case, in addition to the equations of motion for the phases \[Eq. (12) with $`H[\theta _𝐫]`$ replaced by $`H[\theta _𝐫,𝚫]`$ of Eq. (7)\], we need dynamic equations for the twist variables $`\mathrm{\Delta }_\mu `$. Relaxational dynamics means that these equations are of the form
$$\frac{d\mathrm{\Delta }_\mu }{dt}=-\mathrm{\Gamma }_\mathrm{\Delta }\frac{\delta H[\theta _𝐫,𝚫]}{\delta \mathrm{\Delta }_\mu }+\zeta _\mu ^\mathrm{\Delta },$$
(15)
which is identical to the form derived for RSJD \[see Eq. (10), where $`\mathrm{\Gamma }_\mathrm{\Delta }=1/L^d`$ was determined from the requirement that no current flows through the boundary\]. We here define the dynamic equations for $`\mathrm{\Delta }_\mu `$ in the RD case with the same value of $`\mathrm{\Gamma }_\mathrm{\Delta }`$, which makes the equations identical to the corresponding equations in RSJD. Within the same interpretation that $`I_{\mathrm{𝐫𝐫}+\widehat{\mu }}^n=\dot{\varphi }_{\mathrm{𝐫𝐫}+\widehat{\mu }}`$ and $`I_{\mathrm{𝐫𝐫}+\widehat{\mu }}^s=\mathrm{sin}(\varphi _{\mathrm{𝐫𝐫}+\widehat{\mu }})`$ with $`\varphi _{\mathrm{𝐫𝐫}+\widehat{\mu }}=\theta _𝐫-\theta _{𝐫+\widehat{\mu }}-\mathrm{\Delta }_\mu `$ as for RSJD (see Sec. II C 1), we are again imposing a condition consistent with the requirement that no current flows across the boundary.
In the simulations, the coupled equations of motion are discretized in time (we use the discrete time step $`\mathrm{\Delta }t=0.05`$ and 0.01 for RSJD and RD, respectively) and numerically integrated using the second-order Runge-Kutta-Helfand-Greenside (RKHG) algorithm, which is much more efficient than the first-order Euler algorithm since it significantly reduces the effective temperature shift caused by the discrete time step. In the case of RSJD we apply the efficient fast Fourier transformation method (see, for example, Ref. ), which makes the overall computing time $`O(L^d\mathrm{log}_2L)`$ in $`d`$ dimensions. \[For comparison, the RD simulation requires $`O(L^d)`$.\] The thermal noises are generated from a uniform distribution, whose width is chosen to satisfy the noise correlations given above at a given temperature.
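For additive white noise, an RKHG-type step is a stochastic predictor-corrector (Heun-type) update; a minimal sketch of one such step is shown below (our own reading of the scheme, not the published implementation — note that for RSJD the noise must in addition carry the spatial correlations $`\langle \zeta _𝐫\zeta _𝐫^{}\rangle \propto \overline{U}_{\mathrm{𝐫𝐫}^{}}`$, which is not done here).

```python
import numpy as np

def rkhg_step(theta, drift, dt, T, rng):
    """One second-order stochastic Heun (RKHG-type) step for additive
    noise; drift(theta) is the deterministic right-hand side, e.g. the
    RD or RSJD drift sketched earlier."""
    w = rng.normal(0.0, np.sqrt(2.0 * T * dt), size=theta.shape)
    d0 = drift(theta)
    pred = theta + dt * d0 + w                        # Euler predictor
    return theta + 0.5 * dt * (d0 + drift(pred)) + w  # trapezoidal corrector
```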
## III Scaling Relations
### A Scaling in equilibrium
In order to obtain the dynamic critical exponent $`z`$ from equilibrium fluctuations of the system we use two different scaling relations: One is the finite-size scaling of the time correlations of the supercurrent and the other is the finite-size scaling of the linear resistance. Fisher et al. proposed a general scaling theory of the conductivity for a homogeneous superconductor, which has been studied further explicitly by Dorsey and co-workers. The predictions from this scaling theory are very general and depend only on the dynamic scaling assumption and the existence of a diverging correlation length $`\xi \propto |T-T_c|^{-\nu }`$: From a simple dimensional analysis, it is easy to show that the order parameter scales as $`\psi \propto \xi ^{1-d/2}`$, and thus the superfluid density scales as $`\rho _s\propto |\psi |^2\propto \xi ^{2-d}`$. Below $`T_c`$ one has $`\sigma (\omega )\approx i\rho _s/\omega `$, and accordingly one deduces that the frequency-dependent linear conductivity scales as
$$\sigma (\omega )=\xi ^{2-d+z}F_\sigma \left(\omega \xi ^z\right),$$
(16)
where $`F_\sigma `$ is a universal scaling function, the dynamic critical exponent $`z`$ is introduced from $`\tau \xi ^z`$, and $`\tau `$ is the characteristic time scale. Precisely at $`T_c`$, Eq. (16) turns into the finite-size scaling form of the conductivity:
$$\sigma (\omega )=L^{2-d+z}F_\sigma \left(\omega L^z\right).$$
(17)
This scaling relation can be put to practical use in the case of the PBC because for this boundary condition $`\rho _s`$ has the required size scaling. On the other hand, it cannot be used for an open boundary condition or for the FTBC because in these cases $`\rho _s=0`$ at any $`L`$ and $`T`$. For the FTBC we will instead use the finite-size scaling of the linear resistance described below.
#### 1 Scaling of supercurrent correlations
The conductivity $`\sigma (\omega )`$ may be related to the supercurrent correlation function $`G(t)`$, which for the $`XY`$ model in $`d`$ dimensions is given by
$$G(t)=\frac{1}{L^d}\langle F(t)F(0)\rangle ,$$
(18)
where the global supercurrent $`F(t)`$ flowing in a given direction, say, $`\widehat{x}`$, is written as
$$F(t)=\sum _𝐫\mathrm{sin}(\theta _𝐫-\theta _{𝐫+\widehat{x}}).$$
(19)
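In a simulation, $`F(t)`$ and $`G(t)`$ are readily estimated from a time series of equilibrium phase configurations; a sketch of such an estimator (ours; function names are hypothetical) is:

```python
import numpy as np

def supercurrent(theta):
    """Global supercurrent F(t) of Eq. (19) in the x direction."""
    return np.sum(np.sin(theta - np.roll(theta, -1, axis=0)))

def correlation(F, L, d, tmax):
    """Time-averaged estimator of G(t) = <F(t)F(0)> / L^d, Eq. (18),
    from a long equilibrium series F[0..n-1] sampled at fixed spacing."""
    n = len(F)
    return np.array([np.mean(F[:n - t] * F[t:])
                     for t in range(tmax)]) / L**d
```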
The correlation function $`G(t)`$ is a key quantity in describing the dynamic response of vortex fluctuations and is for $`t=0`$ directly related to the static helicity modulus. The connection between $`\sigma (\omega )`$ and $`G(t)`$ in the RSJD case is expressed as
$$\sigma (\omega )=1+\frac{i\rho _s}{\omega }-\frac{1}{T}\int _0^{\infty }dt\,e^{i\omega t}G(t),$$
(20)
where the conductivity is measured in units such that the shunt resistance $`R_0=1`$, and the superfluid density $`\rho _s`$ is given by
$$\rho _s=\rho _0\left(1-\frac{1}{\rho _0T}G(t=0)\right),$$
(21)
with the bare superfluid density $`\rho _0\equiv \langle \mathrm{cos}(\theta _𝐫-\theta _{𝐫+\widehat{x}})\rangle `$. The dynamic dielectric function $`1/ϵ(\omega )`$ in 2D is also expressed as
$$\mathrm{Re}\left[\frac{1}{ϵ(\omega )}\right]=\frac{1}{ϵ(0)}+\frac{\omega }{\rho _0T}\int _0^{\infty }dt\,\mathrm{sin}\omega t\,G(t),$$
(22)
$$\mathrm{Im}\left[\frac{1}{ϵ(\omega )}\right]=\frac{\omega }{\rho _0T}\int _0^{\infty }dt\,\mathrm{cos}\omega t\,G(t),$$
(23)
where
$$\frac{1}{ϵ(0)}=1-\frac{1}{\rho _0T}G(0).$$
(24)
The helicity modulus $`\gamma `$ corresponds to the superfluid density $`\rho _s`$ and is given by $`\gamma =\rho _s=\rho _0/ϵ(0)`$. The conductivity $`\sigma (\omega )`$ in RSJD can be further simplified into the form
$$\sigma (\omega )=1-\frac{1}{i\omega }\frac{\rho _0}{ϵ(\omega )}.$$
(25)
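Given a sampled $`G(t)`$, the conductivity of Eq. (20) can be evaluated by straightforward quadrature; a sketch (ours, using the sign conventions of Eqs. (20) and (21) as reconstructed above) is:

```python
import numpy as np

def conductivity(G, dt, rho_s, T, omegas):
    """sigma(w) = 1 + i rho_s/w - (1/T) int_0^inf dt e^{i w t} G(t),
    Eq. (20); rectangle-rule quadrature, truncated at the series end."""
    t = dt * np.arange(len(G))
    return np.array([1.0 + 1j * rho_s / w
                     - dt * np.sum(np.exp(1j * w * t) * G) / T
                     for w in omegas])
```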
Expressing the scaling in terms of $`G(t)`$ leads to the scaling form
$$G(t)=\xi ^{2-d}F_G(t/\xi ^z),$$
(26)
which at $`T_c`$ for 3D turns into the finite-size scaling form (see Appendix A)
$$LG(t)=F_G(t/L^z),$$
(27)
while in 2D a logarithmic correction (see Appendix A) needs to be included
$$\mathrm{ln}\left(\frac{L}{c}\right)G(t)=F_G(t/L^z),$$
(28)
where $`F_G(x)`$ is the scaling function for $`G(t)`$. In the following we will use the scaling relations Eqs. (26) and (27) in 3D with the PBC and Eq. (28) in 2D with the PBC.
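In practice these scaling forms are tested by data collapse: the measured $`G(t)`$ curves for several $`L`$ are rescaled and $`z`$ is tuned until they fall on a single master curve. A schematic helper (ours) for the 3D form, Eq. (27):

```python
def collapse_3d(curves, z):
    """Rescale supercurrent correlations according to Eq. (27): for each
    size L, map (t, G) -> (t / L**z, L * G).  With the correct z all
    rescaled curves trace out the same master curve F_G.
    curves is a dict {L: (t_array, G_array)} of numpy arrays."""
    return {L: (t / L**z, L * G) for L, (t, G) in curves.items()}
```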
#### 2 Resistance scaling
In order to obtain a finite-size scaling at criticality for the FTBC, for which, as for any open boundary condition, $`\rho _s=0`$ at any temperature and any lattice size, we relate the resistance $`R`$ to the fluctuations of the twist over the sample. The voltage across the sample in the $`\widehat{\mu }`$ direction, $`V_\mu =L\dot{\mathrm{\Delta }}_\mu `$ (see Ref. ), and the linear resistance $`R_\mu `$ in the same direction are related to the voltage fluctuation by the fluctuation-dissipation theorem
$`R_\mu `$ $`=`$ $`{\displaystyle \frac{1}{2T}}{\displaystyle \int _{-\infty }^{\infty }}dt\langle V_\mu (t)V_\mu (0)\rangle `$ (29)
$`\approx `$ $`{\displaystyle \frac{L^2}{2T}}{\displaystyle \frac{1}{\mathrm{\Theta }}}\langle [\mathrm{\Delta }_\mu (\mathrm{\Theta })-\mathrm{\Delta }_\mu (0)]^2\rangle ,`$ (30)
where the approximation becomes exact for a sufficiently large time $`\mathrm{\Theta }`$, as shown in Appendix B (a similar approximation has been used for RSJD with an open boundary condition in Ref. ). In the present simulation we use $`\mathrm{\Theta }=2000`$ and average over all $`d`$ directions, i.e., $`R=(\sum _\mu R_\mu )/d`$.
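A sketch (ours) of this estimator, assuming a long recorded twist trajectory sampled every $`dt`$ and split into non-overlapping windows of duration $`\mathrm{\Theta }`$:

```python
import numpy as np

def linear_resistance(delta, dt, L, T, Theta):
    """Linear resistance from Eq. (30):
    R ~ (L^2 / 2T) <[Delta(Theta) - Delta(0)]^2> / Theta."""
    n = int(round(Theta / dt))                     # window length in steps
    windows = delta[: (len(delta) // n) * n].reshape(-1, n)
    disp2 = np.mean((windows[:, -1] - windows[:, 0]) ** 2)
    return L**2 * disp2 / (2.0 * T * Theta)
```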
Since $`R_\mu `$ scales as the inverse of the characteristic time scale in the critical region, the finite-size scaling takes the form
$$R=\frac{1}{L^z}F_R\left((T-T_c)L^{1/\nu }\right),$$
(31)
where $`\nu `$ is the critical exponent for the correlation length ($`\xi \propto |T-T_c|^{-\nu }`$) and $`F_R(x)`$ is the scaling function for $`R`$. Precisely at $`T_c`$, $`F_R(x)=F_R(0)`$ becomes a constant independent of $`L`$ and we get
$$R\propto L^{-z},$$
(32)
which can be used to determine $`z`$, once $`T_c`$ is known. The resistance scaling can also be turned into an intersection method for determining $`z`$ and $`T_c`$ using that
$$\frac{\mathrm{ln}(R_L/R_{L^{}})}{\mathrm{ln}(L/L^{})}=-z+\frac{\mathrm{ln}\left[F_R\left((T-T_c)L^{1/\nu }\right)/F_R\left((T-T_c)(L^{})^{1/\nu }\right)\right]}{\mathrm{ln}(L/L^{})},$$
(33)
for two different lattice sizes $`L,L^{}`$. Thus, if we plot $`-\mathrm{ln}(R_L/R_{L^{}})/\mathrm{ln}(L/L^{})`$ as a function of temperature for several pairs of sizes ($`L,L^{}`$), all curves intersect at a single unique point $`(T_c,z)`$. Once $`T_c`$ and $`z`$ are determined through the above intersection method, all data can be made to collapse onto a single scaling curve by plotting $`RL^z`$ as a function of the scaling variable $`(T-T_c)L^{1/\nu }`$ with the correct value of the exponent $`\nu `$ \[see Eq. (31)\].
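The intersection construction is easy to automate once $`R`$ has been measured on a grid of sizes and temperatures; a sketch (ours; `R_of` is a hypothetical lookup of the measured resistance) is:

```python
import numpy as np

def intersection_curves(R_of, pairs, temps):
    """Curves -ln(R_L/R_L')/ln(L/L') versus T for several size pairs;
    by Eq. (33) they all cross at the single point (T_c, z)."""
    return {(L, Lp): np.array([-np.log(R_of(L, T) / R_of(Lp, T))
                               / np.log(L / Lp) for T in temps])
            for (L, Lp) in pairs}
```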
### B Scaling of relaxation towards equilibrium: Short-time relaxation
Recently, it has been found that a universal scaling in time can also be constructed for the relaxation towards equilibrium when starting from a nonequilibrium configuration. Since such a relaxation is usually rather fast, it is often referred to as the short-time relaxation method. By this method several critical exponents have been successfully determined for the unfrustrated and the fully frustrated Josephson junction arrays as well as for the Ising model. In these studies Glauber dynamics in MC simulations has been used to obtain time series of measured quantities, such as the magnetization and Binder's cumulant. Here we apply this method to the $`XY`$ models with more realistic dynamics, RSJD and RD, both introduced in Sec. II C, in order to determine the value of the dynamic critical exponent $`z`$. For convenience we measure
$$\stackrel{~}{\psi }=\mathrm{sign}\left[\sum _𝐫\mathrm{cos}\theta _𝐫(t)\right],$$
(34)
starting from the initial condition $`\theta _𝐫(0)=0`$. Since $`\stackrel{~}{\psi }(t=0)=1`$ at any system size $`L`$, the finite-size scaling form becomes
$$\stackrel{~}{\psi }(t,T,L)=F_\psi (t/L^z,(T-T_c)L^{1/\nu }),$$
(35)
with the scaling function $`F_\psi (x,y)`$ depending on two scaling variables, satisfying $`F_\psi (0,y)=1`$ at any $`y`$. At $`T_c`$, $`z`$ is easily determined from Eq. (35) because in this case the second argument of the scaling function vanishes and $`\stackrel{~}{\psi }(t)`$ curves obtained for different sizes can be collapsed onto a single curve when plotted against the variable $`t/L^z`$. We can also determine $`T_c`$ by an intersection method similar to Eq. (33) as follows: If the first argument of the scaling function is fixed to a constant ($`t/L^z=a`$) for a given system size $`L`$ and $`z`$, then $`\stackrel{~}{\psi }`$ has only one scaling variable $`(TT_c)L^{1/\nu }`$, and can thus be written as
$$\stackrel{~}{\psi }=F_\psi (a,(T-T_c)L^{1/\nu }).$$
(36)
Accordingly, if we plot $`\stackrel{~}{\psi }`$ with fixed $`a`$ as a function of $`T`$ for various $`L`$, all curves should intersect at $`T_c`$. However, because $`a`$ depends on the value of $`z`$ which cannot be independently determined by this method we start the intersection method from the $`z`$ value determined from the scaling collapse at $`T_c`$. The values of $`T_c`$ and $`z`$ obtained from this intersection method can be refined by the iterating intersection construction. Finally, to examine the consistency we collapse the data for all temperatures and lattice sizes onto a single scaling curve in the variable $`(TT_c)L^{1/\nu }`$ at fixed $`a=t/L^z`$, which in addition is a check of the consistency against the known value of the static exponent $`\nu `$.
## IV Simulation Results
### A 2D $`XY`$ model
In two dimensions, there has been some controversy over the value of the dynamic critical exponent: A theoretical approach by Ambegaokar, Halperin, Nelson, and Siggia (AHNS) predicts $`z_{\mathrm{AHNS}}=1/2\stackrel{~}{ϵ}T^{\mathrm{CG}}`$, where the Coulomb gas (CG) temperature $`T^{\mathrm{CG}}\equiv T/2\pi \rho _0`$ and $`1/\stackrel{~}{ϵ}\equiv 1/ϵ(0)`$ (see Sec. III A 1). On the other hand, a simple scaling argument has yielded $`z_{\mathrm{scale}}=1/\stackrel{~}{ϵ}T^{\mathrm{CG}}-2`$. Also, in numerical simulations, there have been some differences: On the one hand, $`z_{\mathrm{AHNS}}`$ has been observed in Ref. from RSJ simulations, while $`z_{\mathrm{scale}}`$ has been concluded for RSJD and RD (Refs. and ), for Langevin dynamics of CG particles (Ref. ), and for the MC simulation of the lattice CG (Ref. ). Although the question is not completely resolved yet, we strongly believe that when the fluctuating twist boundary condition (see Ref. for a comparison between a conventional boundary condition and the FTBC) is used, $`z_{\mathrm{scale}}`$ is the correct result. Although the two $`z`$ values mentioned above are different below the KT transition, they give the same value of 2 at the KT transition. In Ref. , however, $`z\approx 1`$ was concluded from a simulation of RSJ dynamics with the PBC, while in Refs. and a very large value $`z\approx 5`$ has been suggested from a scaling analysis of existing experimental data and from an analytic calculation using Mori's technique, respectively.
In the low-temperature phase of the 2D $`XY`$ model, we can alternatively derive $`z_{\mathrm{scale}}`$ in the following way: The potential barrier that a bound vortex-antivortex pair must overcome in order to escape is given by
$`\mathrm{\Delta }V={\displaystyle \frac{T}{\stackrel{~}{ϵ}T^{\mathrm{CG}}}}\mathrm{ln}L,`$
and the escape rate $`\mathrm{\Gamma }\propto \mathrm{exp}(-\mathrm{\Delta }V/T)`$ for one pair is simply related to the total probability of escape, $`P`$, by
$`P=L^2n\mathrm{\Gamma },`$
where $`n`$ is the vortex pair density. The time scale $`\tau `$ of the system is inversely proportional to $`P`$ and thus is given by
$`\tau \propto {\displaystyle \frac{\mathrm{exp}(\mathrm{\Delta }V/T)}{L^2n}}\propto L^{1/\stackrel{~}{ϵ}T^{\mathrm{CG}}-2}\propto L^z`$
and we obtain the dynamic critical exponent
$`z={\displaystyle \frac{1}{\stackrel{~}{ϵ}T^{\mathrm{CG}}}}-2`$
in accordance with Ref. , where $`z`$ has been obtained from a simple scaling argument and the observed $`1/t`$ behavior of the correlation function $`G(t)`$.
In this section, we investigate the dynamic critical exponent of the 2D $`XY`$ model with RSJD and RD at, below, and above the KT transition. We use the FTBC as well as the conventional PBC, together with various methods such as the resistance scaling, the scaling of the supercurrent correlation function, and the short-time relaxation method. The results are summarized in Table I. As seen from Table I only the FTBC gives results in accordance with the expected values: $`z_{\mathrm{scale}}=z_{\mathrm{AHNS}}\approx 2`$ at $`T=0.90(\approx T_c)`$, whereas $`z_{\mathrm{scale}}\approx 3.4`$ and $`z_{\mathrm{AHNS}}\approx 2.8`$ at $`T=0.80`$. Furthermore, this is the case both for RD and RSJD. In contrast, the results for the PBC are inconsistent both with $`z_{\mathrm{scale}}`$ and $`z_{\mathrm{AHNS}}`$. From this we conclude that the FTBC is an adequate boundary condition in the context of open systems like superfluid and superconducting films. It is also interesting to note that the short-time relaxation method for RSJD with the FTBC also gives results consistent with $`z_{\mathrm{scale}}`$. The results in Table I will be further discussed in Sec. V. In the following we present the simulation results on which Table I is based.
#### 1 Critical temperature
First we fix the temperature to $`T=0.90\approx T_c`$ and focus on the dynamic critical behavior at the KT transition. The results from the resistance scaling $`R\propto L^{-z}`$ for the FTBC (see Sec. III A 2) are displayed in Fig. 1 (the data points are taken from Ref. ), where the slopes of the lines in the log-log plot correspond to $`z\approx 2`$ for both RSJD and RD. Consequently, our result $`z\approx 2.0`$ is in accordance with other existing theoretical predictions while it contradicts the recently suggested, very large values in Refs. and .
In order to determine $`z`$ at the KT transition for the PBC we use the finite-size scaling form of Eq. (28) (see Sec. III A 1) for the supercurrent correlation function $`G(t)`$,
$`\mathrm{ln}\left({\displaystyle \frac{L}{c}}\right)G(t)=F_G(t/L^z).`$
Figure 2 shows the corresponding scaling plot at $`T=0.90\approx T_c`$ both for RSJD and RD \[Figs. 2(a) and (b), respectively\]. Very good scaling collapses are obtained in both cases with $`z=1.5`$ for RSJD and $`z=2.0`$ for RD. This clearly demonstrates that the value of $`z`$ for RSJD with the PBC is different from the expected value of $`2`$ which was obtained with the FTBC. In these scaling collapses one should note that the relaxation is much faster for RD than for RSJD, as is apparent by comparing the scales on the horizontal axes \[note that the vertical axis is in a logarithmic scale in Fig. 2(a) and in a linear scale in Fig. 2(b)\]. It is also interesting to note that the value of the constant $`c`$ in the logarithmic correction of Eq. (28) comes from the static properties, as described in Appendix A, and consequently should be independent of the dynamics. In accordance with this expectation the good scalings in Fig. 2 are achieved with the same value of $`c`$: Both for RSJD in Fig. 2(a) and for RD in Fig. 2(b), we found that $`c=0.60`$ gives a good collapse.
In Fig. 3, we next show the decay of $`\stackrel{~}{\psi }`$ (see Sec. III B for details) at $`T=0.90\approx T_c`$ for RSJD with the (a) FTBC and (b) PBC, which demonstrates that $`z\approx 2.0`$ (for the FTBC) and $`z\approx 1.2`$ (for the PBC) result in good data collapses to scaling curves. However, only the FTBC leads to the expected value $`z\approx 2.0`$. One should also note that the PBC results in an extremely slow decay of $`\stackrel{~}{\psi }`$. Similarly Fig. 4 shows the decay of $`\stackrel{~}{\psi }`$ for RD at $`T=0.90`$ with the (a) FTBC and (b) PBC. In both cases good data collapses are obtained for $`z\approx 2.0`$. In this case of RD, both boundary conditions give the same magnitude of the decay time scale. A possible interpretation is discussed in Sec. V.
#### 2 Low-temperature phase
The 2D $`XY`$ model is special in that the whole low-temperature phase is “quasi” critical. This means that each temperature in the low-temperature phase is characterized by a temperature-dependent dynamic critical exponent $`z`$. Just as in the previous section, this temperature-dependent $`z`$ can be determined from the size scaling of the linear resistance, i.e., $`R\propto L^{-z}`$. Figure 5 shows the finite-size scaling of the resistance at $`T=0.8(<T_c)`$ for 2D RSJD and RD with the FTBC (all data are from Ref. ) and we find $`z\approx 3.3`$ for both types of dynamics. In Ref. , this value has been compared with $`z_{\mathrm{scale}}`$ and $`z_{\mathrm{AHNS}}`$ at this temperature and it has been concluded that the observed value $`3.3`$ is very close to $`z_{\mathrm{scale}}\approx 3.4`$.
In the same way as in the previous section the temperature-dependent $`z`$ in the low-temperature phase can also be probed by the short-time relaxation method described in Sec. III B. The divergence of the correlation length in the whole low-temperature phase in 2D turns the finite-size scaling form (35) into the simpler form
$$\stackrel{~}{\psi }=F_\psi (t/L^z),$$
(37)
with the temperature-dependent $`z`$. Figures 6 and 7 show the finite-size scaling of the short-time relaxation at $`T=0.80`$ with the (a) FTBC and (b) PBC. The value $`z\approx 3.2`$ found in Fig. 6(a) for RSJD with the FTBC is in agreement with $`z\approx 3.3`$ obtained from the resistance scaling in Fig. 5 within numerical accuracy. We interpret this as evidence that $`z`$ for the RSJD with the FTBC is indeed given by $`z_{\mathrm{scale}}`$. This is in contrast to the RSJD with the PBC, for which at the same temperature ($`T=0.80`$) $`z\approx 1.4`$ is determined from the short-time relaxation as shown in Fig. 6(b). As will be discussed in Sec. V, we interpret this as further evidence that, in the case of the 2D RSJD, $`z`$ does depend on the boundary condition. The short-time relaxation for RD gives a quite different result: $`z\approx 2`$ is obtained at $`T=0.80`$ for both the FTBC and PBC, as shown in Fig. 7. If one compares this with the results at $`T_c`$ in Fig. 4, where $`z\approx 2`$ is also obtained, the implication is that the result $`z\approx 2`$ for the short-time relaxation is expected at any temperature in the low-temperature phase both for the FTBC and PBC. As will be discussed further in Sec. V, this suggests that the short-time relaxation for RD does not probe the true equilibrium critical dynamics.
#### 3 High-temperature phase
In the high-temperature phase there is a finite screening length $`\xi `$ which diverges as $`T_c`$ is approached from above. Close to $`T_c`$ one then expects that the characteristic time scales as
$`\tau \propto \xi ^z.`$
In the case of the PBC, we can estimate $`\xi `$ and $`\tau `$ following the method in Ref. : $`\xi `$ is obtained from the wavevector dependence of the static dielectric function $`1/ϵ(0)`$ introduced in Eq. (24). The characteristic frequency $`\omega _0\sim 1/\tau `$ is determined from the frequency dependence of $`1/ϵ(\omega )`$; $`\omega _0`$ is the position of the dissipation peak in $`|\mathrm{Im1}/ϵ(\omega )|`$. The result for RSJD with the PBC is shown in Fig. 8, where $`z\approx 2`$ is found from $`\omega _0\propto \xi ^{-z}`$. It should be noted that since this result is obtained for a temperature range where $`\xi /L\ll 1`$ it is expected to be independent of the treatment of the boundary and hence applies to both the PBC and FTBC. The same method applied to RD also gives $`z\approx 2.0`$ as shown in Ref. . In Fig. 8 the dotted line with the slope corresponding to $`z=1`$ is also shown; it represents the result of Ref. , where $`z\approx 1`$ was obtained for RSJD with the PBC in the same temperature range. Consequently, Fig. 8 implies, in contrast to Ref. , that $`z=2`$ is the correct value for the PBC as well as for the FTBC, when $`z`$ is determined from $`\tau \propto \xi ^z`$.
### B 3D $`XY`$ model
Next we turn to the 3D $`XY`$ model with the current-conserving RSJD and the nonconserving RD, respectively. Both dynamic models have been used to describe the dynamic properties of high-$`T_c`$ superconductors. Whereas it is generally agreed that the static critical properties are those of the 3D $`XY`$ model in a region close to $`T_c`$, with the corresponding static critical exponents, there is less consensus on the dynamic critical properties. Several seemingly mutually inconsistent experimental and simulational results have been reported. Similarly to the 2D case described above in Sec. IV A, we will here arrive at a somewhat entangled picture by comparing values of $`z`$ obtained from the scalings in equilibrium for the two types of dynamics (RSJD and RD) with the two types of boundary conditions (the PBC and FTBC), as well as from the short-time relaxation method by observing the time evolution towards equilibrium when starting from a nonequilibrium configuration. For convenience, the results of the simulations for the 3D $`XY`$ model are summarized in Table II.
#### 1 Resistance scaling
We start with the determination of $`z`$ for the FTBC using the finite-size scaling of the linear resistance, which is calculated from the equilibrium fluctuations of the twist variable $`\mathrm{\Delta }`$ \[see Eq. (30)\]. A shorter presentation of these results has also been given in Ref. . In 3D the correlation length diverges as $`\xi \propto |T-T_c|^{-\nu }`$, making the extended scaling form of Eq. (31), as well as the intersection method in Eq. (33), applicable in addition to the relation $`R\propto L^{-z}`$ at $`T_c`$.
We first present the result for the scaling of the linear resistance for RSJD with the FTBC. By using the intersection method explained in Sec. III A 2 \[see Eq. (33)\] we determine $`T_c`$ and $`z`$ simultaneously from the unique intersection point, as shown in the inset of Fig. 9, which yields $`T_c\approx 2.200`$ and $`z\approx 1.46`$. We then display in the main part of Fig. 9 the scaling plot of the linear resistance \[see Eq. (31)\], $`RL^z`$ as a function of the scaling variable $`(T-T_c)L^{1/\nu }`$, at $`T=2.17`$, 2.19, 2.20, 2.21, and 2.23 for $`L=4`$, 8, and 16. Here we used $`z`$ and $`T_c`$ found from the intersection method, i.e., $`z=1.46`$ and $`T_c=2.200`$, respectively, and the known value of the static exponent $`\nu \approx 0.67`$. We also tried to vary the values of $`T_c`$, $`z`$, and $`\nu `$ in the scaling plot and concluded that $`z=1.46\pm 0.06`$ for the case of 3D RSJD with the FTBC. It is noteworthy that $`T_c\approx 2.200`$ from the intersection method is very close to the known value of $`T_c`$ \[$`2.2018`$ (Ref. )\] from the MC simulation.
For RD with the FTBC we only focus on the scaling relation $`R\propto L^{-z}`$ at $`T_c`$, since the resistance in the RD case turns out to be harder to converge, due to a sensitivity of the result to the discrete time step in the numerical integration of the dynamic equations even when using the second-order RKHG algorithm. In contrast, we did not observe any significant sensitivity to the time step in RSJD and we fix $`\mathrm{\Delta }t=0.05`$ throughout the present work for RSJD. In order to overcome the problem in RD due to the finite time step we obtain data for two different time steps ($`\mathrm{\Delta }t=0.05,0.01`$) and linearly extrapolate to $`\mathrm{\Delta }t=0`$, as shown in Fig. 9(b) for $`L=4`$, 6, 8, 10, 12, and 16. The slope of the line in the log-log plot of $`R`$ versus $`L`$ in Fig. 9(b) gives $`z\approx 1.5`$ also for 3D RD with the FTBC.
#### 2 Supercurrent scaling
For the PBC we use the scaling of the supercurrent correlation function $`G(t)`$ introduced in Sec. III A 1 in order to obtain $`z`$. In Fig. 10 we use the finite-size scaling form in Eq. (27) and plot $`LG`$ as a function of the scaling variable $`t/L^z`$ for (a) $`L=8`$, 16, and 24 for RSJD and (b) $`L=6`$, 8, 12, 16, and 24 for RD, respectively: Optimal data collapse is achieved for $`z=1.5`$ (RSJD) and $`z=2.05`$ (RD), respectively. In the insets of Fig. 10 we use (a) $`z=2.0`$ for RSJD and (b) $`z=1.5`$ for RD, respectively, and show that the data collapse becomes significantly worse; we consequently conclude that the $`z`$ values obtained by this data collapse method are well determined \[see the main parts of Figs. 10(a) and 10(b)\]. One notes that for RSJD $`z\approx 1.5`$ is obtained for both the FTBC and PBC, whereas for RD $`z\approx 1.5`$ and $`z\approx 2`$ are obtained for the FTBC and PBC, respectively.
In the critical region above $`T_c`$ where $`\xi \ll L`$ we instead have $`\xi G(t)`$ as a scaling function with the scaling variable $`t/\xi ^z`$ \[see Eq. (26)\]. Figure 11 shows these scaling results at temperatures above $`T_c`$ ($`T=2.25`$, 2.30, and 2.40) for $`L=24`$. By comparing with the results for $`L=32`$, it is explicitly checked that there remain no significant finite-size effects in the current temperature range. In Fig. 11 the corresponding values of $`\xi `$ are taken from high precision MC simulations. As shown in Fig. 11, the optimal value $`z=1.4(2)`$ is found for RSJD and $`z=1.9(2)`$ for RD, respectively, which is consistent with the finite-size scaling of $`G(t)`$ at $`T_c`$. However, we note that the determination of $`z`$ in the case of the finite-$`\xi `$ scaling above $`T_c`$ yields a somewhat larger uncertainty. Furthermore, $`z\approx 2`$ found for RD with the PBC is particularly intriguing since we expect that this result should be independent of the boundary condition in this high-temperature regime where $`\xi \ll L`$: Thus one expects the same value $`z\approx 2`$ for the FTBC in this high-temperature regime. This in turn suggests a discontinuous jump in the $`z`$ value from $`z\approx 2`$ to $`z\approx 1.5`$ as $`T_c`$ is approached from above, since $`z\approx 1.5`$ at $`T_c`$ was observed in the scaling of the linear resistance in Sec. IV B 1. This possibility is also discussed in Sec. V.
#### 3 Short-time relaxation scaling
The short-time relaxation method described in Sec. III B probes the relaxation towards equilibrium from a nonequilibrium configuration. We start with the presentation of the results obtained for RSJD and RD with the FTBC. Using the scaling form of $`\stackrel{~}{\psi }`$ in Eq. (35) at $`T_c`$, where the scaling function has only one scaling variable $`t/L^z`$, we first show in Fig. 12 the scaling plot of $`\stackrel{~}{\psi }`$ at $`T=2.20`$ for (a) RSJD with $`L=4`$, 8, and 16 and (b) RD with $`L=6`$, 8, and 10, respectively. All the data can be made to collapse onto a single curve in a broad range of the scaling variable for $`z\approx 1.5`$ and $`z\approx 2.0`$ for RSJD and RD, respectively. However, the above method presumes a priori knowledge of $`T_c`$. To circumvent this, one can alternatively use an intersection method with a fixed value of $`a=t/L^z`$ in the first argument of the scaling form in Eq. (36) (see Sec. III B). In the insets of Fig. 13 we display data points at $`T=2.17`$, 2.19, 2.20, 2.21, and 2.23 for (a) RSJD with $`L=4`$, 6, 8, and 16 and (b) RD with $`L=4`$, 6, 8, and 10, and show the results of the iterative intersection method. We obtain again $`z\approx 1.5`$ and $`z\approx 2.0`$, as well as the estimates of the critical temperatures $`T_c\approx 2.200`$ and $`T_c\approx 2.194`$ for (a) RSJD and (b) RD, respectively. We believe that the existence of a unique intersection point for each dynamics, with the value $`T_c\approx 2.200`$ obtained for RSJD, which is in very good agreement with $`T_c\approx 2.200`$ obtained previously from the resistance scaling, and with $`T_c\approx 2.2018`$ from MC simulations, makes this short-time relaxation method very reliable. One notes that the slight temperature shift for RD is again the effect of the finite time step, as already observed in the calculation of the linear resistance. We have also checked the dependence on the $`a`$ values and observed no significant changes in the resulting values of $`T_c`$ and $`z`$ in a broad range where $`0.4<\stackrel{~}{\psi }<0.9`$. Using $`z`$ and $`T_c`$ found from the intersection method, we confirm in Fig. 13 that the full scaling form is borne out to high precision with $`\nu =0.67`$ determined from MC simulations.
We next consider the short-time relaxation for the PBC, and show in Fig. 14 the scaling plot at $`T=2.20`$, $`\stackrel{~}{\psi }=F_\psi (t/L^z)`$, for (a) RSJD and (b) RD. Treating $`z`$ as a free parameter, we obtain $`z\approx 1.2`$ and $`z\approx 2`$ for RSJD and RD, respectively. This suggests that the value for RSJD with the PBC is lower than $`z\approx 1.5`$ obtained from the same short-time relaxation method for RSJD with the FTBC, whereas for RD a value $`z\approx 2`$ is obtained both for the FTBC and PBC. As already observed in 2D, RSJD with the PBC has a very large decay time scale in contrast to RSJD with the FTBC as well as to RD with both the PBC and FTBC.
### C 4D $`XY`$ model
For completeness we also determine $`z`$ in 4D. As a prerequisite we first estimate $`T_c`$ through the use of MC simulations in conjunction with the finite-size scaling analysis of Binder's fourth-order cumulant $`U`$, which is independent of $`L`$ precisely at $`T_c`$,
$`U(L,T)=1-{\displaystyle \frac{\langle |m|^4\rangle }{3\langle |m|^2\rangle ^2}},`$
with the order parameter $`m=\sum _𝐫e^{i\theta _𝐫}/L^4`$. The results are shown in Fig. 15 and $`T_c\approx 3.31`$ is found from the MC simulation with the PBC, which is consistent with earlier reports but has a higher accuracy. From the MC simulations we also verified that $`\nu `$ in 4D has the expected mean-field value $`\nu =1/2`$ (see, e.g., Ref. ). Since, as noted in the previous section, the size of the discrete time step in the integration of the dynamic equations of motion can lead to an effective increase of the temperature, we explicitly determine the effective $`T_c`$ for RSJD and RD with the time step $`\mathrm{\Delta }t=0.05`$ from the crossing point of $`U(L,T)`$: Figure 15 shows that there is no significant difference between the effective and nominal temperature for RSJD, leading to $`T_c(\mathrm{RSJD})\approx T_c(\mathrm{MC})\approx 3.31`$. On the other hand, for RD we obtain $`T_c\approx 3.25`$ from the crossing point at the same time step $`\mathrm{\Delta }t=0.05`$, in parallel with what was found for RD in 3D. It is to be noted that although the above critical temperatures have been obtained only with the PBC, the same critical temperatures are expected also for the FTBC since static quantities such as $`T_c`$ do not depend on the boundary condition used.
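For reference, a minimal numpy sketch (ours; the function name is hypothetical) of the cumulant estimator:

```python
import numpy as np

def binder_cumulant(thetas):
    """Binder's fourth-order cumulant U = 1 - <|m|^4> / (3 <|m|^2>^2)
    from sampled configurations, with m = sum_r e^{i theta_r} / L^d
    (np.mean over the lattice supplies the 1/L^d factor)."""
    m = np.array([np.abs(np.mean(np.exp(1j * th))) for th in thetas])
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2) ** 2)
```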
Once $`T_c`$ is known from the calculation of Binder's cumulant, we can use the simple finite-size scaling form Eq. (32) for the linear resistance calculated with the FTBC by Eq. (30) (we use $`\mathrm{\Theta }=2000`$ for both RSJD and RD). In Fig. 16(a), we plot the linear resistance $`R`$ versus $`L`$ at $`T=3.31`$ (RSJD) and $`T=3.25`$ (RD), and from the least-square fit we find $`z\approx 2.1`$ for RSJD and $`z\approx 2`$ for RD, respectively. In addition, we also measure the short-time relaxation with the FTBC and present the result for RD at $`L=6`$ and 8 in the inset of Fig. 16(a) by using the simple scaling form at $`T=T_c=3.25`$, i.e., $`\stackrel{~}{\psi }=F_\psi (t/L^z)`$, which yields $`z\approx 2.0`$ in accordance with the result from the resistance scaling. For RSJD with the FTBC, we construct the intersection plot for the short-time relaxation (similar to Fig. 13 for 3D) as displayed in the inset of Fig. 16(b), and get $`z\approx 2.0`$ and $`T_c\approx 3.31`$ from the unique crossing point. It is interesting to note that the critical temperature obtained here for RSJD with the FTBC is in perfect agreement with $`T_c`$ found from Binder's cumulant method for the other boundary condition, the PBC. We then make the full scaling plot for $`\stackrel{~}{\psi }`$ in the main part of Fig. 16(b) with the mean-field value $`\nu =0.5`$ and the estimated values $`T_c=3.31`$ and $`z=2.0`$ above, resulting in a very smooth collapse.
In short, we get $`z\approx 2`$ in 4D with the FTBC regardless of the dynamics we use (see Table III for a summary of results); this is reassuring since the value $`z=2`$ is usually expected in 4D, where the phase transition acquires a mean-field nature.
## V Discussions and Comparisons
As is clear from the simulation results presented in Sec. IV for the two-, three-, and four-dimensional $`XY`$ models, a very entangled picture emerges as regards the dynamic critical exponent $`z`$. In this section we discuss the main features.
### A Discussions of the 2D $`XY`$ model
We start our discussion with 2D (see Table I for a summary of results) and first focus on RSJD at the KT transition. For a 2D superfluid and superconductor the most widely expected value is $`z=2`$, although there have been a few different suggestions (Refs. ). The value $`z=2`$ can be inferred from the observed nonlinear current-voltage ($`I`$-$`V`$) exponent $`a=3`$ (Refs. , and ) together with the scaling argument that $`a=z+1`$. It may also be directly obtained from the simple escape-over-barrier argument presented in Sec. IV A with the result $`z=1/\stackrel{~}{ϵ}T^{CG}-2`$, combined with the universal jump condition at the KT transition, $`1/\stackrel{~}{ϵ}T^{CG}=4`$, which leads to $`z=2`$ at the transition. For the 2D $`XY`$ model, the KT transition temperature is $`T_c\approx 0.9`$ (Ref. ) and as seen from Table I, RSJD with the FTBC does give the expected value. However, RSJD with the conventional PBC does in fact not give the expected value: the supercurrent scaling gives $`z\approx 1.5`$ and the short-time relaxation gives $`z\approx 1.2`$.
In order to understand the role played by the boundary conditions we consider a system with an open boundary, which is appropriate for describing a superconducting film or a film of <sup>4</sup>He in usual experiments. In such a case, when a vortex-antivortex pair is introduced into the ground state and then annihilated across the boundary, the system relaxes back to the original ground state. The FTBC has been designed to keep, as much as possible, the advantage of the PBC, which reduces the finite-size effects compared to the free boundary condition, while still allowing this relaxation back. Such a relaxation back is, however, prohibited by the conventional PBC. One may note that the escape-over-barrier argument in Sec. IV A implicitly presumes this relaxation back as a part of the escape process. One should also note that, when comparing with experiments with open boundaries, the FTBC has to be used in simulations instead of the PBC whenever the relaxation process across the boundary is important. This perspective suggests that the observed difference between the FTBC and PBC at the KT transition for RSJD is due to the additional constraint imposed on the physics by the PBC.
This can be substantiated somewhat further by studying the low-temperature phase in 2D, where a ubiquitous “quasi” criticality with a diverging correlation length makes the critical finite-size scaling method applicable. In Ref. , $`z\approx 3.3`$ at $`T=0.80`$ was found for the FTBC from the resistance scaling, in agreement with the expected value $`z=1/\stackrel{~}{ϵ}T^{CG}-2\approx 3.4`$ within numerical errors. However, an estimate of the equilibrium scaling for the PBC at $`T=0.85`$ in Ref. gave $`z\approx 1.6`$ instead of the expected result $`z\approx 2.8`$ (see Fig. 3 in Ref. ). Thus in this low-temperature phase $`z`$ determined with the PBC appears to be smaller ($`z<2`$) than the one with the FTBC ($`z>2`$). However, the value for the FTBC is the relevant one when comparing with experiments.
The situation above $`T_c`$ is as follows: The finite linear resistance $`R`$ calculated from the fluctuations of $`𝚫`$ for the FTBC \[see Eq. (30)\] can be related to the conductivity calculated for the PBC through the connection $`R=\mathrm{Re}\left[1/\sigma (\omega =0)\right]`$ with $`\sigma (\omega )`$ in Eq. (20). We have explicitly checked this relation in our simulations at $`T=1.4`$, by comparing the two values for the FTBC and PBC, respectively, and found good agreement. From this observation, we expect that in this high-temperature phase, where the correlation length is smaller than the linear size of the system, $`R`$ and $`z`$ are independent of boundary conditions. Furthermore, in this temperature regime, transport properties like the linear resistance are dominated by free vortices (with density $`n_F`$) and accordingly we expect $`R\propto n_F\propto \xi ^{-2}`$ (Ref. ), leading to $`z=2`$ for both boundary conditions. However, from a computational point of view, the calculation of the size-converged $`R`$ for the FTBC in the high-temperature phase becomes difficult as we approach $`T_c`$ from above, due to the diverging correlation length. On the other hand, if we instead focus on the scaling of the characteristic frequency $`\omega _0`$, which is expected to be proportional to $`R`$ and can be calculated for the PBC, then we do indeed find an indication of the expected behavior, $`\omega _0\propto \xi ^{-2}`$, as seen in Fig. 8 for the PBC. For the FTBC this result is consistent with our observation $`z\approx 2.0`$ at $`T_c`$, whereas for the PBC the scaling at $`T_c`$ gives $`z\approx 1.5`$, which differs from the expectation. Why is there then a difference in the PBC between the $`z`$ values at and above $`T_c`$? The point is that the long-time relaxation above $`T_c`$ is governed by the thermally created free vortices, whose density satisfies $`n_F\propto \xi ^{-2}`$, whereas the behavior precisely at $`T_c`$, where $`n_F=0`$, is instead dominated by the bound pairs of vortices and antivortices. The conclusion is then that the constraint imposed by the PBC on the vortex-antivortex escape gives rise to this peculiar discontinuity of $`z`$ precisely at $`T_c`$. This is in contrast to the FTBC case where $`z`$ appears to be a continuous function of $`T`$.
Next we compare the results from the dynamic scaling in equilibrium and the short-time relaxation method, which probes the relaxation as the system approaches equilibrium. For RSJD with the FTBC there is no difference: the resistance scaling and the short-time relaxation method yield the same $`z`$ at and below $`T_c`$ (see Table I). However, for RSJD with the PBC the equilibrium scaling and the short-time relaxation scaling lead to different results, $`z\approx 1.5`$ and $`z\approx 1.2`$, respectively. In fact, by comparing Figs. 2(a) and 3(b) one realizes that the approach to equilibrium from the chosen nonequilibrium starting configuration is much slower than the equilibrium relaxation. Apparently the constraint imposed on the relaxation by the PBC, in combination with the nonequilibrium starting configuration, is causing the difference.
We now turn to the discussion of RD, where for the FTBC we find from the resistance scaling the same $`z`$ at and below $`T_c`$ as for RSJD (see Table I). In this context it is interesting to note that the 2D $`XY`$ model with the FTBC is dual to the lattice CG model with the PBC (see Ref. for the mapping between the two models), where the same values of the dynamic critical exponent ($`z=z_{\mathrm{scale}}=1/\stackrel{~}{ϵ}T^{\mathrm{CG}}-2`$) have been found in MC dynamics. Also, the continuum CG model with Langevin dynamics of the pure relaxational form has been found to give the same values of $`z`$. Accordingly it is tempting to conclude that the result presented in this work for the 2D $`XY`$ model with the FTBC is associated with the vortices and that it is essential to define the model so as to allow for a proper relaxation of vortex-antivortex annihilation across the boundary, which is not the case for the PBC. Furthermore, the result $`z=z_{\mathrm{scale}}`$ appears to be universal in the sense that it does not matter whether the underlying dynamics is purely relaxational (such as RD in this work, MC dynamics in Ref. , and Langevin dynamics in Ref. ), or has an additional constraint like the local current conservation in the RSJD case.
Although the short-time relaxation method applied to RD gives the expected value $`z\approx 2`$ at $`T_c`$, it fails to yield the equilibrium size scaling value below $`T_c`$. In addition, if we compare the decay behaviors at and below $`T_c`$, shown in Figs. 4 and 7, respectively, we notice that the time scale of the relaxation does not depend significantly on the temperature or on the boundary condition, in sharp contrast to RSJD. We suggest the following reason: In RD the relaxations of spin waves and vortices are effectively decoupled, and the short-time relaxation in this case only probes the spin-wave degrees of freedom, which follow the purely relaxational dynamics with the trivial exponent $`z=2`$ at any $`T`$, while the resistance scaling probes the vortex degrees of freedom. This is in contrast to RSJD with the FTBC, where both degrees of freedom are strongly coupled, leading to the same relaxation time (and accordingly the same $`z`$ value) for $`\stackrel{~}{\psi }`$ and $`R`$. It is interesting to note that $`z\approx 2`$ was also found in Ref. from the MC simulations of the 2D $`XY`$ model with the PBC at and below $`T_c`$ by using a similar short-time relaxation method.
### B Discussions of the 3D $`XY`$ model
We next turn to the 3D $`XY`$ model (see Table II for a summary of results). The discussion for the 2D case in Sec. V A regarding the boundary conditions carries over to 3D, and we expect that the FTBC has to be used whenever the relaxation process associated with the expansion and the subsequent annihilation of a vortex loop across the boundary is important because the conventional PBC prevents this relaxation.
For RSJD, $`z\approx 1.5`$ is found from the linear resistance and the short-time relaxation method for the FTBC, as well as from the scaling of the supercurrent correlation at and above $`T_c`$ for the PBC. In addition, the same value $`z\approx 1.5`$ is also found for RD with the FTBC from the finite-size scaling of the linear resistance. We note here that the MC simulations of the lattice vortex loop model in 3D (Ref. ) have also found the same value. The agreement between the $`z`$ values for the three different dynamic models (RSJD and RD with the FTBC, and MC dynamics of the vortex loop model with the PBC) was also found in 2D. The value $`z\approx 1.5`$ obtained in 3D is consistent with $`z=d/2`$ (with $`d=3`$ in 3D) for model E and model F describing the critical dynamics of superfluid systems, in the classification scheme of Hohenberg and Halperin. Consequently, it is again tempting to conclude that the result for the 3D $`XY`$ model can be associated with the vortex loops and that the critical dynamics of RSJD and RD are equivalent as long as the boundary condition allows for the proper vortex loop escape over the boundary.
As in 2D, we find that the short-time relaxation method for RSJD with the PBC (with the result $`z\approx 1.2`$) does not reflect the true equilibrium relaxation corresponding to $`z\approx 1.5`$, and we again suggest that this is due to the constraint imposed by the PBC. On the other hand, we find that the short-time relaxation method for RD with the FTBC gives $`z=2.0(1)`$ \[see Figs. 12(b) and 13(b)\]. We propose the same explanation as we did for 2D: The short-time relaxation in RD at criticality does not reflect the true long-time relaxation because the vortex loop configurations are still out of equilibrium even when $`\stackrel{~}{\psi }\approx 0`$ is reached. In this respect it is interesting to note that the values $`T_c=2.20`$ and $`\nu =0.67`$ used in the scaling collapse for $`\stackrel{~}{\psi }`$ in Fig. 13(b) with $`z=2.0`$ agree with the values expected for the 3D $`XY`$ model and that the same $`T_c=2.20`$ was used in the resistance scaling in Fig. 9(b) and yielded $`z\approx 1.5`$.
We next discuss the results for RD with the PBC. The scalings of the supercurrent correlation both at and above $`T_c`$ (corresponding to the finite-$`L`$ scaling and the finite-$`\xi `$ scaling, respectively), as well as the short-time relaxation at $`T_c`$, consistently give $`z\approx 2`$. This value corresponds to model A of relaxational dynamics in the Hohenberg-Halperin classification scheme. The most striking feature in RD is that the scalings at $`T_c`$ for the FTBC (resistance scaling) and PBC (supercurrent scaling) correspond to different values, i.e., $`z\approx 1.5`$ and $`z\approx 2.0`$, respectively.
In the high-temperature phase in 3D where $`\xi \ll L`$, one expects that $`z`$ is independent of boundary conditions. Consequently, $`z\approx 2`$ found for RD with the PBC at temperatures above $`T_c`$ implies $`z\approx 2`$ also for RD with the FTBC in the same high-temperature regime, again consistent with model A. In contrast, $`z`$ determined from the resistance scaling at $`T_c`$ instead gives $`z\approx 1.5`$ for RD with the FTBC. We propose the same explanation for this discontinuity of $`z`$ at $`T_c`$ in the RD case with the FTBC as we did in 2D: Above $`T_c`$ where $`\xi \ll L`$, the finite value of the resistivity reflects the physics of dissociated vortex loops, whereas precisely at $`T_c`$, where the resistivity vanishes as $`L`$ is increased, the physics is dominated by the large nondissociated vortex loops.
We now compare our results in 3D with earlier studies. Values consistent with $`z\approx 1.5`$ have also been found in earlier simulations: $`z=1.5(5)`$ was obtained from the $`I`$-$`V`$ characteristics of the current-driven RSJ model with an open boundary (Ref. ), and $`z=1.5(1)`$ was concluded from the scaling of the linear resistance for the MC simulations of the $`XY`$ model in the vortex representation with the PBC (Refs. and ), which corresponds to the FTBC in the phase representation as mentioned above and explained in Ref. . Finally, MC spin dynamics applied to the three-component $`XY`$ model gave $`z=1.38(5)`$ in Ref. . On the other hand, the experimental situation for high-$`T_c`$ superconductors is less clear: From several zero-field dc conductivity experiments $`z\approx 1.5`$ has been found on single YBCO-123 crystals, and a similar result $`z=1.6(1)`$ was also obtained for a Bi-2212 crystal. However, from the scaling of the magnetoconductivity of a thick YBCO-123 film $`z=1.25(5)`$ was found in Ref. , whereas a similar experiment reported $`z\approx 2`$ in Ref. . From a theoretical point of view the renormalization group methods applied to the relaxational model (model A) yield the result $`z=2+c\eta `$ with $`\eta \approx 0.02`$ and $`c\approx 0.7261`$, leading to $`z\approx 2.0`$. However, as far as we know, no corresponding calculation has been made for the 3D RSJ model. One may argue that since the 3D RSJ model is a bona fide model of a superconductor the critical dynamics should belong to the dynamic universality class of model F which describes superfluids. This gives $`z=d/2=1.5`$ for a model with the static properties given by the 3D $`XY`$ model.
### C Discussions of the 4D $`XY`$ model
In the case of the 4D $`XY`$ model both the resistance scaling at $`T_c`$ and the short-time relaxation give $`z\approx 2`$ for RSJD as well as for RD (see Table III for a summary of results). This is in perfect accordance with the Hohenberg-Halperin classification scheme, where the RSJD case should be related to models E and F with $`z=d/2=2`$ and the RD case to the model A value $`z=2`$. This in turn just reflects that 4D is the upper critical dimension.
## VI Summary
We have found that the size scaling of the resistance for the $`XY`$ model with the FTBC gives the dynamic critical exponent $`z`$ relevant for superfluid and superconducting systems with an open boundary. This is the case in two, three, and four dimensions both for relaxational and RSJ dynamics. In 2D this applies for $`T\le T_c`$, whereas in 3D and 4D the dynamics is critical only at $`T=T_c`$. However, the 3D case with relaxational dynamics has a discontinuity in the $`z`$ value since the relaxation time $`\tau `$ scales as $`\tau \propto L^{1.5}`$ at $`T_c`$, whereas it scales as $`\tau \propto \xi ^2`$ just above $`T_c`$.
The short-time relaxation method, which probes the relaxation from a nonequilibrium configuration, gives the same result, except for the 3D case with relaxational dynamics where $`z\approx 2`$ is obtained. This discrepancy shows that although the short-time relaxation method is very often reliable and efficient, it cannot always be trusted as a determination of the critical equilibrium dynamics.
The $`XY`$ model with the PBC has a different dynamical size scaling behavior than with the FTBC. In 2D the $`XY`$ model with the PBC and RSJ dynamics gives smaller values of $`z`$ ($`z<2`$) both at and below $`T_c`$. This demonstrates that the boundary condition influences the size scaling properties of the dynamics. Also in this case there is a discontinuity in $`z`$, since $`\tau \propto L^{1.5}`$ at $`T_c`$ but $`\tau \propto \xi ^2`$ just above. This is similar to the discontinuity found in 3D for relaxational dynamics. The short-time relaxation fails to give the equilibrium $`z`$ at $`T_c`$ for the 2D $`XY`$ model with the PBC and RSJ dynamics.
In 3D the $`XY`$ model with RSJ dynamics and the PBC gives the same result as for the FTBC. Thus in this case the boundary condition does not influence the size scaling. This is in contrast to the 3D $`XY`$ model with relaxational dynamics, which gives different results for the PBC and FTBC.
In 4D all determinations give the simple relaxational value $`z2`$ independent of boundary condition and dynamics.
The actual values determined are consistent with the following sequences: the $`XY`$ model with RSJ dynamics and the PBC at $`T_c`$ is consistent with $`z=1.5`$, $`1.5`$, and $`2`$ for the 2D, 3D and 4D cases, whereas the FTBC is consistent with $`z=2`$, $`1.5`$, and $`2`$. Similarly for relaxational dynamics the PBC gives the sequence $`z=2`$, $`2`$, and $`2`$ whereas the FTBC gives $`z=2`$, $`1.5`$, and $`z=2`$ for the 2D, 3D, and 4D cases, respectively.
The $`z`$ values for a superconductor can be related to the nonlinear $`I`$-$`V`$ exponent $`a`$ through the scaling relation $`a=1+z`$. The $`I`$-$`V`$ measurements correspond to an open boundary, and simulations with the FTBC are consistent with the scaling relation, as shown for the 2D case in Ref. . On the other hand, the $`z`$ values calculated with the PBC in 2D do not fulfill the relation because $`a>1+z`$ for $`T\le T_c`$. In this sense, the boundary condition has direct physical significance.
###### Acknowledgements.
The authors thank P. Olsson for useful discussions and for providing his unpublished high-precision results of Monte Carlo simulations. This work was supported by the Swedish Natural Research Council through Contract No. FU 04040-332.
## A Scaling form of the supercurrent correlation function
For supercurrent correlation scaling in 2D, the superfluid density $`\rho _s`$ is proportional to the vortex dielectric function $`1/ϵ(0)`$, which in turn is related to the conductivity by $`\sigma (\omega )\propto 1/[i\omega ϵ(\omega )]`$. Precisely at the KT transition $`1/ϵ(0)`$ has a logarithmic size scaling,
$$\frac{1}{ϵ_L(0)}-\frac{1}{ϵ_{\mathrm{\infty }}(0)}\propto \frac{1}{\mathrm{ln}(L/c)},$$
where $`c`$ is a constant. This is consistent with a functional form in terms of a scaling function with a logarithmic correction for the frequency dependence,
$$\frac{1}{ϵ(\omega )}-\frac{1}{ϵ(0)}\propto \frac{1}{\mathrm{ln}(L/c)}F_ϵ(\omega \tau ).$$
The supercurrent correlation function $`G(t)`$ is related to $`\sigma (\omega )`$ by a Fourier transform so that
$$\mathrm{Im}\left[\frac{1}{ϵ(\omega )}\right]=\int _0^{\mathrm{\infty }}dt\,\omega \mathrm{cos}(\omega t)\,G(t)=\frac{\stackrel{~}{F}_ϵ(\omega \tau )}{\mathrm{ln}(L/c)},$$
where $`\stackrel{~}{F}(x)=\mathrm{Im}[F(x)]`$. From this it follows that
$$\mathrm{ln}(L/c)\,G(t)\propto \int _{-\mathrm{\infty }}^{\mathrm{\infty }}d\omega \,\frac{1}{\omega }\stackrel{~}{F}_ϵ(\omega \tau )e^{i(\omega \tau )t/\tau }=\int _{-\mathrm{\infty }}^{\mathrm{\infty }}d(\omega \tau )\,\frac{1}{\omega \tau }\stackrel{~}{F}_ϵ(\omega \tau )e^{i(\omega \tau )t/\tau }=F_G(t/\tau ),$$
so that $`\mathrm{ln}(L/c)G(t)`$ has the scaling form $`F_G(t/\tau )`$.
In 3D we have instead $`1/ϵ_L(0)\propto \rho _s\propto 1/L`$ at $`T_c`$ and $`1/ϵ_{\mathrm{\infty }}(0)=0`$, so that this time $`1/ϵ_L(0)-1/ϵ_{\mathrm{\infty }}(0)\propto 1/L`$. This means that going through the same steps as above for the 3D case gives the scaling form $`LG(t)=F_G(t/\tau )`$.
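A scaling collapse of measured correlation functions makes these two forms easy to test numerically; a minimal helper (assuming the curves $`G_L(t)`$ are available as arrays) might look like:

```python
import numpy as np

def collapse(curves, z, d):
    """Rescale G_L(t) so curves at different L fall on one master curve.

    curves: dict mapping L -> (t, G) arrays.  With tau ~ L^z, plotting
    ln(L/c)*G against t/L^z in 2D (the constant c is set to 1 here) or
    L*G against t/L^z in 3D should collapse the data onto F_G.
    """
    prefactor = (lambda L: np.log(L)) if d == 2 else (lambda L: L)
    return {L: (t / L**z, prefactor(L) * G) for L, (t, G) in curves.items()}
```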
## B Approximation made in the Nyquist formula for linear resistance
The step from Eq. (29) to Eq. (30) is equivalent to showing that
$`{\displaystyle \int _0^\mathrm{\Theta }}dt\,\dot{\mathrm{\Delta }}(t)\dot{\mathrm{\Delta }}(0)\approx {\displaystyle \frac{1}{2\mathrm{\Theta }}}\left(\mathrm{\Delta }(\mathrm{\Theta })-\mathrm{\Delta }(0)\right)^2`$
in the limit of large $`\mathrm{\Theta }`$. The left-hand side can, due to translational invariance, be written as
$$\frac{1}{\mathrm{\Theta }}\int _0^\mathrm{\Theta }ds\int _0^\mathrm{\Theta }dt\,\dot{\mathrm{\Delta }}(t+s)\dot{\mathrm{\Delta }}(s)=\frac{1}{\mathrm{\Theta }}\int _0^\mathrm{\Theta }ds\left[\int _{-s}^{\mathrm{\Theta }-s}-\int _{-s}^0+\int _{\mathrm{\Theta }-s}^\mathrm{\Theta }\right]dt\,\dot{\mathrm{\Delta }}(t+s)\dot{\mathrm{\Delta }}(s).$$
(B1)
The first term on the right-hand side is $`(\mathrm{\Delta }(\mathrm{\Theta })-\mathrm{\Delta }(0))^2/\mathrm{\Theta }`$, and the second term reduces to
$`-{\displaystyle \frac{1}{\mathrm{\Theta }}}{\displaystyle \int _0^\mathrm{\Theta }}ds{\displaystyle \int _{-s}^0}dt\,\dot{\mathrm{\Delta }}(t+s)\dot{\mathrm{\Delta }}(s)`$ $`=`$ $`-{\displaystyle \frac{1}{\mathrm{\Theta }}}{\displaystyle \int _0^\mathrm{\Theta }}ds\,[\mathrm{\Delta }(s)-\mathrm{\Delta }(0)]\dot{\mathrm{\Delta }}(s)`$ (B2)
$`=`$ $`-{\displaystyle \frac{\mathrm{\Delta }^2(\mathrm{\Theta })-\mathrm{\Delta }^2(0)-2\mathrm{\Delta }(0)\mathrm{\Delta }(\mathrm{\Theta })+2\mathrm{\Delta }^2(0)}{2\mathrm{\Theta }}}`$ (B3)
$`=`$ $`-{\displaystyle \frac{\left[\mathrm{\Delta }(\mathrm{\Theta })-\mathrm{\Delta }(0)\right]^2}{2\mathrm{\Theta }}},`$ (B4)
where we have used $`2\mathrm{\Delta }(s)\dot{\mathrm{\Delta }}(s)=d\mathrm{\Delta }^2/ds`$. Thus the sum of the first two terms on the right-hand side of Eq. (B1) is equal to $`\left[\mathrm{\Delta }(\mathrm{\Theta })-\mathrm{\Delta }(0)\right]^2/2\mathrm{\Theta }`$ and it remains to prove that the third term vanishes in the limit $`\mathrm{\Theta }\to \mathrm{\infty }`$. This can be realized by changing the order of integration:
$`{\displaystyle \frac{1}{\mathrm{\Theta }}}{\displaystyle \int _0^\mathrm{\Theta }}ds{\displaystyle \int _{-s}^0}dt\,\dot{\mathrm{\Delta }}(\mathrm{\Theta }+t+s)\dot{\mathrm{\Delta }}(s)`$ $`=`$ $`{\displaystyle \frac{1}{\mathrm{\Theta }}}{\displaystyle \int _{-\mathrm{\Theta }}^0}dt{\displaystyle \int _{-t}^\mathrm{\Theta }}ds\,\dot{\mathrm{\Delta }}(\mathrm{\Theta }+t)\dot{\mathrm{\Delta }}(0)`$ (B5)
$`=`$ $`{\displaystyle \frac{1}{\mathrm{\Theta }}}{\displaystyle \int _{-\mathrm{\Theta }}^0}dt\,(\mathrm{\Theta }+t)\dot{\mathrm{\Delta }}(\mathrm{\Theta }+t)\dot{\mathrm{\Delta }}(0)`$ (B6)
$`=`$ $`{\displaystyle \frac{1}{\mathrm{\Theta }}}{\displaystyle \int _0^\mathrm{\Theta }}dx\,x\dot{\mathrm{\Delta }}(x)\dot{\mathrm{\Delta }}(0).`$ (B7)
A finite relaxation time $`\tau `$ means that $`\dot{\mathrm{\Delta }}(t)\dot{\mathrm{\Delta }}(0)\sim \mathrm{exp}(-t/\tau )`$, so that the last integral is finite for any finite $`\tau `$. This is the case at $`T_c`$ whenever the system size is finite, since $`\tau \sim L^z`$. Consequently, the third term vanishes in the limit $`\mathrm{\Theta }\to \mathrm{\infty }`$ for any finite $`L`$ and the step from Eq. (29) to Eq. (30) follows.
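This approximation is easy to verify numerically. In the sketch below, $`\dot{\mathrm{\Delta }}`$ is modeled as an Ornstein-Uhlenbeck process with correlation time $`\tau `$ (an assumption made purely for illustration; the actual $`\mathrm{\Delta }`$ comes from the simulated dynamics):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, tau, sig2 = 0.01, 1.0, 1.0            # time step, correlation time, variance
nseg, nt = 400, 20_000                    # Theta = nt*dt = 200*tau >> tau
a = np.exp(-dt / tau)
v = np.empty((nseg, nt))                  # v plays the role of Delta-dot
v[:, 0] = rng.normal(0.0, np.sqrt(sig2), nseg)
noise = rng.normal(0.0, np.sqrt(sig2 * (1.0 - a * a)), (nseg, nt))
for i in range(1, nt):
    v[:, i] = a * v[:, i - 1] + noise[:, i]
delta = np.cumsum(v, axis=1) * dt         # Delta(t) on each segment
Theta = nt * dt
lhs = sig2 * tau                          # exact integral of sig2*exp(-t/tau)
rhs = np.mean((delta[:, -1] - delta[:, 0]) ** 2) / (2.0 * Theta)
print(lhs, rhs)                           # agree to within a few percent
```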
# Naked Singularities in Higher Dimensional Inhomogeneous Dust Collapse
## 1 Introduction
The cosmic censorship hypothesis (CCH) is an important source of inspiration for research in general relativity. It states that the singularities produced by gravitational collapse must be hidden behind an event horizon. Moreover, according to the strong version of the CCH, such singularities are not even locally naked, i.e., no non-spacelike curve can emerge from such singularities. Since the conjecture has as yet no precise mathematical formulation, a significant amount of attention has been given to studying examples of gravitational collapse which lead to naked singularities with matter content that satisfies the energy conditions (see for recent reviews). In particular, the collapse of spherical matter in the form of dust forms naked shell-focusing singularities violating the CCH.
Recently, significant efforts have been expended to study gravitational collapse models in higher dimensional spacetime. Ilha et al have constructed Oppenheimer-Snyder models in higher dimensional spacetime. The self-similar solution of spherically symmetric gravitational collapse of a scalar field in higher dimensions is obtained in . Gravitational collapse of a perfect fluid in higher dimensional spacetime is studied by Rocha and Wang . In particular, one would like to understand the role played by extra dimensions in the formation and the nature of singularities. The solution for the higher dimensional spherically symmetric dust collapse is obtained in , which reduces to the well known Tolman-Bondi solution when the dimension of the spacetime becomes four. We shall call it the higher dimensional Tolman spacetime. They have shown the occurrence of naked singularities for the self-similar case. However, self-similarity is a strong geometric condition on the spacetime and thus gives rise to the possibility that the naked singularity could be an artifact of a geometric condition rather than of the gravitational dynamics of the matter therein.
The objective of this paper is to analyze in some detail the collapse of an inhomogeneous dust cloud in higher dimensions, covering both non-self-similar and self-similar models. We also assess the curvature strength of the central shell-focusing singularities. We find that gravitational collapse in a non-self-similar higher dimensional spacetime gives rise to a naked strong-curvature shell-focusing singularity, providing an explicit counter-example to the CCH.
## 2 Higher Dimensional Tolman Solution
The idea that spacetime should be extended from four to higher dimensions was introduced by Kaluza and Klein to unify gravity and electromagnetism. Five-dimensional ($`5D`$) spacetime is particularly relevant because both $`10D`$ and $`11D`$ supergravity theories yield solutions where a $`5D`$ spacetime results after dimensional reduction . The metric for the $`5D`$ case, in comoving coordinates, assumes the form:
$$ds^2=dt^2+\frac{R^2}{1+f\left(r\right)}dr^2+R^2d\mathrm{\Omega }^2$$
(1)
where $`d\mathrm{\Omega }^2=d\theta _1^2+sin^2\theta _1\left(d\theta _2^2+sin^2\theta _2d\theta _3^2\right)`$ is the metric of a 3-sphere, $`r`$ is the comoving radial coordinate, $`t`$ is the proper time of freely falling shells, $`R`$ is a function of $`t`$ and $`r`$ with $`R>0`$, and a prime denotes a partial derivative with respect to $`r`$. The energy momentum tensor is of the form:
$$T_{ab}=ϵ(t,r)u_au_b$$
(2)
where $`u_a`$ is the five velocity. The function $`R(t,r)`$ is the solution of
$$\dot{R}^2=\frac{F\left(r\right)}{R^2}+f\left(r\right)$$
(3)
where an overdot denotes the partial derivative with respect to t. The functions $`F\left(r\right)`$ and $`f\left(r\right)`$ are arbitrary, and result from the integration of the field equations. They are referred to as the mass and energy functions, respectively. Since in the present discussion we are concerned with gravitational collapse, we require that $`\dot{R}(t,r)<0`$.
The energy density $`ϵ(t,r)`$ is given by
$$ϵ(t,r)=\frac{3F^{\prime }}{2R^3R^{\prime }}$$
(4)
We have used units which fix the speed of light and the gravitational constant via $`8\pi G=c^4=1`$. For physical reasons, one assumes that the energy density $`ϵ`$ is everywhere nonnegative ($`ϵ\ge 0`$). Eq. (3) can easily be integrated to
$$t_c\left(r\right)-t=\frac{R^2}{\sqrt{F}}G\left(fR^2/F\right)$$
(5)
where $`G\left(x\right)`$ is the function given by
$$G\left(x\right)=\{\begin{array}{cc}\frac{\sqrt{1+x}-1}{x},\hfill & x\ne 0,\hfill \\ \frac{1}{2},\hfill & x=0.\hfill \end{array}$$
(6)
and where $`t_c\left(r\right)`$ is a function of integration which represents the time taken by the shell with coordinate $`r`$ to collapse to the centre. This is unlike the $`4D`$ case, where the functional form of $`G`$ is rather complicated . As it is possible to make an arbitrary relabeling of spherical dust shells by $`r\to g\left(r\right)`$, without loss of generality we fix the labeling by requiring that, on the hypersurface $`t=0`$, $`r`$ coincide with the radius
$$R(0,r)=r$$
(7)
This corresponds to the following choice of $`t_c\left(r\right)`$
$$t_c\left(r\right)=\frac{r^2}{\sqrt{F}}G\left(fr^2/F\right)$$
(8)
We denote by $`\rho \left(r\right)`$ the initial density:
$$\rho \left(r\right)\equiv ϵ(0,r)=\frac{3F^{\prime }}{2r^3},\qquad F\left(r\right)=\frac{2}{3}\int \rho \left(r\right)r^3dr$$
(9)
Given a regular initial surface, the time for the occurrence of the central shell-focusing singularity for the collapse developing from that surface is reduced as compared to the $`4D`$ case for the marginally bound collapse. The reason for this stems from the form of the mass function in Eq. (9). In a ball of radius $`0`$ to $`r`$, for any given initial density profile $`\rho \left(r\right)`$, the total mass contained in the ball is greater than in the corresponding $`4D`$ case. In the $`4D`$ case, the mass function $`F\left(r\right)`$ involves the integral $`\int \rho \left(r\right)r^2dr`$ , as compared to the factor $`r^3`$ in the $`5D`$ case. Hence, there is relatively more mass-energy collapsing in the spacetime as compared to the $`4D`$ case, because of the assumed overall positivity of mass-energy (energy condition). This explains why the collapse is faster in the $`5D`$ case.
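As a small numerical sketch of Eqs. (6) and (8) (the profiles $`F(r)`$ and $`f(r)`$ used below are illustrative placeholders):

```python
import numpy as np

def G(x):
    """G(x) of Eq. (6); G(0) = 1/2 is the x -> 0 limit of (sqrt(1+x)-1)/x."""
    return 0.5 if abs(x) < 1e-12 else (np.sqrt(1.0 + x) - 1.0) / x

def t_collapse(r, F, f):
    """t_c(r) of Eq. (8): proper time for the shell labeled r to reach the centre."""
    return r**2 / np.sqrt(F(r)) * G(f(r) * r**2 / F(r))

# marginally bound example, F = lambda0 * r^2 with lambda0 = 0.25 (illustrative):
print(t_collapse(1.0, lambda r: 0.25 * r**2, lambda r: 0.0))  # = 1/(2*sqrt(0.25)) = 1.0
```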
The easiest way to detect a singularity in a spacetime is to observe the divergence of some invariant of the Riemann tensor. Next we calculate one such quantity, the Kretschmann scalar ($`K=R_{abcd}R^{abcd}`$, $`R_{abcd}`$ the Riemann tensor). For the metric (1), it reduces to
$$K=7\frac{F^{\prime 2}}{R^6R^{\prime 2}}-36\frac{FF^{\prime }}{R^7R^{\prime }}+78\frac{F^2}{R^8}$$
(10)
The Kretschmann scalar and the energy density both diverge at $`t=t_c\left(r\right)`$, confirming the presence of a scalar polynomial curvature singularity . Thus the time coordinate and the radial coordinate are respectively in the ranges $`-\mathrm{\infty }<t<t_c\left(r\right)`$ and $`0\le r<\mathrm{\infty }`$. It has been shown that shell crossing singularities (characterized by $`R^{\prime }=0`$ and $`R>0`$) are gravitationally weak and hence such singularities cannot be considered seriously. Christodoulou pointed out in the $`4D`$ case that the non-central singularities are not naked. Hence, we shall confine our discussion to the central shell-focusing singularity.
## 3 Existence and Nature of Naked Singularity
It is known that, depending upon the inhomogeneity factor, the $`4D`$ Tolman-Bondi metric admits a central shell-focusing naked singularity in the sense that outgoing geodesics emanate from the singularity. Here we wish to investigate the analogous situation in our higher dimensional spacetime. In what follows, we shall confine ourselves to the marginally bound case $`\left(f=0\right)`$. Eq. (5), by virtue of eq. (7), leads to
$$R^2=r^2-2\sqrt{F}t$$
(11)
and the energy density becomes
$$ϵ(t,r)=\frac{3/2}{\left[t-\frac{2r\sqrt{F}}{F^{\prime }}\right]\left[t-\frac{r^2}{2\sqrt{F}}\right]}$$
(12)
We are free to specify $`F\left(r\right)`$ and we consider a class of models which are non-self-similar in general and from which, as a special case, self-similar models can be constructed. In particular, we suppose that $`F\left(r\right)=r^2\lambda \left(r\right)`$ with $`\lambda \left(0\right)=\lambda _0>0`$ (finite). With this choice, the density at the centre behaves as inversely proportional to the square of time, and $`F\left(r\right)\propto r^2`$ in the neighborhood of $`r=0`$. For spacetime to be self-similar, we require that $`\lambda \left(r\right)=const.`$ This class of models for $`4D`$ spacetime is discussed in refs. . From eq. (4) it is seen that the density at the centre ($`r=0`$) behaves with time as $`ϵ=3/(2t^2)`$. This means that the density becomes singular at $`t=0`$ and is finite at any time $`t=t_0<0`$. Thus the singularity arises from dust collapse which had a finite density distribution in the past on an initial epoch. At this initial nonsingular epoch, all the physical parameters, including the density, are finite and well-behaved. Thus our collapse starts from regular initial data.
We wish to investigate if the singularity, when the central shell with comoving coordinate $`\left(r=0\right)`$ collapses to the centre at time $`t=0`$, is naked. The singularity is naked iff there exists a null geodesic which emanates from the singularity. Let $`K^a=dx^a/dk`$ be the tangent vector to the radial null geodesic, where $`k`$ is an affine parameter. Then we derive the following equations
$$\frac{dK^t}{dk}+\dot{R}^{\prime }K^rK^t=0$$
(13)
$$\frac{dt}{dr}=\frac{K^t}{K^r}=R^{\prime }$$
(14)
The last equation, upon using eq. (3), turns out to be
$$\frac{dt}{dr}=\frac{2r-\frac{tF^{\prime }}{\sqrt{F}}}{2\sqrt{r^2-2t\sqrt{F}}}$$
(15)
Clearly this differential equation becomes singular at $`(t,r)=(0,0)`$. We now wish to put eq. (15) in a form that will be more useful for subsequent calculations. To this end, we define two new functions $`\eta =rF^{\prime }/F`$ and $`P=R/r`$. From eq. (11), for $`f=0`$, we have $`\dot{R}=-\sqrt{F}/R`$, and we can express $`F`$ in terms of $`r`$ by $`F\left(r\right)=r^2\lambda \left(r\right)`$. Eq. (15) can thus be re-written as
$$\frac{dt}{dr}=\left[\frac{t}{r}\eta -2\sqrt{\lambda }\right]\dot{R}=-\left[\frac{t}{r}\eta -2\sqrt{\lambda }\right]\frac{\sqrt{\lambda }}{P}$$
(16)
It can be seen that the functions $`\eta \left(r\right)`$ and $`P(r,t)`$ are well defined when the singularity is approached.
The nature (a naked singularity or a black hole) of the singularity can be characterised by the existence of radial null geodesics emerging from the singularity. The singularity is at least locally naked if there exist such geodesics, and if no such geodesics exist, it is a black hole. Let us define $`X=t/r`$. If the singularity is naked, then there exists a real and positive value of $`X_0`$ as a solution to the algebraic equation
$$X_0=\underset{t\to 0,r\to 0}{lim}X=\underset{t\to 0,r\to 0}{lim}\frac{t}{r}=\underset{t\to 0,r\to 0}{lim}\frac{dt}{dr}=\underset{t\to 0,r\to 0}{lim}R^{\prime }$$
(17)
We insert eq. (16) into (17) and use the result $`lim_{r\to 0}\eta =2`$ to get
$$X_0=\frac{2}{Q_0}\left[\lambda _0-X_0\sqrt{\lambda _0}\right]$$
(18)
where $`Q\left(X\right)=P(X,0)`$. From the definitions of $`P`$ and $`X`$, and eq. (11), we can derive the following equation
$$X-\frac{1}{2\sqrt{\lambda }}=-\frac{P^2}{2\sqrt{\lambda }}$$
(19)
from which it is clear that $`X\sqrt{\lambda }<1/2`$, as $`P`$ is a positive function. Since $`Q\left(X\right)=P(X,0)`$, from eq. (19) we get $`Q_0=\sqrt{1-2X_0\sqrt{\lambda _0}}`$. Substituting this into eq. (18) leads to the cubic equation
$$2z^3+\left(4\lambda _0-1\right)z^2-8\lambda _0^2z+4\lambda _0^3=0$$
(20)
where $`z=X_0\sqrt{\lambda _0}`$.
We are interested in positive roots of eq. (20) subject to the constraint that $`z<1/2`$, in which case outgoing null geodesics terminate at the singularity in the past. It is verified numerically that for $`\lambda _0\le 0.5480`$ (correct to four decimal places) eq. (20) has two positive real roots which satisfy the constraint $`z=X_0\sqrt{\lambda _0}<1/2`$. For example, if $`\lambda _0=0.25`$, then eq. (20) has the two roots $`z=0.1348`$ and $`0.4188`$ (which correspond to the two values $`X_0=0.2696`$ and $`0.8376`$). Thus it follows that the singularity will be at least locally naked when $`\lambda _0\le 0.5480`$. On the other hand, if the inequality is reversed, i.e., $`\lambda _0>0.5480`$, no naked singularity occurs and gravitational collapse of the dust cloud must result in a black hole. In the analogous $`4D`$ case, one gets a quartic equation and the shell-focusing singularity is naked iff $`\lambda _0<0.1809`$ . The global nakedness of the singularity can then be seen by making a junction onto the higher dimensional Schwarzschild spacetime, analogously to the $`4D`$ case (see ). Jhingan and Magli have pointed out that if locally naked singularities occur in dust spacetimes, then these spacetimes can be matched to spacetimes containing globally visible singularities.
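The root structure of eq. (20) can be checked directly; a minimal numerical sketch:

```python
import numpy as np

def naked_roots(lam0):
    """Real roots of Eq. (20) with 0 < z < 1/2 (naked-singularity condition)."""
    z = np.roots([2.0, 4.0 * lam0 - 1.0, -8.0 * lam0**2, 4.0 * lam0**3])
    z = z.real[np.abs(z.imag) < 1e-9]
    return np.sort(z[(z > 0.0) & (z < 0.5)])

print(naked_roots(0.25))   # -> [0.1348, 0.4188], the two roots quoted above
print(naked_roots(0.60))   # -> [] : no allowed root, a black hole forms
```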
### 3.1 Self-Similar Case
To support our analysis, we now specialise to the case of self-similar spacetime, which has been analyzed earlier by a different approach. As already mentioned, for the spacetime to be self-similar, we require that $`F\left(r\right)=\lambda r^2`$ ($`\lambda =const.`$) so that
$$R=r\sqrt{1-2\sqrt{\lambda }\frac{t}{r}}$$
and $`R^{}`$ can be expressed in terms of the quantity $`X=t/r`$ as
$$R^{\prime }=\frac{1-\sqrt{\lambda }X}{\sqrt{1-2\sqrt{\lambda }X}}$$
Eq. (17) with this results in:
$$2y^3+\left(\lambda -1\right)y^2-2\lambda y+\lambda =0$$
(21)
where $`y=X\sqrt{\lambda }`$. Eq. (21) has positive roots, subject to the constraint that $`y<1/2`$, if $`\lambda \le 0.0901`$. (The two roots $`y=0.2349`$ and $`0.4679`$ of Eq. (21) correspond to $`\lambda =0.05`$.) Thus, referring to our above discussion, self-similar collapse leads to a naked singularity for $`\lambda \le 0.0901`$ and to the formation of a black hole otherwise. It is well known that the formation of a naked singularity can be understood in terms of the inhomogeneity of the collapse. If the collapse is homogeneous, no naked singularity occurs. On the other hand, a naked singularity occurs if the collapse is sufficiently inhomogeneous, i.e., if the outer shells collapse much later than the central shells . The parameter $`B`$ which gives a measure of the inhomogeneity of the collapse is usually defined through $`t_c\left(r\right)=Br`$. In our case, on comparing this with eq. (8) (with $`f=0`$), $`B=1/(2\sqrt{\lambda })`$. So for the singularity to be naked we must have $`B>1.6657`$. This is in agreement with earlier work .
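The self-similar case can be checked in the same way; a short sketch reproducing the numbers above:

```python
import numpy as np

lam = 0.05
y = np.roots([2.0, lam - 1.0, -2.0 * lam, lam])      # Eq. (21)
y = y.real[np.abs(y.imag) < 1e-9]
print(np.sort(y[(y > 0.0) & (y < 0.5)]))             # -> [0.2349, 0.4679]
print(1.0 / (2.0 * np.sqrt(0.0901)))                 # B at the threshold: 1.6657
```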
### 3.2 Strength of Naked Singularity
An important aspect of a singularity is its gravitational strength . A singularity is gravitationally strong, or simply strong, if it destroys by crushing or stretching any object which falls into it. It is widely believed that a spacetime does not admit an extension through a singularity if it is a strong curvature singularity in the sense of Tipler . Clarke and Królak have shown that a sufficient condition for a strong curvature singularity as defined by Tipler is that, for at least one non-spacelike geodesic with affine parameter $`k`$, in the limiting approach to the singularity we must have
$$\underset{k\to 0}{lim}k^2\psi =\underset{k\to 0}{lim}k^2R_{ab}K^aK^b>0$$
(22)
where $`R_{ab}`$ is the Ricci tensor. Our purpose here is to investigate the above condition along future directed radial null geodesics which emanate from the naked singularity. Now $`k^2\psi `$, with the help of eqs. (2) and (4), can be expressed as
$$k^2\psi =k^2\frac{3F^{\prime }\left(K^t\right)^2}{2R^3R^{\prime }}=\frac{3F^{\prime }}{2rPR^{\prime }}\left[\frac{kK^t}{R}\right]^2$$
(23)
Using our previous results, we find that
$$\underset{k\to 0}{lim}k^2\psi =\frac{3\lambda _0X_0}{Q_0\left(X_0\sqrt{\lambda _0}\right)^2}>0$$
(24)
as the singularity is naked for $`X_0\sqrt{\lambda _0}<1/2`$. Thus along radial null geodesics coming out from the singularity, the strong curvature condition is satisfied.
## 4 Conclusion
The occurrence and curvature strength of a shell focusing naked singularity in a non self-similar higher dimensional spherically symmetric collapse of a dust cloud has been investigated. We found that naked singularities in our case occur for a slightly higher value of the inhomogeneity parameter in comparison to the analogous situation in the $`4D`$ case. Along the null ray emanating from the naked singularity, the strong curvature condition (22) is satisfied. The models constructed here are non self-similar in general and for the special case $`\lambda =const.`$, reduce to the self-similar case. Whereas we have applied this formalism to the $`5D`$ case, there is no reason to believe that it cannot be extended to a spacetime of any dimension $`\left(n\ge 4\right)`$. The formation of these naked singularities violates the strong CCH. We do not claim any particular physical significance to the $`5D`$ metric considered. Nevertheless we think that the results obtained here have some interest in the sense that they do offer the opportunity to explore properties associated with naked singularities.
Acknowledgment: SGG would like to thank the University of Zululand for hospitality, the NRF (South Africa) for financial support, Science College, Congress Nagar, Nagpur (India) for granting leave and IUCAA, Pune for a visit where part of this work was done. AB thanks the Mehta Research Institute, Allahabad, India for kind hospitality. The authors are grateful to an anonymous referee for constructive criticism.
# Nonlinear Modes of Liquid Drops as Solitary Waves
## Abstract
The nonlinear hydrodynamic equations of the surface of a liquid drop are shown to be directly connected to Korteweg-de Vries (KdV, MKdV) systems, giving traveling solutions that are cnoidal waves. They generate multiscale patterns ranging from small harmonic oscillations (linearized model), to nonlinear oscillations, up through solitary waves. These non-axis-symmetric localized shapes are also described by a KdV Hamiltonian system. Recently such “rotons” were observed experimentally when the shape oscillations of a droplet became nonlinear. The results apply to drop-like systems from cluster formation to stellar models, including hyperdeformed nuclei and fission.
A fundamental understanding of non-linear oscillations of a liquid drop (NLD), which reveals new phenomena and flows more complicated than linear theory suggests, is needed in diverse areas of science and technology. Besides their direct use in rheological and surfactant theory , such models apply to cluster physics , super- and hyper-deformed nuclei , nuclear break-up and fission , thin films , radar and even stellar masses and supernova . Theoretical approaches are usually based on numerical calculations within different NLD models, and explain/predict axis-symmetric, non-linear oscillations that are in very good agreement with experiment . However, there are experimental results which show non-axis-symmetric modes; for example, traveling rotational shapes that can lead to fission, cluster emission, or fusion .
In this letter the existence of analytic solutions of NLD models that give rise to traveling solutions which are solitary waves is proven. Higher order non-linear terms in the deviation of the shape from a sphere produce surface oscillations that are cnoidal waves . By increasing the amplitude of these oscillations, the non-linear contribution grows and the drop’s surface, under special conditions (non-zero angular momentum), can transform from a cnoidal wave form into a solitary wave. This same evolution can occur if there is a non-linear coupling between the normal modes. Thus this approach leads to a unifying dynamical picture of such modes; specifically, the cnoidal solution simulates harmonic oscillations developing into anharmonic ones, and under special circumstances these cnoidal wave forms develop into solitary waves. Of course, in the linear limit the theory reproduces the normal modes of oscillation of a surface.
Two approaches are used: Euler equations , and Hamiltonian equations, which describe the total energy of the system . We investigate finite amplitude waves, for which the relative amplitude is smaller than the angular half-width. These excitations are also “long” waves, important in the case of externally driven systems, where the excited wavelength depends on the driving frequency. The first original observations of travelling waves on liquid drops are described in . Similar travelling or running waves are also discussed or quoted in . These results suggest that higher amplitude non-linear oscillations can lead to a traveling wave that originates on the drop’s surface and develops towards the interior. This is shown to be related in a simple way to special solitary wave solutions, called “rotons” in the present analysis. Recent experiments and numerical tests suggest the existence of stable traveling waves for a non-linear dynamics in a circular geometry, re-enforcing the theory.
A new NLD model for describing an ideal, incompressible fluid drop exercising irrotational flow with surface tension is employed in the analysis. Series expansions in terms of spherical harmonics are replaced by localized, nonlinear shapes shown to be analytic solutions of the system. The flow is potential and therefore governed by Laplace’s equation for the flow potential, $`\mathrm{\Delta }\mathrm{\Phi }=0`$, while the dynamics is described by Euler’s equation,
$$\rho (\partial _t\stackrel{}{v}+(\stackrel{}{v}\cdot \mathrm{\nabla })\stackrel{}{v})=-\mathrm{\nabla }P+\stackrel{}{f},$$
(1)
where $`P`$ is pressure. If the density of the external force field is also potential, $`\stackrel{}{f}=-\mathrm{\nabla }\mathrm{\Psi }`$ where $`\mathrm{\Psi }`$ is proportional to the potential (gravitational, electrostatic, etc.), then Eq. (1) reduces to Bernoulli’s scalar equation. The boundary conditions (BC) on the external free surface of the drop, $`\mathrm{\Sigma }1`$, and on the inner surface $`\mathrm{\Sigma }2`$, are $`\dot{r}|_{\mathrm{\Sigma }1}=(r_t+r_\theta \dot{\theta }+r_\varphi \dot{\varphi })|_{\mathrm{\Sigma }1}`$ and $`\dot{r}|_{\mathrm{\Sigma }2}=0`$, respectively. $`\mathrm{\Phi }_r=\dot{r}`$ is the radial velocity, and $`\mathrm{\Phi }_\theta =r^2\dot{\theta }`$, $`\mathrm{\Phi }_\varphi =r^2\mathrm{sin}^2\theta \dot{\varphi }`$ are the tangential velocities. The second BC occurs only in the case of fluid shells or bubbles. A convenient geometry places the origin at the center-of-mass of the distribution $`r(\theta ,\varphi ,t)=R_0[1+g(\theta )\eta (\varphi -Vt)]`$ and introduces for the dimensionless shape function $`g\eta `$ a variable denoted $`\xi `$. Here $`R_0`$ is the radius of the undeformed spherical drop and $`V`$ is the tangential velocity of the traveling solution $`\xi `$ moving in the $`\varphi `$ direction and having a constant transversal profile $`g`$ in the $`\theta `$ direction. The linearized form of the first BC, $`\dot{r}|_{\mathrm{\Sigma }1}=r_t|_{\mathrm{\Sigma }1}`$, allows only radial vibrations and no tangential motion of the fluid on $`\mathrm{\Sigma }1`$ . The second BC restricts the radial flow to a spherical layer of depth $`h(\theta )`$ by requiring $`\mathrm{\Phi }_r|_{r=R_0-h}=0`$. This condition stratifies the flow in the surface layer, $`R_0-h\le r\le R_0(1+\xi )`$, and the liquid bulk $`r\le R_0-h`$. In what follows the flow in the bulk will be considered negligible compared to the flow in the surface layer. This condition does not restrict the generality of the argument because $`h`$ can always be taken to be $`R_0`$. Nonetheless, keeping $`h<R_0`$ opens possibilities for the investigation of more complex fluids, e.g. superfluids, flow over a rigid core, multilayer systems or multiphases, etc. Instead of an expansion of $`\mathrm{\Phi }`$ in terms of spherical harmonics, consider the following form
$$\mathrm{\Phi }=\underset{n=0}{\overset{\mathrm{}}{}}(r/R_01)^nf_n(\theta ,\varphi ,t).$$
(2)
The convergence of the series is controlled by the value of the small quantity $`ϵ=max|\frac{r-R_0}{R_0}|`$ . The condition $`max|h/R_0|\simeq ϵ`$ is also assumed to hold in the following development. Laplace’s equation introduces a system of recursion relations for the functions $`f_n`$, $`f_2=-f_1-\mathrm{\Delta }_\mathrm{\Omega }f_0/2`$, etc., where $`\mathrm{\Delta }_\mathrm{\Omega }`$ is the $`(\theta ,\varphi )`$ part of the Laplacean. Hence the set of unknown $`f_n`$’s reduces to $`f_0`$ and $`f_1`$. The second BC, plus the condition $`\xi _\varphi =-V^{-1}\xi _t`$ for traveling waves, yields to second order in $`ϵ`$,
$$f_{0,\varphi }=VR_0^3\mathrm{sin}^2\theta \xi (1+2\xi )/h+𝒪_3(\xi ),$$
(3)
i.e., a connection between the flow potential and the shape, which is typical of nonlinear systems. Eq. (3), together with the relations $`f_1\simeq R_0^2\xi _t-\frac{2h}{R_0}f_2-\frac{h\mathrm{\Delta }_\mathrm{\Omega }f_0}{R_0+2h}`$, which follow from the BC and the recursion, characterize the flow as a function of the surface geometry. The balance of the dynamic and capillary pressure across the surface $`\mathrm{\Sigma }1`$ follows by expanding up to third order in $`\xi `$ the square root in the surface energy of the drop,
$$U_S=\sigma R_0^2_{\mathrm{\Sigma }1}(1+\xi )\sqrt{(1+\xi )^2+\xi _\theta ^2+\xi _\varphi ^2/\mathrm{sin}^2\theta }𝑑\mathrm{\Sigma },$$
(4)
and by equating its first variation with the local mean curvature of $`\mathrm{\Sigma }1`$ under the restriction of the volume conservation. The surface pressure, in third order, reads
$$P|_{\mathrm{\Sigma }1}=\frac{\sigma }{R_0}(-2\xi -4\xi ^2-\mathrm{\Delta }_\mathrm{\Omega }\xi +3\xi \xi _\theta ^2ctg\theta ),$$
(5)
where $`\sigma `$ is the surface pressure coefficient and the terms $`\xi _{\varphi ,\theta },\xi _{\varphi ,\varphi }`$ and $`\xi _{\theta ,\theta }`$ are neglected because the relative amplitude of the deformation $`ϵ`$ is smaller than the angular half-width $`L`$, $`\xi \xi _{\varphi \varphi }\simeq ϵ^2/L^2\ll 1`$, as most of the experiments concerning traveling surface patterns show. Eq. (5) plus the BC yield, to second order in $`ϵ`$,
$`\mathrm{\Phi }_t|_{\mathrm{\Sigma }1}`$ $`+`$ $`{\displaystyle \frac{V^2R_0^4\mathrm{sin}^2\theta }{2h^2}}\xi ^2`$ (6)
$`=`$ $`{\displaystyle \frac{\sigma }{\rho R_0}}(2\xi +4\xi ^2+\mathrm{\Delta }_\mathrm{\Omega }\xi -3\xi ^2\xi _\theta ctg\theta ).`$ (7)
The linearized version of Eq. (7) together with the linearized BC, $`\mathrm{\Phi }_r|_{\mathrm{\Sigma }1}=R_0\xi _t`$, yields a limiting case of the model, namely the normal modes of oscillation of a liquid drop, with spherical harmonic solutions . Differentiation of Eq. (7) with respect to $`\varphi `$, together with Eqs. (3,5), yields the dynamical equation for the evolution of the shape function $`\eta (\varphi -Vt)`$:
$$A\eta _t+B\eta _\varphi +Cg\eta \eta _\varphi +D\eta _{\varphi \varphi \varphi }=0,$$
(8)
which is the Korteweg-de Vries (KdV) equation with coefficients depending parametrically on $`\theta `$
$$A=V\frac{R_0^2(R_0+2h)\mathrm{sin}^2\theta }{h},B=\frac{\sigma }{\rho R_0}\frac{(2g+\mathrm{\Delta }_\mathrm{\Omega }g)}{g},$$
$$C=8\left(\frac{V^2R_0^4\mathrm{sin}^4\theta }{8h^2}-\frac{\sigma }{\rho R_0}\right),D=\frac{\sigma }{\rho R_0\mathrm{sin}^2\theta }.$$
(9)
In the case of a two-dimensional liquid drop, the coefficients in Eq. (9) are all constant. Eq. (8) has traveling wave solutions in the $`\varphi `$ direction if $`Cg/(B-AV)`$ and $`D/(B-AV)`$ do not depend on $`\theta `$. These two conditions introduce two differential equations for $`g(\theta )`$ and $`h(\theta )`$ which can be solved with the boundary conditions $`g=h=0`$ for $`\theta =0,\pi `$. For example, $`h_1=R_0\mathrm{sin}^2\theta `$ and $`g_1=P_2^2(\theta )`$ is a particular solution which is valid for $`h\le R_0`$. It represents a soliton with a quadrupole transverse profile, in good agreement with . The next higher order term in Eq. (7), $`3\xi ^2\xi _\theta ctg\theta `$, introduces a $`\eta ^2\eta _\varphi `$ nonlinear term into the dynamics and transforms the KdV equation into the modified KdV equation . The traveling wave solutions of Eq. (8) are then described by the Jacobi elliptic function
$$\eta =\alpha _3+(\alpha _2-\alpha _3)sn^2\left(\sqrt{\frac{C(\alpha _3-\alpha _2)}{12D}}(\varphi -Vt);m\right),$$
(10)
where the $`\alpha _i`$ are the constants of integration introduced through Eq. (8) and are related through the velocity $`V=C(\alpha _1+\alpha _2+\alpha _3)/3A+B/A`$ and $`m^2=\frac{\alpha _3-\alpha _2}{\alpha _3-\alpha _1}`$. $`m\in [0,1]`$ is the free parameter of the elliptic $`sn`$ function. This result for Eq. (10) is known as a cnoidal wave solution with angular period $`T=K[m]\sqrt{C(\alpha _3-\alpha _1)/3D}`$, where $`K(m)`$ is the Jacobi elliptic integral. If $`\alpha _2-\alpha _1\to 0`$, then $`m\to 1`$, $`T\to \mathrm{\infty }`$ and a one-parameter ($`\eta _0`$) family of traveling pulses (solitons or anti-solitons) is obtained,
$$\eta _{sol}=\eta _0sech^2[(\varphi -Vt)/L],$$
(11)
with velocity $`V=\eta _0C/3A+B/A`$ and angular half-width $`L=\sqrt{12D/C\eta _0}`$. Taking for the coefficients $`A`$ to $`D`$ the values given in Eq. (9) for $`\theta =\pi /2`$ (the equatorial cross section) and $`h_1`$, $`g_1`$ from above, one can calculate numerical values of the parameters of any roton excitation as a function of $`\eta _0`$ only.
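As a concrete sketch of Eqs. (10) and (11) (the coefficients below are illustrative numbers, not values computed from Eq. (9)):

```python
import numpy as np
from scipy.special import ellipj

def eta_cnoidal(u, a1, a2, a3, C, D):
    """Cnoidal profile of Eq. (10) in the co-moving variable u = phi - V*t."""
    m = (a3 - a2) / (a3 - a1)                 # parameter m = k^2 of sn
    w = np.sqrt(C * (a3 - a2) / (12.0 * D))
    sn = ellipj(w * u, m)[0]
    return a3 + (a2 - a3) * sn**2

def eta_soliton(u, eta0, A, B, C, D):
    """Soliton limit, Eq. (11), with V and L as given in the text."""
    V = eta0 * C / (3.0 * A) + B / A
    L = np.sqrt(12.0 * D / (C * eta0))
    return eta0 / np.cosh(u / L) ** 2, V, L

u = np.linspace(-np.pi, np.pi, 201)
# alpha_2 - alpha_1 -> 0 drives m -> 1, and the cnoidal wave tends to Eq. (11):
near_soliton = eta_cnoidal(u, -1e-6, 0.0, 0.1, C=1.0, D=0.05)
```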
The soliton, among other wave patterns, has a special shape-kinematic dependence, $`\eta _0\sim V\sim 1/L^2`$; a higher soliton is narrower and travels faster. This relation can be used to experimentally distinguish solitons from other modes or turbulence. When a layer thins ($`h\to 0`$) the coefficient $`C`$ in eq. (8) approaches zero on average, producing a break in the traveling wave solution ($`L`$ becomes singular) because of the change of sign under the square root, eq. (9). Such wave turbulence from capillary waves on thin shells was first observed in . For the water shells described there, eq. (8) gives $`h(\mu \mathrm{m})\simeq 20\nu /k`$, that is $`h`$=15-25 $`\mu `$m at $`V`$=2.1-2.5 m $`s^{-1}`$ for the onset of wave turbulence, in good agreement with the abrupt transition experimentally noticed. The cnoidal solutions provide the nonlinear wave interaction and the transition from competing linear wave modes ($`C\ne 0`$) to turbulence ($`C\to 0`$). In the KdV eq. (8), the nonlinear interaction balances or even dominates the linear damping and the cnoidal (roton) mode occurs as a bend mode ($`h`$ small and coherent traveling profile), in agreement with . The condition for the existence of a positive amplitude soliton is $`gCD\ge 0`$ which, for $`g\ne 0`$, limits the velocity from below to the value $`V\ge h\omega _2/R_0`$, where $`\omega _2`$ is the Lamb frequency for the $`\lambda =2`$ linear mode . This inequality can be related to the “independent running wave” described in , which lies close to the $`\lambda =2`$ mode. Moreover, since the angular group velocity of the $`(\lambda ,\mu )`$ normal mode, $`V_{\lambda ,\mu }=\omega _\lambda /\mu `$, has practically the same value for $`\lambda =2`$ ($`\mu =0,\pm 1`$, tesseral harmonics) and for $`\lambda =\mu `$, any $`\lambda `$ (sectorial harmonics), this inequality seems to be essential for any combination of rank 2 tesseral or sectorial harmonics, in good agreement with the conclusions in . The periodic limit of the cnoidal wave is reached for $`m\to 0`$, that is, $`\alpha _2-\alpha _3\to 0`$, and the shape is characterized by harmonic oscillations ($`sn\to \mathrm{sin}`$ in Eq. (10)) which realize the quadrupole mode of the linear theory ($`Y_2^\mu `$ limit) or the oscillations of tesseral harmonics , Fig. 1.
The NLD model introduced in this paper yields a smooth transition from linear oscillations to solitary traveling solutions (“rotons”) as a function of the parameters $`\alpha _i`$; namely, a transition from periodic to non-periodic shape oscillations. In between these limits the surface is described by nonlinear cnoidal waves. In Fig. 1 the transition from the periodic limit to a solitary wave is shown, in comparison with the corresponding normal modes which can initiate such cnoidal nonlinear behavior. This situation is similar to the transformation of the flow field from periodic modes at small amplitude to traveling waves at larger amplitude . The solution takes its final form if the volume conservation restriction is enforced: $`\int _\mathrm{\Sigma }(1+g(\theta )\eta (\varphi ,t))^3𝑑\mathrm{\Omega }=4\pi `$, which requires $`\eta (\varphi ,t)`$ to be periodic. The periodicity condition, $`nK[(\alpha _3-\alpha _2)/(\alpha _3-\alpha _1)]=\pi \sqrt{\alpha _3-\alpha _1}`$ for any positive integer $`n`$, is only fulfilled for a finite number of $`n`$ values, and hence a finite number of corresponding cnoidal modes. In the roton limit the periodicity condition becomes a quasi-periodic one because the amplitude decays rapidly. This approach could be extended to describe elastic modes of the surface as well as their nonlinear coupling to capillary waves. The double-periodic structure of the elliptic solutions could describe the new family of normal wave modes predicted in .
The development up to this point was based on Euler’s equation. The same result will now be shown to emerge from a Hamiltonian analysis of the NLD system. Recently, Natarajan and Brown showed that the NLD is a Lagrangian system, with the volume conservation condition acting as a Lagrange multiplier. In third order in the deviation from a sphere, the NLD becomes a KdV infinite-dimensional Hamiltonian system described by a nonlinear Hamiltonian function $`H=\int _0^{2\pi }\mathcal{H}\,d\varphi `$. In the linear approximation, the NLD is a linear wave Hamiltonian system . If terms depending on $`\theta `$ are absorbed into definite integrals (becoming parameters), the total energy is a function of $`\eta `$ only. Taking the kinetic energy from , $`\mathrm{\Phi }`$ from Eq. (2), and using the BC, the dependence of the kinetic energy on the tangential velocity along the $`\theta `$ direction, $`\mathrm{\Phi }_\theta `$, becomes negligible and the kinetic energy can be expressed as a $`T[\eta ]`$ functional. For traveling wave solutions $`\partial _t=-V\partial _\varphi `$, to third order in $`ϵ`$, after a tedious but feasible calculation, the total energy is:
$$E=_0^{2\pi }(𝒞_1\eta +𝒞_2\eta ^2+𝒞_3\eta ^3+𝒞_4\eta _\varphi ^2)𝑑\varphi ,$$
(12)
where $`𝒞_1=2\sigma R_0^2S_{1,0}^{1,0}`$, $`𝒞_2=\sigma R_0^2(S_{1,0}^{1,0}+S_{0,1}^{1,0}/2)+R_0^6\rho V^2S_{2,1}^{3,1}/2`$, $`𝒞_3=\sigma R_0^2S_{1,2}^{1,0}/2+R_0^6\rho V^2(2S_{1,2}^{3,1}R_0+S_{2,3}^{5,2}+R_0S_{2,3}^{6,2})/2`$, $`𝒞_4=\sigma R_0^2S_{2,0}^{1,0}/2`$, with $`S_{i,j}^{k,l}=R_0^{-l}\int _0^\pi h^lg^ig_\theta ^j\mathrm{sin}^k\theta d\theta `$. Terms proportional to $`\eta \eta _\varphi ^2`$ can be neglected since they introduce a factor $`\eta _0^3/L^2`$ which is small compared to $`\eta _0^3`$, i.e. it is of third order in $`ϵ`$.
$$_0^{2\pi }\eta _t𝑑\varphi =_0^{2\pi }(2𝒞_2\eta _\varphi +6𝒞_3\eta \eta _\varphi 2𝒞_4\eta _{\varphi \varphi \varphi })𝑑\varphi .$$
(13)
Since for the function $`\eta (\varphi -Vt)`$ the LHS of Eq. (13) is zero, the integrand on the RHS gives the KdV equation. Hence, the energy of the NLD model, in third order, is interpreted as a Hamiltonian of the KdV equation . This is in full agreement with the result finalized by Eq. (8) for an appropriate choice of the parameters and the Cauchy conditions for $`g,h`$. The dependence of $`E(\alpha _1,\alpha _2)|_{Vol=constant}`$, Eq. (11), shows an energy minimum in which the solitary waves are stable .
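The functional-derivative step behind Eq. (13) can be verified symbolically; a minimal check with generic coefficients:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

phi = sp.Symbol('varphi')
C1, C2, C3, C4 = sp.symbols('C_1 C_2 C_3 C_4')
eta = sp.Function('eta')(phi)
# Hamiltonian density of Eq. (12)
H = C1*eta + C2*eta**2 + C3*eta**3 + C4*sp.Derivative(eta, phi)**2

# delta H / delta eta via the Euler-Lagrange operator, then one more d/dphi
dH = euler_equations(H, eta, phi)[0].lhs
print(sp.expand(sp.diff(dH, phi)))
# -> 2*C2*eta' + 6*C3*eta*eta' - 2*C4*eta''' : the integrand of Eq. (13)
```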
The nonlinear coupling of modes in the cnoidal solution could explain the occurrence of many resonances for the $`l=2`$ mode of rotating liquid drops at a given (higher) angular velocity . The rotating quadrupole shape is close to the soliton limit of the cnoidal wave. On one hand, the existence of many resonances is a consequence of the multi-valley profile of the effective potential energy for the KdV (MKdV) equation: $`\eta _x^2=a\eta +b\eta ^2+c\eta ^3+(d\eta ^4)`$ . The frequency shift predicted by Busse in can be reproduced in the present theory by choosing the solution $`h_1=R_0sin\theta /2`$. This results in the same additional pressure drop, $`V^2\rho R_0^2sin^2\theta /2`$, as in , and hence a similar result. For a roton emerging from an $`l=2`$ mode, by calculating the half-width ($`L_2`$) and amplitude ($`\eta _{max,2}`$) which fit the quadrupole shape, one obtains a law for the frequency shift: $`\mathrm{\Delta }\omega _2/\omega _2=(1\pm 4L^2(\alpha _3-\alpha _2)/3R_0)^{-1}V/\omega _2`$, showing good agreement with the observations of Annamalai et al in , i.e. many resonances and a nonlinear dependence of the shift on $`\mathrm{\Omega }=V`$. The special damping of the $`l=2`$ mode for rotating drops could also be a consequence of the existence of the cnoidal solution. An increase in the velocity $`V`$ modifies the balance of the coefficients $`C/D`$, which is equivalent to an increase in dispersion.
The model introduced in this article proves that traveling analytic solutions exist as cnoidal waves on the surface of a liquid drop. These traveling deformations (“rotons”) can range from small oscillations (normal modes), to cnoidal oscillations, and on out to solitary waves. The same approach can be applied to bubbles as well, except that the boundary condition on $`\mathrm{\Sigma }_2`$ is replaced by a far-field condition (recently important in the context of single bubble sonoluminescence). Nonlinear phenomena cannot be fully investigated with normal linear tools, e.g. spherical harmonics. Using analytic non-linear solutions sacrifices the linearity of the space but replaces it with multiscale dynamical behavior, typical for non-linear systems (solitons, wavelets, compactons ). They can be applied to phenomena like cluster formation in nuclei, fragmentation or cold fission, the dynamics of the pellet surface in inertial fusion, stellar models, and so forth.
Supported by the U.S. National Science Foundation through a regular grant, No. 9603006, and a Cooperative Agreement, No. EPS-9550481, that includes matching from the Louisiana Board of Regents Support Fund.
# Comment on “Kagomé Lattice Antiferromagnet Stripped to Its Basics”
In a recent letter, Azaria et al studied a 3-spin wide strip of the Kagome-lattice spin-$`\frac{1}{2}`$ Heisenberg model, with the goal of understanding the large number of low-lying singlet states observed in 2D Kagome clusters. Using a number of approximate field-theoretical mappings, they concluded that this system had a nondegenerate, undimerized ground state, with a gap to spin excitations, but with gapless singlet excitations. The Lieb-Schultz-Mattis theorem , which requires there to be at least one additional zero-energy state in the thermodynamic limit, allows this never-before-seen possibility.
A subsequent study , using the numerical density matrix renormalization group (DMRG), verified the existence of a spin-gap, but was inconclusive about the key issues of degenerate ground states and gapless singlet excitations. Here, also using DMRG, we study much larger systems to examine these issues. We find that, contrary to the results of Azaria et al, the ground state of this system is spontaneously dimerized, with degenerate ground states. There is a very small spin-gap in the system but also a gap to singlet excitations. Above the ground states, the gap to the singlet excitations is larger than for the triplets. These results imply that this system is more analogous to the Majumdar-Ghosh model, rather than to a novel spin liquid. Thus, the underlying field theory needs to be reexamined.
We studied systems up to length $`1024\times 3`$, keeping up to 400 states per block, using open boundary conditions. We found that the unmodified open ends of the strip have low lying triplet end excitations, making it difficult to observe the bulk gaps. Therefore, we terminated the ends using a $`2\times 2`$ cluster of spins, as shown on the left side of Fig. 1, which served to push all end excitations above the bulk gaps. Here, all exchange couplings on the ends and in the bulk have identical values $`J`$. In Fig. 2 we show the gap to the lowest lying triplet state, with the modified ends, as a function of the system length. We are able to resolve a very small triplet gap of $`\mathrm{\Delta }/J=0.0104(5)`$. Details of the fit will be given elsewhere.
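The precise fit form behind $`\mathrm{\Delta }/J=0.0104(5)`$ is not given here; a plausible sketch of such a finite-size extrapolation (with hypothetical data and an assumed exponential form) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def gap(L, delta_inf, a, xi):
    # assumed form Delta(L) = Delta_inf + a*exp(-L/xi); illustration only
    return delta_inf + a * np.exp(-L / xi)

L = np.array([64.0, 128.0, 256.0, 512.0, 1024.0])        # hypothetical lengths
d = np.array([0.031, 0.019, 0.0125, 0.0107, 0.0104])     # hypothetical gaps
popt, _ = curve_fit(gap, L, d, p0=(0.01, 0.05, 100.0))
print(popt[0])                                           # extrapolated bulk gap
```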
We find that the bulk is dimerized. In Fig. 1, we show the local bond strengths, with a clearly visible dimerization pattern, on one end of a small $`32\times 3`$ system. Results for systems as large as $`1024\times 3`$ demonstrate that this dimerization pattern persists in the bulk. For example, in the bulk we find that the value of $`\stackrel{}{S}_i\cdot \stackrel{}{S}_j`$ with $`i`$ and $`j`$ taking sequential values along the first leg follows the pattern: -0.071, -0.529, -0.071, -0.635, -0.071, -0.529, etc. These values are well-converged both in the length of the system and in the number of states kept. The singlet state representing the shifted dimerization pattern ground state is visible using periodic boundary conditions, where we found a single very low lying singlet excited state below the triplet gap on systems as large as $`48\times 3`$. In open systems, the boundaries push this state above the triplet gap. The entire pattern of states is very similar to that of the Majumdar-Ghosh model. These Kagome strips do not provide insight into the large number of singlet states observed in 2D Kagome clusters.
We thank Ian Affleck for discussions. This work is supported in part by the NSF under grants DMR98-70930, PHY94-07194, and DMR96-16574.
# Determination of the complex microwave photoconductance of a single quantum dot
## Abstract
A small quantum dot containing approximately 20 electrons is realized in a two-dimensional electron system of an AlGaAs/GaAs heterostructure. Conventional transport and microwave spectroscopy reveal the dot’s electronic structure. By applying a coherently coupled two-source technique, we are able to determine the complex microwave induced tunnel current. The amplitude of this photoconductance resolves photon-assisted tunneling (PAT) in the non-linear regime through the ground state and an excited state as well. The out-of-phase component (susceptance) allows to study charge relaxation within the quantum dot on a time scale comparable to the microwave beat period.
preprint: to be submitted to Phys. Rev. B
Spectroscopy on quantum dots is commonly performed either by non-linear transport or by microwave measurements . Ordinary linear transport, i.e. under a small forward bias $`V_{ds}`$ between source and drain contacts and without microwave irradiation, only involves quantum dot ground states. In the non-linear case, by applying a finite bias across the ‘artificial atom’ , also excited quantum dot states can participate in transport. Alternatively, in the presence of a microwave field electrons can absorb or emit photons and thus reach excited quantum dot states otherwise not available in linear transport – a phenomenon known as photon-assisted tunneling (PAT) . In a combination of the two methods described, we use two coherently coupled microwave sources with a slight frequency offset and detect the complex photoconductance signal (microwave induced tunneling current) at the difference frequency. In this way, we are not limited by the broadening of the conductance resonances due to the finite bias and thus are able to resolve PAT in the non-linear regime as well. Furthermore, the detected photoconductance contains the in-phase part (conductance) and out-of-phase part (susceptance). The variation of these two different responses indicates the different dynamics of the involved transport processes through the artificial atom.
For the observation of PAT through excited states the size of the quantum dot system is crucial: First, the dot has to be small enough to have a mean energy level spacing $`\overline{\mathrm{\Delta }}`$ large compared to the intrinsic or thermal broadening of the conductance resonances, i.e. $`\mathrm{\Gamma },k_BT<\overline{\mathrm{\Delta }}=2\mathrm{\hbar }^2/m^{*}r^2`$, where $`\mathrm{\Gamma }`$ denotes the intrinsic level broadening, $`T`$ the temperature, $`r`$ the radius of the dot and $`m^{*}\approx 0.067m_e`$ the effective electron mass. Second, the excited state must be attainable via absorption of one or a few photons, i.e. $`hf\sim \overline{\mathrm{\Delta }}`$, where $`f`$ is the microwave frequency. In order to form such a small laterally confined quantum dot, patterned split gates are adopted to selectively deplete the two-dimensional electron system (2DES) of an AlGaAs/GaAs heterostructure. The split gates are fabricated on the surface of the heterostructure using electron beam lithography. A schematic drawing of the structure is shown in the inset of Fig. 1. The gate structure separates a small electronic island (with a lithographic radius of about 100 nm) from the 2DES via tunneling barriers. The 2DES itself is located 50 nm below the surface of the heterostructure and has a carrier density of $`n_s\approx 2\times 10^{11}`$ cm<sup>-2</sup> and a low temperature mobility of $`\mu \approx 8\times 10^5`$ cm<sup>2</sup>/Vs.
In order to characterize the electronic structure of the artificial atom, at first standard direct current (dc) transport measurements without high frequency irradiation are performed. The measurements are conducted in a dilution refrigerator at $`140`$ mK bath temperature which is higher than its possible minimum value of $`20`$mK. This is due to heat leakage through the coaxial lines used to couple the high frequency radiation to our sample. In the tunneling regime at $`V_{ds}=0`$ the conductance of the quantum dot is normally zero due to Coulomb blockade (CB) . By tuning one of the gate voltages, however, the potential of the dot can be varied to align a discrete quantum dot state with the chemical potentials of the leads which results in a conductance resonance. The gate voltage range over which the CB is lifted can be increased by applying a finite bias across the quantum dot. Changing gate and bias voltage simultaneously therefore leads to a diamond-shaped conductance pattern in the $`V_{ds}V_g`$-plane. The result is displayed in Fig. 1, where the differential conductance $`dI_{ds}/dV_{ds}`$ in the vicinity of a resonance is shown as a function of forward bias $`V_{ds}`$ and gate voltage $`V_g`$. For convenience, the gate voltage is rescaled to $`\mathrm{\Delta }E=e\alpha \mathrm{\Delta }V_g`$, which is the energetic distance from the ground state resonance at $`V_{ds}=0`$. Here, $`\alpha =C_g/C`$ is the ratio of gate capacitance $`C_g`$ to the total capacitance $`C`$ and is deduced from the slopes of the resonance lines in the $`V_{ds}V_g`$-plane. The transformation to $`\mathrm{\Delta }E`$ allows for a direct extraction of excitation energies from the conductance plot.
From the zero-bias distance between adjacent conductance peaks the total capacitance of the quantum dot is determined to be $`C=85`$aF. The quantum dot radius is thus estimated to be $`r=70`$nm, i.e. the quantum dot contains only about 20 electrons. As expected, for non-zero bias the ground state resonance (marked $`ϵ`$ in Fig. 1 for comparison with the excited state resonance $`ϵ^{\prime }`$) splits by $`eV_{ds}`$. For $`V_{ds}>0`$ an additional conductance resonance due to an excited state at $`ϵ^{\prime }`$ develops, which is $`\mathrm{\Delta }_+=(ϵ^{\prime }-ϵ)=390\mu `$eV above the ground state. Correspondingly, for $`V_{ds}<0`$ a resonance is detected at a distance $`\mathrm{\Delta }_{-}=280\mu `$eV from the ground state. These excitation energies are in good agreement with the mean level spacing $`\overline{\mathrm{\Delta }}\approx 465\mu `$eV estimated from the dot radius. Hence, two different excited states take part in transport for $`V_{ds}<0`$ and $`V_{ds}>0`$. Furthermore, as we can see from Fig. 1, the ground state resonance for $`\mathrm{\Delta }E>0`$ is almost suppressed for $`V_{ds}<0`$, whereas the excited state resonance for $`\mathrm{\Delta }E>0`$ is much stronger. The origin of these ‘$`\mathrm{\Delta }E>0`$’-resonances is the alignment of the dot’s ground state or the excited state with the chemical potential of the drain reservoir. The strength of these resonances is related to the overlap between the wavefunction of the corresponding quantum dot state and the wavefunctions in the reservoirs. Hence, the variation in conductance indicates that the coupling of the ground state to the drain reservoir is much smaller than that of the excited state. This phenomenon was also observed in Ref. . In our case, we find that the coupling of the excited state to the reservoirs is about $`5.3`$ times the coupling of the ground state.
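The quoted energy scales follow directly from the measured numbers; a quick check:

```python
hbar, e, me = 1.0546e-34, 1.602e-19, 9.109e-31      # SI constants
m_eff, r, C = 0.067 * me, 70e-9, 85e-18             # values from the text

level_spacing = 2.0 * hbar**2 / (m_eff * r**2)      # mean spacing 2*hbar^2/(m*r^2)
print(level_spacing / e * 1e6)                      # -> ~465 micro-eV, as quoted
print(e / C * 1e3)                                  # charging energy e^2/C in meV (~1.9)
```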
Two different techniques are applied to study the transport properties under microwave irradiation. For low forward bias $`V_{ds}\approx 0`$, the direct current through the dot is measured using a single microwave source. Alternatively, we employ two phase-locked microwave sources which are slightly offset in frequency. This second technique allows the detection of photon-induced transport also in the nonlinear regime $`|V_{ds}|>0`$. Furthermore, the relative phase of the photon-induced current with respect to the incoming microwave beat can be determined.
Results obtained with the first technique are shown in Fig. 2, where the current through the quantum dot for small bias, ranging from $`-19.3\mu `$V to $`+19.3\mu `$V, is displayed under microwave irradiation at frequency $`f=36.16`$GHz. To this end, millimeter waves with frequency $`18.08`$GHz are generated by a microwave synthesizer (HP 87311A), then frequency-doubled (MITEQ MX 2V260400) and filtered using a band pass filter (QUINSTAR QFA-3715-BA) with center frequency at $`32`$GHz. The microwave signal is coupled into the cryostat using coaxial lines and irradiated onto the sample using an antenna formed out of a conducting loop. The coupling proved to be best at the chosen frequency $`36.16`$ GHz. For small positive bias the original main peak from the ground state resonance (M) as well as a sideband (G) at a distance $`hf\approx 0.15`$meV are detected. This sideband in the current signal is due to PAT through the ground state. Quite differently, for negative bias additional features in the current signal are induced by the microwaves. These features can be attributed to photon-induced pumping (P) and resonant tunneling through an excited quantum dot state (E). The processes involved are schematically depicted in Fig. 3: At low positive bias only the ground state transition Fig. 3(a) occurs. As found in the preceding paragraphs, the first excited state for this bias direction is too far above the ground state to be accessible by a one- or two-photon process. The other possible photon-induced ground state transition ($`\mathrm{\Delta }E>0`$) shown in Fig. 3(b) is not detected in the low-bias current signal. However, it is resolved for larger bias values applying the two-source detection scheme (see below). For negative bias $`V_{ds}<0`$ the excited state at $`ϵ^{\prime }=ϵ+\mathrm{\Delta }_{-}`$ can participate in transport when the ground state is depopulated by a two-photon absorption process (Fig. 3(c)) ($`2hf\approx \mathrm{\Delta }_{-}`$), a process analogous to photo-ionization . Normally, this process has a much smaller probability than the one- and two-photon PAT processes. In our case, however, since the coupling of the excited state to the reservoirs is more than four times stronger than the coupling of the ground state, this process might turn out to be comparable to the pure two-photon PAT in amplitude. This will be discussed in further detail below. Furthermore, a pumping current flows opposite to the bias direction for $`\mathrm{\Delta }E>0`$, where the ground state is $`hf`$ above the chemical potential of the source reservoir (Fig. 3(d)). This only happens when the microwave absorption across the right tunnel barrier is larger than that of the left tunneling barrier. In this case, the ground state $`ϵ`$ is permanently populated with electrons from the source contact which then partly decay into the drain region. From the power dependence (see below), we confirm that this pumping current results from PAT.
Photon-induced features similar to our results have been reported before and explained theoretically using, e.g., nonequilibrium Green-function techniques . However, to ensure that the observed features are not adiabatic effects of the microwave irradiation (e.g. rectification effects), commonly both their frequency and power dependence are determined. The inset of Fig. 2 shows the power dependence of the photon-induced features for $`V_{ds}=-5.1\mu `$V. The output power of the microwave synthesizers is changed in steps of $`0.5`$ dBm from trace to trace. Over this wide power range the microwave-induced features do not change in position, showing that they are indeed induced by single photons. We find that the observed dependence of peak heights on microwave power roughly agrees with the Bessel function behavior: The tunneling current induced by absorbing/emitting $`n`$ photons is proportional to $`J_n^2(x)`$, where $`x=eV_{ac}/hf`$ and $`V_{ac}`$ is the microwave amplitude across the tunnel barriers. This behavior was theoretically derived in Ref. and was experimentally observed in Ref. . For even higher microwave powers the PAT-like features broaden considerably due to heating effects until they are finally completely washed out. Due to the limited bandwidth of our high frequency setup, we are not able to study the frequency dependence to identify the photon-induced peaks. Studying the power dependence only is not sufficient to reveal the origin of peak (E). By determining the complex photoconductance, however, we will show that the out-of-phase component indicates some aspects of the origin (see below).
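The photon energy and the Bessel-function sideband weights quoted above are easily evaluated; the following sketch is our own illustration of the $`J_n^2(x)`$ scaling and makes no claim about the actual experimental amplitudes:

```python
import numpy as np
from scipy.constants import h, e
from scipy.special import jv

f = 36.16e9
print(f"hf = {h * f / e * 1e3:.3f} meV")   # ~0.15 meV, the sideband spacing

# weight of the n-photon sideband, x = e*V_ac/(h*f)
x = np.linspace(0.0, 3.0, 301)
for n in (0, 1, 2):
    w = jv(n, x) ** 2
    print(f"n = {n}: maximal weight {w.max():.2f} at x = {x[w.argmax()]:.2f}")
```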
A more subtle spectroscopic tool applied in this work is the two-source setup displayed in Fig. 4: Two microwave synthesizers are phase-locked and tuned to slightly different frequencies $`f_1=18.08`$GHz and $`f_2=18.08`$GHz$`+\delta f`$ with $`\delta f=2.1`$kHz. The two signals are added, frequency-doubled and filtered with a band pass as described before. Due to the band pass only microwaves with frequencies $`2f_1`$, $`f_1+f_2`$ and $`2f_2`$ are irradiated upon the quantum dot. As these frequency components have a rigid phase relation, their superposition leads to a modulated microwave signal with modulation frequency $`\delta f`$ (see upper inset of Fig. 4). We have thus produced a flux of photons with energy $`2hf_1=0.15`$ meV whose intensity varies periodically in time with frequency $`2.1`$ kHz. Electronic transport induced by these photons can be detected with a lock-in amplifier at the frequency of the microwave beat. Thus, the detected signal is solely due to the irradiation and contains no dc contribution. It is therefore possible to observe PAT even in the non-linear regime, where the broadening of the ordinary conductance resonances normally masks the photon-induced features. Another advantage of this technique is the possibility of heterodyne detection which allows for determination of both amplitude and relative phase of the signal . This is not possible using a single microwave source and a simple modulation technique with a PIN-diode.
With the lock-in amplifier the in-phase and out-of-phase photoconductance signals $`\gamma _0,\gamma _{\pi /2}`$ with respect to the reference are measured. From these we obtain the total photoconductance amplitude $`|A|=\sqrt{\gamma _0^2+\gamma _{\pi /2}^2}`$ and the relative phase $`\mathrm{\Phi }`$, which equals $`\mathrm{arctan}(\gamma _{\pi /2}/\gamma _0)`$ for $`\gamma _0\ge 0`$ and $`\pi +\mathrm{arctan}(\gamma _{\pi /2}/\gamma _0)`$ for $`\gamma _0<0`$, respectively. In Fig. 5 the photoconductance amplitude at $`f=2f_1=36.16`$ GHz and $`\delta f=2.1`$kHz is displayed for the same parameter region as the dc measurement shown in Fig. 1. With respect to Fig. 1, for $`V_{ds}>0`$ the conductance window is enlarged by $`2hf`$. The resonances are each shifted by the photon energy $`hf`$, which can readily be explained by photon-assisted tunneling processes as in Fig. 3(a) and (b). This is also the case for the ‘$`\mathrm{\Delta }E>0`$’-conductance resonances for negative bias. However, the resonance for $`\mathrm{\Delta }E<0`$ and small negative bias is clearly shifted by $`\mathrm{\Delta }_{-}`$, thus enlarging the conductance window to $`eV_{ds}+hf+\mathrm{\Delta }_{-}`$. The process involved is taken to be the finite bias version of the transition depicted in Fig. 3(c): An electron leaves the quantum dot’s ground state for the source reservoir via absorption of two photons. Now, electrons can either refill the ground state or tunnel through the excited state as long as the ground state is depopulated. Transport through the excited state stops when an electron decays to the ground state, or an electron enters the quantum dot’s ground state from the leads. With $`\mathrm{\Delta }E<0`$ and larger negative bias, the photoconductance peak is apparently broadened. The broadening is partly due to other tunneling processes possible at large bias, e.g. one-photon PAT through the ground state. In fact, even at small negative bias there is a small tunneling current between the peaks (M) and (E), which is most probably due to one-photon PAT. In our case, the tunneling process for $`\mathrm{\Delta }E<0`$ and negative bias is more intricate than the ideal PAT.
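As a minimal sketch of this amplitude and phase extraction (our addition; the function name is ours), note that the stated branch rule is simply the two-argument arctangent, up to an irrelevant multiple of $`2\pi `$:

```python
import numpy as np

def amplitude_and_phase(gamma_0, gamma_pi2):
    """Combine the two lock-in quadratures into |A| and Phi."""
    amp = np.hypot(gamma_0, gamma_pi2)
    # equals arctan(g2/g1) for g1 >= 0 and pi + arctan(g2/g1) for g1 < 0,
    # modulo 2*pi
    phase = np.arctan2(gamma_pi2, gamma_0)
    return amp, phase

print(amplitude_and_phase(-1.0, 1.0))   # (1.414..., 3*pi/4)
```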
In Fig. 6, phase traces as well as their respective amplitude signals are displayed for small positive and negative bias ($`V_{ds}=+10\mu `$V and $`V_{ds}=-30\mu `$V, respectively), corresponding to the central region of Fig. 5. For $`V_{ds}>0`$ the phase signal remains approximately constant at $`\mathrm{\Phi }=0`$, which means that the out-of-phase photoconductance $`\gamma _{\pi /2}`$ is equal to zero. The response of the quantum dot to the microwaves is similar for both of the tunneling processes (G). In fact, the two peaks (G) in the amplitude signal stem from ground state resonances as depicted in Fig. 3(a) and (b). The situation is considerably different for $`V_{ds}<0`$, where a strong pumping signal (P) is observed which is caused by a process as in Fig. 3(d). At the position where the photocurrent changes its direction, the amplitude drops to zero and the phase changes trivially by $`\pi `$ (this corresponds to crossing zero in the $`\gamma _0`$–$`\gamma _{\pi /2}`$ plane). The second peak (E) stems from the photon-induced tunneling through the excited state as in Fig. 3(c). Moving away from this second resonance to more negative $`\mathrm{\Delta }E`$, the phase continuously returns to its original value.
This continuous phase change shows that this transport process results in a finite out-of-phase signal $`\gamma _{\pi /2}`$. In contrast to the other transport scenarios described above (only the ground state is involved), photon-induced tunneling through the excited state is not a purely conductive transport process but also has capacitive and inductive contributions. This behavior is due to the complicated charging dynamics of the quantum dot for this particular process. The processes involved are PAT from the ground state to the source reservoir, resonant tunneling through the excited state, recharging of the ground state by the drain reservoir, and relaxation from the excited state to the ground state. All these processes have different time constants which additionally depend on the gate voltage (i.e. $`\mathrm{\Delta }E`$). The interplay of these processes results in the observed phase lag. Thus one has a method at hand to determine the admittance of a mesoscopic system in the PAT regime, which is related to the average relaxation time of the system. In the current setup, for $`V_{ds}<0`$ the ground state broadening, due to the coupling to the drain reservoir, is about $`400`$ MHz, while the level broadening from the coupling to the source reservoir is around $`2`$ GHz. The broadening of the excited state coupling to the reservoirs is found to be of a similar width of about $`2`$ GHz. Hence, the bare tunneling time through the ground state, excluding other time constants, would be less than $`2.5`$ ns. However, the inverse modulation frequency $`1/\delta f\approx 500\mu s`$, which is the time separation between two microwave beat minima, is much larger than the tunneling time. In the few-electron limit, this indicates that it takes the electron a much longer time to relax within the dot than to tunnel through the barriers. An extension of the measurements to modulation frequencies on the order of $`10-100`$MHz, corresponding to a time scale of $`10-100`$ns, would therefore be desirable. With a shorter microwave beat period we will be able to probe both the fast tunneling event and the slow relaxation process. We conclude that with the frequency $`f`$ the photon energy $`hf`$ for the photon-induced process can be adjusted, whereas the modulation frequency $`\delta f`$ determines the time scale on which the electronic dynamics of the quantum dot is probed.
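A back-of-the-envelope comparison of the time scales involved (our own sketch, using the broadenings quoted above):

```python
gamma_drain  = 0.4e9    # ground state <-> drain coupling [Hz]
gamma_source = 2.0e9    # ground state <-> source coupling [Hz]
delta_f      = 2.1e3    # beat (modulation) frequency [Hz]

tau_tunnel = 1.0 / gamma_drain    # slowest bare tunneling time, ~2.5 ns
tau_beat   = 1.0 / delta_f        # separation of beat minima, ~0.5 ms
print(f"tunneling time ~ {tau_tunnel * 1e9:.1f} ns")
print(f"beat period    ~ {tau_beat * 1e6:.0f} us, "
      f"i.e. {tau_beat / tau_tunnel:.0e} times longer")
```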
In summary, we have presented complex photoconductance measurements in the non-linear transport regime of a few-electron quantum dot using phase-locked microwave sources. The electronic structure of the dot is first characterized by conventional conductance measurements without microwave radiation. Photon-assisted tunneling through the ground state as well as through excited states of the system is observed. The two-source method allows us to perform PAT measurements even in the non-linear transport regime. Most importantly, the relative phase of the photocurrent with respect to the incoming microwave beat signal can be obtained from the two-source measurement. This phase is related to the susceptance of the quantum dot at very high frequencies. Non-trivial values for this quantity can be attributed to the long charge relaxation times in the quantum dot. In future work this can be exploited for an accurate determination of the relaxation times of excited quantum dot states.
We would like to thank Q. F. Sun, A. W. Holleitner and S. Manus for helpful discussions. This work was funded in part by the Deutsche Forschungsgemeinschaft within project SFB 348 and the Defense Advanced Research Projects Agency (DARPA) Ultrafast Electronics Program. H. Q. gratefully acknowledges support by the Volkswagen Stiftung.
$``$: present address: Bell Laboratories, Lucent Technologies, 600 Mountain Ave, Murray Hill, NJ 07974, USA
$``$ new address: Universität Regensburg, Universitätsstr. 31, D-93040 Regensburg, Germany.
# Fidelity and information in the quantum teleportation of continuous variables
## I Introduction
Quantum teleportation is a process by which the quantum state of a system A can be transferred to a remote system B by exploiting the entanglement between system B and a reference system R. Ideally, no information is obtained about system A, even though the exact relationship between A and R is determined by measuring a set of joint properties of A and R. While the original state of A is lost in this measurement, it can be recovered by deducing the relationship between A and B from the original entanglement between B and R and the measured entanglement between A and R.
The original proposal of quantum teleportation assumed maximal entanglement between B and R. However, it is also possible to realize quantum teleportation with non-maximal entanglement. In particular, such a teleportation scheme has been applied to the quantum states of light field modes , where maximal entanglement is impossible since it would require infinite energy. A schematic setup of this scheme is shown in figure 1. This approach to quantum teleportation has inspired a number of investigations into the dependence of the teleportation process on the details of the physical setup . In this context, it is desirable to develop compact theoretical formulations describing the effects of this teleportation scheme on the transferred quantum state.
Originally, the teleportation process for continuous variables was formulated in terms of Wigner functions . Recently, a description in the discrete photon number basis has been provided as well . In the following, the latter approach will be reformulated using the concept of displaced photon number states, and a general transfer operator $`\widehat{T}(x_{-},y_+)`$ will be derived for the quantum teleportation of a state associated with a measurement result of $`x_{-}`$ and $`y_+`$. This transfer operator describes the modifications which the quantum state suffers in the teleportation, as well as the information obtained about the quantum state due to the finite entanglement. It is shown that this type of quantum teleportation resembles a non-destructive measurement of light field coherence with a measurement resolution given by the entanglement of B and R.
## II Measuring the entanglement of unrelated field modes
The initial step in quantum teleportation requires a measurement of the entanglement between input system A and reference system R. Ideally, this projective measurement does not provide any information about properties of A by itself.
In the case of continuous field variables , the measured variables are the difference $`\widehat{x}_{-}=\widehat{x}_A-\widehat{x}_R`$ and the sum $`\widehat{y}_+=\widehat{y}_A+\widehat{y}_R`$ of the orthogonal quadrature components. The eigenstates of these two commuting variables may be expressed in terms of the photon number states $`|n_A;n_R\rangle `$ as
$`|\beta (A,R)\rangle `$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{\pi }}}{\displaystyle \sum _{n=0}^{\mathrm{\infty }}}\widehat{D}_A(\beta )|n;n\rangle `$ (1)
with $`\widehat{x}_{-}|\beta (A,R)\rangle =\text{Re}(\beta )|\beta (A,R)\rangle `$ (2)
and $`\widehat{y}_+|\beta (A,R)\rangle =\text{Im}(\beta )|\beta (A,R)\rangle ,`$ (3)
where the operator $`\widehat{D}_A(\beta )`$ is the displacement operator acting on the input field A, such that
$`\widehat{D}_A(\beta )`$ $`=`$ $`\mathrm{exp}\left(2i\text{Im}(\beta )\widehat{x}_A-2i\text{Re}(\beta )\widehat{y}_A\right)`$ (4)
with $`\widehat{D}_A^{\dagger }(\beta )\widehat{x}_A\widehat{D}_A(\beta )=\widehat{x}_A+\text{Re}(\beta )`$ (5)
and $`\widehat{D}_A^{\dagger }(\beta )\widehat{y}_A\widehat{D}_A(\beta )=\widehat{y}_A+\text{Im}(\beta ).`$ (6)
Of course, the coherent shift could also be applied to field R instead of field A. However, in the representation given by equation (1), it is easy to identify the quantum state associated with a photon number of the reference field R.
If the quantum state $`|\psi _R\rangle `$ of the reference field R is known, the measurement result provides information on the quantum state $`|\psi _A\rangle `$ of field A through the probability distribution $`P(\beta )`$ given by
$`P(\beta )`$ $`=`$ $`{\displaystyle \frac{1}{\pi }}|{\displaystyle \sum _{n=0}^{\mathrm{\infty }}}\langle \psi _A|\widehat{D}_A(\beta )|n\rangle \langle \psi _R|n\rangle |^2`$ (7)
$`=`$ $`{\displaystyle \frac{1}{\pi }}|\langle \psi _A|\widehat{D}_A(\beta )|\psi _R^{*}\rangle |^2,`$ (8)
where $`|\psi _R^{*}\rangle ={\displaystyle \sum _{n=0}^{\mathrm{\infty }}}\langle n|\psi _R\rangle ^{*}|n\rangle .`$ (9)
Effectively, the measurement of entanglement projects the quantum state of field A onto a complete non-orthogonal measurement basis given by the displaced reference states $`|\psi _R^{*}\rangle `$. The completeness of this measurement basis is given by
$$\frac{1}{\pi }\int d^2\beta \widehat{D}(\beta )|\psi _R^{*}\rangle \langle \psi _R^{*}|\widehat{D}^{\dagger }(\beta )=\widehat{1}.$$
(10)
In the case of “classical” teleportation, the reference field R is in the quantum mechanical vacuum state $`|n=0\rangle `$. Therefore, the measurement of the field entanglement given by $`\beta `$ projects the incoming signal field A directly onto a displaced vacuum state.
## III Quantum teleportation
In the general case of quantum teleportation, the quantum state of the reference field R cannot be determined locally because of its entanglement with the remote field B. This indicates that the type of measurement performed is unknown until the remote system is measured as well. The meaning of the measurement result $`\beta `$ depends on the unknown properties of the remote field B. The entangled reference field R thus provides the means to choose between complementary measurement types even after the measurement interaction between input field A and reference field R has occurred.
Within the quantum state formalism, the initial state of the entangled fields R and B may be written as
$$|q(R,B)\rangle =\sqrt{1-q^2}\sum _{n=0}^{\mathrm{\infty }}q^n|n;n\rangle .$$
(11)
Thus, the photon numbers of the reference field R and the remote field B are always equal. However, low photon numbers are more likely than high photon numbers, so the two mode entanglement is limited by the information available about the photon number of each mode. In a measurement of the entanglement between field A and field R, this information about R is converted into measurement information about A, thus causing a decrease in fidelity as required by the uncertainty principle.
A measurement of the entanglement between field A and field R projects the product state $`|\psi _A\rangle |q(R,B)\rangle `$ into a quantum state of the remote field B given by
$$|\psi _B(\beta )\rangle =\sqrt{\frac{1-q^2}{\pi }}\sum _{n=0}^{\mathrm{\infty }}q^n|n\rangle \langle n|\widehat{D}_A(-\beta )|\psi _A\rangle ,$$
(12)
where the measurement probability $`P(\beta )`$ is given by $`\langle \psi _B(\beta )|\psi _B(\beta )\rangle `$. Thus the measurement determines the displacement $`\beta `$ between field A and field B, resulting in a quantum state $`|\psi _B(\beta )\rangle `$ that appears to be a copy of the input state $`|\psi _A\rangle `$, displaced by $`-\beta `$. However, the measurement information obtained because low photon numbers are more likely than high photon numbers in both R and B causes a statistical modification of the probability amplitudes of the photon number states in the remote field B.
The final step in quantum teleportation is the reconstruction of the initial state from the remote field by reversal of the displacement. The output state then reads
$`|\psi _{\text{out}}(\beta )\rangle `$ $`=`$ $`\widehat{T}(\beta )|\psi _A\rangle `$ (13)
with $`\widehat{T}(\beta )=\sqrt{{\displaystyle \frac{1-q^2}{\pi }}}{\displaystyle \sum _{n=0}^{\mathrm{\infty }}}q^n\widehat{D}_A(\beta )|n\rangle \langle n|\widehat{D}_A(-\beta ).`$ (14)
Note that this output state is not normalized because $`\langle \psi _{\text{out}}(\beta )|\psi _{\text{out}}(\beta )\rangle `$ defines the probability of the measurement result $`\beta `$. The complete process of quantum teleportation is thus summed up by the transfer operators $`\widehat{T}(\beta )`$.
## IV Transfer operator properties
The transfer operator $`\widehat{T}(\beta )`$ determines not only the properties of the quantum state after the teleportation process following a measurement result of $`\beta `$ for the field entanglement between A and R, but also the probability of obtaining the result $`\beta `$ itself. The probability distribution $`P(\beta )`$ is given by
$`P(\beta )`$ $`=`$ $`\langle \psi _A|\widehat{T}^2(\beta )|\psi _A\rangle `$ (15)
$`=`$ $`{\displaystyle \frac{1-q^2}{\pi }}{\displaystyle \sum _{n=0}^{\mathrm{\infty }}}q^{2n}|\langle n|\widehat{D}_A(-\beta )|\psi _A\rangle |^2.`$ (16)
Since the prefactor $`q^n`$ is larger for small n, a measurement result of $`\beta `$ is more likely if the photon number of the displaced state $`\widehat{D}_A(\beta )\psi _A`$ is low. It is possible to identify the displaced photon number with the square of the field difference between $`\beta `$ and the actual field value of A. Therefore, a measurement result of $`\beta `$ makes large deviations of the field A from this value of $`\beta `$ unlikely.
The transfer operator $`\widehat{T}(\beta )`$ also determines the relationship between the input state and the output state. Since the goal of quantum teleportation is to achieve identity between the input state and the output state, the overlap between the two states may be used as a measure of the fidelity of quantum teleportation. For a single teleportation event associated with a measurement result of $`\beta `$, this fidelity is given by
$$F(\beta )=\frac{1}{P(\beta )}|\langle \psi _A|\widehat{T}(\beta )|\psi _A\rangle |^2.$$
(17)
For instance, the fidelity of quantum teleportation for a photon number state displaced by $`\beta `$ is exactly 1. However, it is unlikely that $`\beta `$ will be exactly equal to the displacement of the photon number state to be teleported, so the average fidelity will be much lower. The average fidelity $`F_{\text{av.}}`$ is given by
$`F_{\text{av.}}`$ $`=`$ $`{\displaystyle \int d^2\beta P(\beta )F(\beta )}`$ (18)
$`=`$ $`{\displaystyle \int d^2\beta |\langle \psi _A|\widehat{T}(\beta )|\psi _A\rangle |^2}.`$ (19)
Since the transfer operator $`\widehat{T}(\beta )`$ is different for each teleportation event, the output field states show unpredictable fluctuations. These fluctuations may be expressed in terms of a density matrix,
$$\widehat{\rho }_{\text{out}}=\int d^2\beta \widehat{T}(\beta )|\psi _A\rangle \langle \psi _A|\widehat{T}(\beta ).$$
(20)
In terms of this mixed state density matrix, the average fidelity reads
$$F_{\text{av.}}=\langle \psi _A|\widehat{\rho }_{\text{out}}|\psi _A\rangle .$$
(21)
However, the measurement information $`\beta `$ is available as classical information, so the density matrix $`\widehat{\rho }_{\text{out}}`$ actually underestimates the information available about the output field. In particular, a verifier checking the fidelity of the transfer in B can know the exact output state based on the knowledge of the input state and the measurement result $`\beta `$.
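For illustration (our addition, not part of the original derivation), the transfer operator (14) can be evaluated numerically in a truncated Fock basis; the probability (15) and the single-shot fidelity (17) then follow directly. All function names below are ours, and the truncation dimension must be large compared to the photon numbers involved:

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import factorial

N = 40                                     # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator

def displacement(beta):
    return expm(beta * a.conj().T - np.conjugate(beta) * a)

def transfer(beta, q):
    """Truncated matrix of the transfer operator of Eq. (14)."""
    D = displacement(beta)
    return np.sqrt((1 - q**2) / np.pi) * D @ np.diag(q ** np.arange(N)) @ D.conj().T

def coherent(alpha):
    n = np.arange(N)
    return (np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt(factorial(n))).astype(complex)

q, alpha0 = 0.5, 1.0 + 0.5j
psi = coherent(alpha0)
T = transfer(alpha0 + 0.3, q)              # a sample measurement outcome beta
P = np.real(psi.conj() @ T @ T @ psi)      # Eq. (15)
F = abs(psi.conj() @ T @ psi)**2 / P       # Eq. (17)
print(f"P(beta) = {P:.4f}, F(beta) = {F:.4f}")
```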
## V Fidelity and information
The transfer operator $`\widehat{T}(\beta )`$ describes how the information $`\beta `$ obtained about the properties of the input state makes contributions from displaced photon number states less likely as the displaced photon number increases. The changes in the quantum state which are responsible for a fidelity below one correspond to this change in the statistical weight of the quantum state components. This situation is typical for quantum mechanical measurements. It is impossible to obtain information beyond the uncertainty limit without introducing a corresponding amount of noise into the system, because the probability amplitudes correspond to both statistical information and to physical fact . Simply by making one quantum state component more likely than another, the coherence between the two quantum state components is diminished and thus the noise in any variable depending on this coherence is increased.
In order to clarify the measurement information obtained, it is convenient to represent the transfer operator $`\widehat{T}(\beta )`$ in terms of coherent states. This representation is easy to obtain by using the formal analogy of $`\widehat{T}(\beta )`$ with a thermal photon number distribution. The result reads
$$\widehat{T}(\beta )=\sqrt{\frac{1-q^2}{\pi ^3q^2}}\int d^2\alpha \mathrm{exp}\left(-\left(\frac{1-q}{q}\right)|\alpha -\beta |^2\right)|\alpha \rangle \langle \alpha |.$$
(22)
In the limit of $`q\to 0`$, the transfer operator thus corresponds to a projection operator onto the coherent state $`|\beta \rangle `$. As $`q`$ increases, the operator corresponds to a mixture of weighted projections which prefer coherent states with field values close to $`\beta `$, distorting the field distribution of the input state.
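As a consistency check (our addition), the Gaussian form of (22) allows the average fidelity (19) to be evaluated in closed form for a coherent-state input $`|\psi _A\rangle =|\alpha _0\rangle `$:

$$\langle \alpha _0|\widehat{T}(\beta )|\alpha _0\rangle =\sqrt{\frac{1-q^2}{\pi }}e^{-(1-q)|\beta -\alpha _0|^2},\text{so that}F_{\text{av.}}=\frac{1-q^2}{\pi }\int d^2\beta e^{-2(1-q)|\beta -\alpha _0|^2}=\frac{1+q}{2}.$$

This interpolates between $`F_{\text{av.}}=1/2`$ at $`q=0`$, the “classical” boundary, and $`F_{\text{av.}}=1`$ in the limit $`q\to 1`$ of maximal entanglement.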
This result highlights the epistemological nature of the quantum state. If one were to attribute physical reality to the quantum state, the change of the quantum state given by $`\widehat{T}(\beta )`$ would be a real physical effect and the statistical nature of the measurement result $`\beta `$ appears to be a rather arbitrary limitation. Why is it then not possible to control the force which molds the remote wave function? If one assumes that only observable physical properties are real, however, then the quantum mechanical probabilities behave in close analogy to classical ones, permitting a much clearer understanding of both possibilities and limitations. The information gained about the input field $`A`$ due to the finite fluctuations in the reference field $`R`$ can then be considered as a valid measurement of the coherent field components. Since the resolution of the field information obtained is limited, the output state retains some of the quantum coherent properties of the original input such as squeezing or cat state coherence. By performing measurements of physical properties other than the coherent field on the output state, one then obtains a mix of information relating back to the original input field.
In the light of this interpretation, quantum teleportation represents a delayed choice measurement, where the final selection of measurement variables is performed when the remote field $`B`$ is measured, thus indirectly determining the physical properties of the reference field $`R`$ before the measurement of $`\beta `$. The “teleportation” effect is then a purely classical information transfer made possible by the statistical correlations of the relevant physical properties. The quantum nature of the procedure only emerges when the noise properties of the initial and final states are compared. In particular, this viewpoint may help to illustrate the nature of the dividing line between “quantum teleportation” and “classical teleportation” .
## VI Verification of quantum state statistics
The output of a single teleportation process involving a pure state input $`\psi _A`$ results in a well defined pure state output $`\widehat{T}(\beta )\psi _A`$. Since the statistical properties of this state are modified by the information obtained in the measurement of $`\beta `$, the output state is different from the input state, as given by the fidelity $`F(\beta )`$ defined by equation (17). However, this difference shows only in the statistics obtained by measuring the output of an ensemble of identical input states. In other words, there is no single measurement to tell us whether the output quantum state is actually identical to the input state. Non-orthogonal quantum states may always produce the same measurement results. The verification process following the quantum teleportation is therefore a non-trivial process requiring the comparison of measurement statistics which are generally noisy, even for a fidelity of one.
In the experimentally realized teleportation of continuous variables reported in , the verification is achieved by measuring one quadrature component of the light field using homodyne detection and comparing the result with the quadrature noise of the coherent state input. This type of verification can be generalized as a projective measurement on a set of states $`|V\rangle `$ satisfying the completeness condition for positive operator-valued measures,
$$\underset{V}{\sum }|V\rangle \langle V|=\widehat{1}.$$
(23)
The probability of obtaining a verification result $`V`$ is given by
$$P(V)=\int d^2\beta |\langle V|\widehat{T}(\beta )|\psi _A\rangle |^2.$$
(24)
This probability distribution is then compared with the input distribution of the verification variable $`V`$. However, the total process of teleportation and verification may be summarized in a single measurement of $`\beta `$ and $`V`$. If the information inherent in the measurement result $`\beta `$ is retained, the complete measurement performed on the input state is defined by the projective measurement basis $`|\beta ,V\rangle `$ given by
$`|\beta ,V\rangle =\widehat{T}(\beta )|V\rangle .`$ (25)
The probability distribution over measurement results $`\beta `$ and verification results $`V`$ then reads
$`P(\beta ,V)`$ $`=`$ $`|\langle \beta ,V|\psi _A\rangle |^2`$ (26)
$`=`$ $`|\langle V|\widehat{T}(\beta )|\psi _A\rangle |^2.`$ (27)
The quantum measurement effectively performed on the input state is thus composed of the measurement step of quantum teleportation and the verification step. The fidelity is determined by the difference in the statistics over $`V`$ between this two-step measurement and a direct measurement of $`V`$ only. However, the information lost in quantum teleportation is actually less than is suggested by the average fidelity. If the teleportation result $`\beta `$ is considered as well, the combination of teleportation and verification extracts the maximal amount of measurement information permitted in quantum mechanics and consequently allows a complete statistical characterization of the original input state.
A particularly striking example can be obtained for a verification scheme using eight port homodyne detection. In this case, the verification variable is the coherent field $`\alpha `$ and the verification states $`|V\rangle =1/\sqrt{\pi }|\alpha \rangle `$ are the associated coherent states. The effective measurement basis is then given by
$`|\beta ,\alpha \rangle `$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{\pi }}}\widehat{T}(\beta )|\alpha \rangle `$ (28)
$`=`$ $`{\displaystyle \frac{\sqrt{1-q^2}}{\pi }}\mathrm{exp}\left(-(1-q^2){\displaystyle \frac{|\alpha -\beta |^2}{2}}\right)|\gamma =\beta +q(\alpha -\beta )\rangle .`$ (29)
The measurement still projects on a well defined coherent state, but the coherent field $`\gamma `$ is a function of both the teleportation measurement $`\beta `$ and the verification measurement $`\alpha `$. It is therefore possible to reconstruct the correct measurement statistics of the input state by referring to the teleportation results $`\beta `$ as well as to the verification results $`\alpha `$.
## VII Conclusions
In conclusion, the quantum teleportation of an input state $`|\psi _A\rangle `$ can be described by a measurement-dependent transfer operator $`\widehat{T}(\beta )`$ which modifies the quantum state statistics according to the information obtained about the input state $`|\psi _A\rangle `$ in the measurement of $`\beta `$. The statistics of subsequent verification measurements may be derived by directly applying the transfer operator $`\widehat{T}(\beta )`$ to the states $`|V\rangle `$ describing the projective verification measurement. Quantum information is only lost because the effective measurement basis $`|\beta ,V\rangle `$ does not usually correspond to the eigenstate basis in which the information has been encoded.
While the type of physical information which can be obtained about the original input field is restricted because some information necessarily “leaks out” in quantum teleportation with limited entanglement, the total information obtained after the verification step still corresponds to the information obtained in an ideal projective measurement. The limitations imposed on quantum teleportation by a fidelity less than one are thus a direct consequence of the measurement information obtained about the transferred state in the measurement of $`\beta `$, and can be applied directly to the measurements performed after the transfer of the quantum state.
## Acknowledgements
One of us (HFH) would like to acknowledge support from the Japanese Society for the Promotion of Science, JSPS.
## 1 Duality Invariant Born-Infeld Lagrangians
In this note we prove the conjecture made in regarding the form of $`Sp(2n,\mathrm{IR})`$ or $`U(n,n)`$ duality invariant Born-Infeld Lagrangians. See for a more extensive list of references regarding the duality invariance of Born-Infeld theory. In , inspired by , we exploited the fact that the square root in the $`U(1)`$ gauge group Born-Infeld Lagrangian can be eliminated using auxiliary fields. In the auxiliary field formulation one can generalize the theory to a higher rank abelian gauge group $`U(1)^{2n}`$ such that the duality group becomes $`U(n,n)`$. One complication discussed in is that one has to introduce complex gauge fields. However in we also showed that after the elimination of the auxiliary fields one can impose a reality condition which preserves an $`Sp(2n,\mathrm{IR})`$ subgroup of the duality group. For higher order matrices the elimination of the auxiliary fields is more complicated since the algebraic second order equation for the auxiliary field becomes a matrix second order equation.
The Born-Infeld Lagrangian introduced in with auxiliary fields is given by
$$L=\mathrm{Re}\mathrm{Tr}[\chi +i\lambda (\chi -\frac{1}{2}\chi \chi ^{\dagger }+\alpha -i\beta )],$$
where $`\alpha `$ and $`\beta `$ are given by the following Lorentz invariant hermitian matrices
$$\alpha ^{ab}\equiv \frac{1}{2}F^a\overline{F}^b,\beta ^{ab}\equiv \frac{1}{2}\stackrel{~}{F}^a\overline{F}^b.$$
Here $`\stackrel{~}{F}`$ is the Hodge dual of $`F`$ and a bar denotes complex conjugation. The auxiliary fields $`\chi `$ and $`\lambda `$ are $`n`$ dimensional complex matrices. For simplicity we have set the field $`S`$ to the constant value $`i`$ since as discussed in it can be easily reintroduced. With this choice the duality group reduces to the maximal compact subgroup $`U(n)\times U(n)`$ of $`U(n,n)`$.
The equation of motion obtained by varying $`\lambda `$ gives an equation for $`\chi `$
$$\chi -\frac{1}{2}\chi \chi ^{\dagger }+\alpha -i\beta =0,$$
(1)
and after solving this equation the Lagrangian reduces to
$$L=\mathrm{Re}\mathrm{Tr}\chi .$$
Let $`\chi =\chi _1+i\chi _2`$ where $`\chi _1`$ and $`\chi _2`$ are hermitian. The anti-hermitian part of (1) implies $`\chi _2=\beta `$, thus $`\chi ^{\dagger }=\chi -2i\beta `$. This can be used to eliminate $`\chi `$ from (1) and obtain a quadratic equation for $`\chi ^{\dagger }`$. Following , it is convenient to define $`Q=\frac{1}{2}\chi ^{\dagger }`$ which then satisfies
$$Q=q+(p-q)Q+Q^2,$$
(2)
where
$$p\equiv -\frac{1}{2}(\alpha -i\beta ),q\equiv -\frac{1}{2}(\alpha +i\beta ).$$
The Lagrangian is then
$$L=2\mathrm{Re}\mathrm{Tr}Q.$$
(3)
If the degree of the matrices is one, we can solve for $`Q`$ in the quadratic equation (2) and then (3) reduces to the Born-Infeld Lagrangian.
For matrices of higher degree, equation (2) can be solved perturbatively and by analyzing the first few terms in the expansion we conjectured in that the trace of $`Q`$ can be obtained as follows. First, find the perturbative solution of equation (2) assuming $`p`$ and $`q`$ commute. Then the trace of $`Q`$ is the trace of the symmetrized expansion
$$\mathrm{Tr}Q=\frac{1}{2}\mathrm{Tr}\left[\mathrm{\hspace{0.17em}1}+q-p-𝒮\sqrt{1-2(p+q)+(p-q)^2}\right],$$
(4)
where the symmetrization operator $`𝒮`$ will be discussed in the next section. In the appendix of we have also guessed an explicit formula for the coefficients of the expansion of the trace of $`Q`$
$$\mathrm{Tr}Q=\mathrm{Tr}\left[q+\underset{r,s\ge 1}{\sum }\frac{1}{r+s}\left(\begin{array}{c}r+s-2\\ r-1\end{array}\right)\left(\begin{array}{c}r+s\\ r\end{array}\right)𝒮(p^rq^s)\right].$$
(5)
In the next section we will prove that for a unilateral matrix equation of order $`N`$, the perturbative solution is a sum of terms which are symmetrized in all the matrix coefficients and of terms which are commutators. Since equation (2) is a unilateral matrix equation, the trace of $`Q`$ will be symmetrized in the matrix coefficients $`q`$ and $`p-q`$. Since this is equivalent to symmetrization in $`q`$ and $`p`$, our conjecture (4) follows.
## 2 Unilateral Matrix Equations
In this section we prove a theorem regarding certain solutions of unilateral matrix equations. These are $`N^{\mathrm{th}}`$ order matrix equations for the variable $`\varphi `$ with matrix coefficients $`A_i`$ which are all on one side, e.g. on the left
$$\varphi =A_0+A_1\varphi +A_2\varphi ^2+\cdots +A_N\varphi ^N.$$
(6)
The matrices are all square and of arbitrary degree. We may equally consider the $`A_i`$’s as generators of an associative algebra, and $`\varphi `$ an element of this algebra which satisfies the above equation. We will prove that the formal perturbative solution of (6) around zero is a sum of symmetrized polynomials in the $`A_i`$ and of terms which are commutators<sup>a</sup><sup>a</sup>aIf the degree of the matrices is one the perturbative solution is convergent if $`A_0`$ and $`A_1`$ are sufficiently small.. The same is true for all the positive powers of the solution.
By repeatedly inserting $`\varphi `$ from the left hand side of (6) into the right hand side we obtain the perturbative expansion of $`\varphi `$ as a sum
$$\varphi =\underset{M}{\sum }D_M,$$
where each $`D_M`$ is a product of the $`A_i`$ matrices. Any ordered product of these matrices will be referred to as a word. However not every word appears in the perturbative expansion of $`\varphi `$. We reserve the letter $`D`$ for words that do appear<sup>b</sup><sup>b</sup>bThis notation originated from an earlier version of the proof where the perturbative expansion of $`\varphi `$ was calculated diagrammatically and the diagrams were denoted by $`D`$. Although we will not use diagrams here, note that they are very useful in calculating the perturbative expansion of the solution..
Next we obtain the condition that a word must satisfy in order to be in the expansion. First note that because of (6) any word $`D_M`$ can be written as the following product
$$D_M=A_sD_{M_1}\cdots D_{M_s}$$
(7)
for some value of $`s`$, where the $`D_{M_i}`$’s are also words in the expansion. Conversely, if all the $`D_{M_i}`$’s are words in the expansion, $`D_M`$ defined in equation (7) is also a word in the expansion. By iterating (7) we obtain the following equivalent statement: for every splitting of $`D_M`$ into two words $`D_M=W_1W_2`$ the second word can be written as a product of terms in the expansion of $`\varphi `$
$$D_M=W_1D_{N_1}\cdots D_{N_k}.$$
It is convenient to assign to every matrix a dimension $`d`$ such that $`d(\varphi )=-1`$. Using (6), the dimension of the matrix $`A_i`$ is given by $`d(A_i)=i-1`$ and $`d(D_M)=-1`$. Then we obtain the following intrinsic characterization of a word in the expansion of $`\varphi `$. It is a word $`D`$ such that for every splitting into two words $`D=W_1W_2`$, where $`W_2`$ has at least one letter, we have
$$d(W_1)\ge 0\quad \mathrm{and}\quad d(D)=-1.$$
(8)
Note that (8) is a necessary and sufficient condition for a word to be in the expansion of $`\varphi `$ .
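Condition (8) is straightforward to check symbolically; the following sketch is our own addition (for $`N=2`$), generating the expansion by iteration and verifying (8) for every word:

```python
import sympy as sp

A0, A1, A2 = sp.symbols('A0 A1 A2', commutative=False)
dim = {A0: -1, A1: 0, A2: 1}             # d(A_i) = i - 1

phi = sp.S.Zero
for _ in range(4):                       # all words of length <= 4 are exact
    phi = sp.expand(A0 + A1 * phi + A2 * phi**2)

def letters(word):
    out = []
    for f in word.as_ordered_factors():
        if f.is_Number:
            continue
        base, exp = f.as_base_exp()
        out += [base] * int(exp)
    return out

# every proper prefix must have d >= 0 and the full word must have d = -1
for word in phi.as_ordered_terms():
    ls, partial = letters(word), 0
    ok = all((partial := partial + dim[l]) >= 0 for l in ls[:-1])
    assert ok and partial + dim[ls[-1]] == -1, word
print(f"{len(phi.as_ordered_terms())} words checked against condition (8)")
```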
Suppose that $`W`$ is an arbitrary word such that $`d(W)=-1`$. Then, as we will show, there is a unique cyclic permutation $`D`$ of $`W`$ such that $`D`$ is a term in the expansion of $`\varphi `$. Let us write $`W=D_{N_1}D_{N_2}\cdots D_{N_k}W_1`$, where $`D_{N_1}`$ is the shortest word starting from the first letter such that $`d(D_{N_1})=-1`$. $`D_{N_i}`$ is defined in the same way, except we start from the first letter after the word $`D_{N_{i-1}}`$. Finally $`W_1`$ is whatever is left over. We use the notation $`D_{N_i}`$ since they correspond to terms in the $`\varphi `$ expansion. To see this, note that the total dimension of a word can increase or decrease when a letter is added on the right, but if it decreases it can only do so by one unit. This is when the letter added is $`A_0`$. Combining this with the fact that $`D_{N_i}`$ is the shortest word which satisfies $`d(D_{N_i})=-1`$ then implies that if $`D_{N_i}`$ is a product of two words the dimension of the first word is greater than or equal to zero. This is just the condition (8). Then using the fact that $`d(W_1)=k-1`$ one can check that the cyclic permutation of $`W`$ defined as $`D=W_1D_{N_1}\cdots D_{N_k}`$ satisfies (8), thus it belongs to the expansion of $`\varphi `$. Note that all the other cyclic permutations lead to words that are not in the expansion. Assuming the converse implies that two distinct terms in the expansion can be related by a cyclic permutation. But this is impossible: if we write $`D=W_1W_2`$, then $`d(W_1)\ge 0`$ and thus $`d(W_2)\le -1`$, so that its cyclic permutation $`W_2W_1`$ does not satisfy (8). A similar argument can be used to show that all different cyclic permutations of a term in the expansion of $`\varphi `$ lead to distinct words.
Consider the trace of the sum of all distinct words of dimension $`d=-1`$ and of order $`a_i`$ in $`A_i`$. We can group together all words that are cyclic permutations of each other, and replace each group by a single word with coefficient $`\sum _{i=0}^Na_i`$. Using the result of the previous paragraph, we can choose this word to satisfy (8). Thus we have
$$\mathrm{Tr}\left(\underset{\mathrm{order}\{a_i\}}{\sum }D_M\right)=\left(\underset{i=0}{\overset{N}{\sum }}a_i\right)^{-1}\mathrm{Tr}\left(\underset{\mathrm{order}\{a_i\}}{\sum }W\right),$$
(9)
where the sum in the right hand side is over all distinct words of some fixed order $`\{a_i\}`$ and of dimension $`d(W)=-1`$.
We define the symmetrization operator $`𝒮`$ as a linear operator acting on monomials as
$$𝒮(A_0^{a_0}A_1^{a_1}\cdots A_N^{a_N})=\frac{a_0!a_1!\cdots a_N!}{\left(\sum _{i=0}^Na_i\right)!}\left(\underset{\mathrm{order}\{a_i\}}{\sum }W\right),$$
(10)
where the sum is over distinct words of fixed order $`\{a_i\}`$. Equivalently, a word can be symmetrized by averaging over all permutations of its letters. Not all permutations give distinct words and this accounts for the numerator on the right side of equation (10). The normalization of $`𝒮`$ is such that on commutative $`A_i`$’s $`𝒮`$ acts as the identity.
Combining (9) and (10), we can obtain the solution for the trace of $`\varphi `$ to all orders
$$\mathrm{Tr}\varphi =\underset{\stackrel{\left\{a_i\right\}}{\sum (i-1)a_i=-1}}{\sum }\frac{\left(\sum _{i=0}^Na_i-1\right)!}{a_0!a_1!\cdots a_N!}\mathrm{Tr}𝒮(A_0^{a_0}A_1^{a_1}\cdots A_N^{a_N}),$$
(11)
where the sum is over all sets $`\{a_i\}`$ restricted to words of dimension $`d=-1`$ . More generally, if the $`A_i`$’s are considered to be the generators of an associative algebra, we can replace the trace in (11) with the cyclic average operator which was defined in . This is true since in the proof we only used the cyclic property of the trace which also holds for the cyclic average operator. Therefore, the solution $`\varphi `$ can be written as a sum of symmetric polynomials and terms which are commutators. This is the statement we set out to prove. Notice that our derivation implies that the coefficients in (11) are all integers.
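As a cross-check (our addition), the integer coefficients in (11) can be compared with a brute-force expansion in the commutative case, where $`𝒮`$ acts as the identity; here for $`N=2`$:

```python
import sympy as sp
from math import factorial

A0, A1, A2 = sp.symbols('A0 A1 A2')       # commutative scalar case

def truncate(expr, deg):
    p = sp.Poly(expr, A0, A1, A2)
    return sum(c * A0**i * A1**j * A2**k
               for (i, j, k), c in p.terms() if i + j + k <= deg)

phi = sp.S.Zero
for _ in range(6):                         # exact for words of length <= 5
    phi = truncate(sp.expand(A0 + A1 * phi + A2 * phi**2), 5)

for a0 in range(1, 4):
    for a1 in range(0, 3):
        a2 = a0 - 1                        # dimension constraint for d = -1
        if a0 + a1 + a2 > 5:
            continue
        pred = factorial(a0 + a1 + a2 - 1) // (
            factorial(a0) * factorial(a1) * factorial(a2))
        got = phi.coeff(A0, a0).coeff(A1, a1).coeff(A2, a2)
        assert got == pred, (a0, a1, a2, got, pred)
print("low-order coefficients of (11) confirmed for N = 2")
```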
Using the same kind of arguments we used to derive equation (11), we can also prove that the trace of positive powers of $`\varphi `$ is given by
$$\mathrm{Tr}\varphi ^r=r\underset{\stackrel{\left\{a_i\right\}}{\sum (i-1)a_i=-r}}{\sum }\frac{\left(\sum _{i=0}^Na_i-1\right)!}{a_0!a_1!\cdots a_N!}\mathrm{Tr}𝒮(A_0^{a_0}A_1^{a_1}\cdots A_N^{a_N}).$$
(12)
Furthermore we can write a generating function for (12)
$$\mathrm{Tr}\mathrm{log}(1-\varphi )=\mathrm{Tr}\mathrm{log}(1-\underset{i=0}{\overset{N}{\sum }}A_i)|_{d<0}.$$
(13)
On the right hand side of (13) one must expand the logarithm and restrict the sum to words of negative dimension. Since $`d(\varphi ^r)=-r`$ we can obtain (12) by extracting the dimension $`d=-r`$ terms from the right hand side of (13). Note that all the terms in the expansion of $`\mathrm{Tr}\mathrm{log}(1-\sum _{i=0}^NA_i)`$ are automatically symmetrized.
It is possible to give a simple proof of (13) without going through the combinatoric arguments above, which however give a construction of the solution and its powers themselves, not only their trace. First note that we can rewrite equation (6) as
$$1-\underset{i=0}{\overset{N}{\sum }}A_i=1-\varphi -\underset{k=1}{\overset{N}{\sum }}A_k(1-\varphi ^k).$$
The right hand side factorizes
$$1-\underset{i=0}{\overset{N}{\sum }}A_i=(1-\underset{k=1}{\overset{N}{\sum }}\underset{m=0}{\overset{k-1}{\sum }}A_k\varphi ^m)(1-\varphi ).$$
Under the trace we can use the fundamental property of the logarithm, even for noncommutative objects, and obtain
$$\mathrm{Tr}\mathrm{log}(1-\underset{i=0}{\overset{N}{\sum }}A_i)=\mathrm{Tr}\mathrm{log}(1-\underset{k=1}{\overset{N}{\sum }}\underset{m=0}{\overset{k-1}{\sum }}A_k\varphi ^m)+\mathrm{Tr}\mathrm{log}(1-\varphi ).$$
Using $`d(A_k)=k-1`$ and $`d(\varphi )=-1`$ we have $`d(A_k\varphi ^m)=k-m-1`$ and we see that all the words in the argument of the first logarithm on the right hand side have semi-positive dimension. Since all the words in the expansion of the second term have negative dimension we obtain (13).
If the coefficient $`A_N`$ is unity, we have the following identity for the symmetrization operator
$$𝒮(A_0^{a_0}A_1^{a_1}\cdots A_N^{a_N})|_{A_N=1}=𝒮(A_0^{a_0}A_1^{a_1}\cdots A_{N-1}^{a_{N-1}}).$$
This is obviously true up to normalization; the normalization can be checked in the commutative case.
The trace of the solution of (2) can now be obtained from (11) by taking $`N=2`$ and setting $`A_2`$ to unity. The restriction on the sum of (11) in this case reads $`a_0-a_2=1`$. The sum can then be rewritten
$$\mathrm{Tr}\varphi =\underset{a_0=1}{\overset{\mathrm{\infty }}{\sum }}\underset{a_1=0}{\overset{\mathrm{\infty }}{\sum }}\frac{\left(2a_0+a_1-2\right)!}{a_0!a_1!(a_0-1)!}\mathrm{Tr}𝒮(A_0^{a_0}A_1^{a_1}).$$
(14)
Using $`\varphi =Q`$, $`A_0=q`$, $`A_1=p-q`$, the combinatoric identity
$$\left(\begin{array}{c}a+b\\ c\end{array}\right)=\underset{m=\mathrm{max}(0,c-b)}{\overset{\mathrm{min}(a,c)}{\sum }}\left(\begin{array}{c}a\\ m\end{array}\right)\left(\begin{array}{c}b\\ c-m\end{array}\right)$$
and the resummation identities
$`{\displaystyle \underset{r\ge 1}{\sum }}{\displaystyle \underset{a=0}{\overset{r}{\sum }}}`$ $`=`$ $`{\displaystyle \underset{a=0}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \underset{r=\mathrm{max}(a,1)}{\overset{\mathrm{\infty }}{\sum }}},`$
$`{\displaystyle \underset{r=\mathrm{max}(a,1)}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \underset{b=r-a+1}{\overset{\mathrm{\infty }}{\sum }}}`$ $`=`$ $`{\displaystyle \underset{b=\mathrm{max}(1,2-a)}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \underset{r=\mathrm{max}(a,1)}{\overset{a+b-1}{\sum }}}`$
one can show that (14) reduces to (5).
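The reduction can also be verified numerically; the following check is our own addition, expanding $`A_1=p-q`$ in (14) and confirming the coefficients of (5), including the $`1/(r+s)`$ normalization:

```python
from math import comb, factorial

def coeff_from_14(r, s):
    """Coefficient of p^r q^s after expanding A1 = p - q in (14)."""
    total = 0
    for a1 in range(r, r + s):             # a0 = s - a1 + r >= 1
        a0 = s - a1 + r
        c = factorial(2 * a0 + a1 - 2) // (
            factorial(a0) * factorial(a1) * factorial(a0 - 1))
        total += c * comb(a1, r) * (-1) ** (a1 - r)
    return total

def coeff_from_5(r, s):
    return comb(r + s - 2, r - 1) * comb(r + s, r) // (r + s)

assert all(coeff_from_14(r, s) == coeff_from_5(r, s)
           for r in range(1, 7) for s in range(1, 7))
print("coefficients of (14) and (5) agree for 1 <= r, s <= 6")
```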
## 3 Discussion
After completing the first version of this paper , where we only proved the symmetrization theorem for the trace of $`\varphi `$, we learned through private communications that A. Schwarz was developing another method of proving the theorem (for a slightly different, but related equation). Using his method he was able to show that the theorem is true for arbitrary powers of the solution. Inspired by this, we also extended the theorem, using our method, to positive powers of $`\varphi `$, see (12). In the process we discovered the simpler proof using the generating function (13).
## Acknowledgments
We would like to thank A. Schwarz for many helpful discussions. This work was supported in part by the Director, Office of Science, Office of High Energy and Nuclear Physics, of the U.S. Department of Energy under Contract
DE-AC03-76SF00098, and in part by the NSF under grant PHY-95-14797.
P.A. is supported by an INFN grant (concorso No. 6077/96).
## 1 Introduction
In order to probe the angle $`\gamma `$ of the unitarity triangle of the Cabibbo–Kobayashi–Maskawa (CKM) matrix at the $`B`$-factories, $`B\to \pi K`$ decays play an outstanding role. Remarkably, already the CP-averaged branching ratios of such channels may imply very non-trivial constraints on $`\gamma `$. So far, the studies of these bounds have focussed on the following two systems: $`B_d\to \pi ^{\mp }K^\pm `$, $`B^\pm \to \pi ^\pm K`$ , and $`B^\pm \to \pi ^0K^\pm `$, $`B^\pm \to \pi ^\pm K`$ ; they have received a lot of attention in the literature. In a recent paper , we pointed out that also the neutral decays $`B_d\to \pi ^{\mp }K^\pm `$ and $`B_d\to \pi ^0K`$ may be interesting in this respect, and presented a general formalism allowing us to describe all three $`B\to \pi K`$ systems within the same theoretical framework. Since the CLEO collaboration reported the observation of the $`B_d\to \pi ^0K`$ channel in the summer of 1999, thereby completing the search for all four $`B\to \pi K`$ modes, we have reanalysed our approach in view of these new data. It turns out that the new CLEO results favour interesting bounds on $`\gamma `$ from the neutral $`B\to \pi K`$ decays. Here the key quantities are the following ratios of CP-averaged branching ratios :
$`R`$ $`\equiv `$ $`{\displaystyle \frac{\text{BR}(B_d^0\to \pi ^{-}K^+)+\text{BR}(\overline{B_d^0}\to \pi ^+K^{-})}{\text{BR}(B^+\to \pi ^+K^0)+\text{BR}(B^{-}\to \pi ^{-}\overline{K^0})}}=0.95\pm 0.28`$ (1)
$`R_\mathrm{c}`$ $`\equiv `$ $`2\left[{\displaystyle \frac{\text{BR}(B^+\to \pi ^0K^+)+\text{BR}(B^{-}\to \pi ^0K^{-})}{\text{BR}(B^+\to \pi ^+K^0)+\text{BR}(B^{-}\to \pi ^{-}\overline{K^0})}}\right]=1.27\pm 0.47`$ (2)
$`R_\mathrm{n}`$ $`\equiv `$ $`{\displaystyle \frac{1}{2}}\left[{\displaystyle \frac{\text{BR}(B_d^0\to \pi ^{-}K^+)+\text{BR}(\overline{B_d^0}\to \pi ^+K^{-})}{\text{BR}(B_d^0\to \pi ^0K^0)+\text{BR}(\overline{B_d^0}\to \pi ^0\overline{K^0})}}\right]=0.59\pm 0.27,`$ (3)
where the factors of 2 and 1/2 have been introduced to absorb the $`\sqrt{2}`$ factors originating from the wavefunctions of the neutral pions; the errors of the experimental results given in Ref. have been added in quadrature. If these ratios are found to be smaller than one, they can be converted directly into constraints on $`\gamma `$ without any additional information. When the $`B_d\to \pi ^{\mp }K^\pm `$, $`B^\pm \to \pi ^\pm K`$ channels were observed in 1997 by the CLEO collaboration, the first results gave $`R=0.65\pm 0.40`$, and the bound on $`\gamma `$ presented in Ref. led to great excitement in the $`B`$-physics community. In the case of $`R_\mathrm{n}`$, we now face a similarly exciting possibility, which we will discuss in more detail in this paper. However, in comparison with the original bound derived in , the neutral strategy has certain theoretical advantages, connected mainly with the impact of rescattering processes and electroweak penguin topologies.
If one of the ratios $`R_{(\mathrm{c},\mathrm{n})}`$ specified in (1)–(3) is found to be larger than one, additional experimental information is required to constrain $`\gamma `$. To this end, we then have to fix – sloppily speaking – certain ratios of “tree” to “penguin” amplitudes. Such an input also allows us to obtain stronger constraints on $`\gamma `$ in the case of $`R_{(\mathrm{c},\mathrm{n})}<1`$. The least fortunate case for the bounds on $`\gamma `$ would be $`R_{(\mathrm{c},\mathrm{n})}`$ close to 1. If CP-violating asymmetries in the channels appearing in the numerators in (1)–(3) can be measured, it is possible to go beyond the bounds on $`\gamma `$ and to determine this angle, also in the case of $`R_{(\mathrm{c},\mathrm{n})}=1`$. A first analysis of such CP asymmetries has recently been performed by the CLEO collaboration , where all results are unfortunately still consistent with zero. It is also possible to obtain theoretical upper bounds on such CP asymmetries. For instance, the ratio of the measured CP-averaged $`B_d\to \pi ^+\pi ^{-}`$ and $`B_d\to \pi ^{\mp }K^\pm `$ branching ratios implies $`|𝒜_{\mathrm{CP}}^{\mathrm{dir}}(B_d\to \pi ^{\mp }K^\pm )|<0.3`$ .
It is an interesting feature of the bounds on $`\gamma `$ that they prefer values in the second quadrant, which would be in conflict with the standard analysis of the unitarity triangle . Other arguments for $`\mathrm{cos}\gamma <0`$ using $`B\to PP`$, $`PV`$ and $`VV`$ decays were recently given in (see also ). We would like to point out that, in addition to the bounds on $`\gamma `$, one may also derive constraints on the CP-conserving strong phases $`\delta _\mathrm{n}`$ and $`\delta _\mathrm{c}`$ from the neutral and charged $`B\to \pi K`$ decays, respectively. Whereas the present CLEO data favour a positive value of $`\mathrm{cos}\delta _\mathrm{c}`$, as is expected in the factorization approximation, they point towards a negative value of $`\mathrm{cos}\delta _\mathrm{n}`$. However, on the basis of simple dynamical considerations, one would expect that $`\delta _\mathrm{n}`$ and $`\delta _\mathrm{c}`$ do not differ dramatically from each other. Of course, the present data do not allow us to draw any definite conclusions. However, if future data should confirm this interesting “puzzle”, it may be an indication of new-physics contributions to the electroweak penguin sector, or a manifestation of large non-factorizable $`SU(3)`$-breaking effects.
The outline of this paper is as follows: in Section 2, we repeat briefly the general formalism developed in . The bounds on $`\gamma `$ are discussed in view of the recent CLEO data in Section 3, where we also have a brief look at constraints in the $`\overline{\varrho }`$–$`\overline{\eta }`$ plane of the Wolfenstein parameters , generalized as in Ref. . In Section 4, we turn to the constraints on the strong phases $`\delta _\mathrm{n}`$ and $`\delta _\mathrm{c}`$. Finally, a few concluding remarks are given in Section 5.
## 2 General Formalism
The starting point of our description of the neutral $`B\to \pi K`$ system is the following isospin relation:
$$\sqrt{2}A(B_d^0\to \pi ^0K^0)+A(B_d^0\to \pi ^{-}K^+)=-\left[(T+C)+P_{\mathrm{ew}}\right]\equiv 3A_{3/2},$$
(4)
where the combination $`(T+C)`$ originates from colour-allowed and colour-suppressed $`\overline{b}\to \overline{u}u\overline{s}`$ tree-diagram-like topologies, $`P_{\mathrm{ew}}`$ is due to electroweak penguin contributions, and $`A_{3/2}`$ reminds us that there is only an $`I=3/2`$ isospin component present in (4). Within the Standard Model, these amplitudes can be parametrized as follows:
$$T+C=|T+C|e^{i\delta _{T+C}}e^{i\gamma },P_{\mathrm{ew}}=|P_{\mathrm{ew}}|e^{i\delta _{\mathrm{ew}}},$$
(5)
where $`\delta _{T+C}`$ and $`\delta _{\mathrm{ew}}`$ denote CP-conserving strong phases. For the following considerations, we have to parametrize the $`B_d^0\to \pi ^0K^0`$ decay amplitude in an appropriate way. If we make use of the unitarity of the CKM matrix and employ the Wolfenstein parametrization , generalized to include non-leading terms in $`\lambda \equiv |V_{us}|=0.22`$ , we obtain
$$\sqrt{2}A(B_d^0\to \pi ^0K^0)\equiv P_\mathrm{n}=-\left(1-\frac{\lambda ^2}{2}\right)\lambda ^2A\left[1+\rho _\mathrm{n}e^{i\theta _\mathrm{n}}e^{i\gamma }\right]𝒫_{tc}^\mathrm{n},$$
(6)
where $`\rho _\mathrm{n}e^{i\theta _\mathrm{n}}`$ takes the form
$$\rho _\mathrm{n}e^{i\theta _\mathrm{n}}=\frac{\lambda ^2R_b}{1-\lambda ^2}\left[1-\left(\frac{𝒫_{uc}^\mathrm{n}-𝒞}{𝒫_{tc}^\mathrm{n}}\right)\right].$$
(7)
Here $`𝒫_{tc}^\mathrm{n}\equiv |𝒫_{tc}^\mathrm{n}|e^{i\delta _{tc}^\mathrm{n}}`$ and $`𝒫_{uc}^\mathrm{n}`$ correspond to differences of penguin topologies with internal top and charm and up and charm quarks, respectively. The amplitude $`𝒞`$ is due to insertions of current–current operators into colour-suppressed tree-diagram-like topologies, and
$$A\equiv \frac{1}{\lambda ^2}|V_{cb}|=0.81\pm 0.06,R_b\equiv \frac{1}{\lambda }\left(1-\frac{\lambda ^2}{2}\right)\left|\frac{V_{ub}}{V_{cb}}\right|=\sqrt{\overline{\varrho }^2+\overline{\eta }^2}=0.41\pm 0.07$$
(8)
are the usual CKM factors. In order to parametrize the observable $`R_\mathrm{n}`$ defined in (3), it is useful to introduce the following quantities:
$$r_\mathrm{n}\equiv \frac{|T+C|}{\sqrt{\langle |P_\mathrm{n}|^2\rangle }},\delta _\mathrm{n}\equiv \delta _{T+C}-\delta _{tc}^\mathrm{n},$$
(9)
where
$$\langle |P_\mathrm{n}|^2\rangle \equiv \frac{1}{2}\left(|P_\mathrm{n}|^2+|\overline{P_\mathrm{n}}|^2\right)$$
(10)
is the CP-average of the $`B_d^0\to \pi ^0K^0`$ decay amplitude specified in (6). Then we obtain :
$$R_\mathrm{n}=1-\frac{2r_\mathrm{n}}{u_\mathrm{n}}\left(h_\mathrm{n}\mathrm{cos}\delta _\mathrm{n}+k_\mathrm{n}\mathrm{sin}\delta _\mathrm{n}\right)+v^2r_\mathrm{n}^2,$$
(11)
where
$`h_\mathrm{n}`$ $`=`$ $`\mathrm{cos}\gamma +\rho _\mathrm{n}\mathrm{cos}\theta _\mathrm{n}-q\left[\mathrm{cos}\omega +\rho _\mathrm{n}\mathrm{cos}(\theta _\mathrm{n}-\omega )\mathrm{cos}\gamma \right]`$ (12)
$`k_\mathrm{n}`$ $`=`$ $`\rho _\mathrm{n}\mathrm{sin}\theta _\mathrm{n}+q\left[\mathrm{sin}\omega -\rho _\mathrm{n}\mathrm{sin}(\theta _\mathrm{n}-\omega )\mathrm{cos}\gamma \right],`$ (13)
and
$`u_\mathrm{n}`$ $`=`$ $`\sqrt{1+2\rho _\mathrm{n}\mathrm{cos}\theta _\mathrm{n}\mathrm{cos}\gamma +\rho _\mathrm{n}^2}`$ (14)
$`v`$ $`=`$ $`\sqrt{1-2q\mathrm{cos}\omega \mathrm{cos}\gamma +q^2}.`$ (15)
Moreover, we have introduced the electroweak penguin parameter
$$qe^{i\omega }\equiv \left|\frac{P_{\mathrm{ew}}}{T+C}\right|e^{i(\delta _{\mathrm{ew}}-\delta _{T+C})},$$
(16)
which can be fixed theoretically (see also ). This interesting observation was made by Neubert and Rosner in the context of the charged $`B\to \pi K`$ system. However, as (4) is also satisfied by the corresponding charged combination, the same feature can be used in the neutral strategy as well . To this end, two electroweak penguin operators with tiny Wilson coefficients are neglected, as well as electroweak penguins with internal up and charm quarks. Furthermore, appropriate Fierz transformations of the remaining electroweak penguin operators are performed, and the $`SU(3)`$ flavour symmetry of strong interactions is applied. Finally, one arrives at the following result :
$$qe^{i\omega }=0.63\times \left[\frac{0.41}{R_b}\right],$$
(17)
where also factorizable $`SU(3)`$-breaking corrections have been taken into account. The amplitude $`T+C`$, i.e. the parameter $`r_\mathrm{n}`$, can be determined with the help of the decay $`B^+\to \pi ^+\pi ^0`$ by using the $`SU(3)`$ flavour symmetry of strong interactions :
$$T+C=\sqrt{2}\frac{V_{us}}{V_{ud}}\frac{f_K}{f_\pi }A(B^+\to \pi ^+\pi ^0).$$
(18)
Here the ratio $`f_K/f_\pi =1.2`$ of the kaon and pion decay constants takes into account factorizable $`SU(3)`$-breaking corrections. Electroweak penguin corrections to this expression can be taken into account theoretically , but play a minor role in this case. The CLEO collaboration already sees some indication of the $`B^\pm \to \pi ^\pm \pi ^0`$ modes , with a CP-averaged branching ratio of
$$\text{BR}(B^\pm \to \pi ^\pm \pi ^0)=\left(5.6_{-2.3}^{+2.6}\pm 1.7\right)\times 10^{-6}.$$
(19)
However, the statistical significance of the signal yield is not yet sufficient to claim an observation of this channel. Nevertheless, using (19), and taking into account the measured CP-averaged $`B_d\to \pi ^0K`$ branching ratio, the combination of (9) and (18) yields
$$r_\mathrm{n}=0.17\pm 0.06,$$
(20)
where we have added the experimental errors in quadrature.
The bounds on $`\gamma `$ implied by $`R_\mathrm{n}`$ are related to extremal values of this observable. If we keep $`r_\mathrm{n}`$ and $`\delta _\mathrm{n}`$ as free parameters, we obtain the following minimal value for $`R_\mathrm{n}`$ :
$$R_\mathrm{n}^{\mathrm{min}}|_{r_\mathrm{n},\delta _\mathrm{n}}=\left[\frac{1+2q\rho _\mathrm{n}\mathrm{cos}(\theta _\mathrm{n}+\omega )+q^2\rho _\mathrm{n}^2}{\left(1-2q\mathrm{cos}\omega \mathrm{cos}\gamma +q^2\right)\left(1+2\rho _\mathrm{n}\mathrm{cos}\theta _\mathrm{n}\mathrm{cos}\gamma +\rho _\mathrm{n}^2\right)}\right]\mathrm{sin}^2\gamma .$$
(21)
On the other hand, if only the strong phase $`\delta _\mathrm{n}`$ is kept as an unknown quantity, $`R_\mathrm{n}`$ takes minimal and maximal values, which are given by
$$R_\mathrm{n}^{\mathrm{ext}}|_{\delta _\mathrm{n}}=1\pm \mathrm{\hspace{0.17em}2}\frac{r_\mathrm{n}}{u_\mathrm{n}}\sqrt{h_\mathrm{n}^2+k_\mathrm{n}^2}+v^2r_\mathrm{n}^2.$$
(22)
Expressions (21) and (22) are the main equations of our paper. The parameter $`\rho _\mathrm{n}`$ is usually expected at the level of a few percent , and also governs direct CP violation in $`B_d\to \pi ^0K`$; model calculations of the corresponding CP asymmetry give results within the range $`[0.4\%,5\%]`$ . However, it should be kept in mind that $`\rho _\mathrm{n}`$ may be enhanced by final-state-interaction processes . These issues will be discussed in more detail in the following section.
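As an aside, Eqs. (12)–(15), (21) and (22) are straightforward to evaluate numerically. The following minimal Python sketch (our illustration, not part of the original analysis; NumPy assumed, function names of our choosing) encodes them directly:

```python
import numpy as np

def hk_uv(gamma, q=0.63, omega=0.0, rho=0.0, theta=0.0):
    """Auxiliary quantities h_n, k_n, u_n, v of Eqs. (12)-(15)."""
    cg = np.cos(gamma)
    h = cg + rho*np.cos(theta) - q*(np.cos(omega) + rho*np.cos(theta - omega)*cg)
    k = rho*np.sin(theta) + q*(np.sin(omega) - rho*np.sin(theta - omega)*cg)
    u = np.sqrt(1.0 + 2.0*rho*np.cos(theta)*cg + rho**2)
    v = np.sqrt(1.0 - 2.0*q*np.cos(omega)*cg + q**2)
    return h, k, u, v

def Rn_min(gamma, q=0.63, omega=0.0, rho=0.0, theta=0.0):
    """Minimal R_n for free r_n and delta_n -- Eq. (21)."""
    cg = np.cos(gamma)
    num = 1.0 + 2.0*q*rho*np.cos(theta + omega) + (q*rho)**2
    den = ((1.0 - 2.0*q*np.cos(omega)*cg + q**2)
           *(1.0 + 2.0*rho*np.cos(theta)*cg + rho**2))
    return num/den*np.sin(gamma)**2

def Rn_ext(gamma, rn, sign=+1, **kw):
    """Extremal values of R_n for fixed r_n, free delta_n -- Eq. (22)."""
    h, k, u, v = hk_uv(gamma, **kw)
    return 1.0 + sign*2.0*rn/u*np.sqrt(h**2 + k**2) + (v*rn)**2
```

Scanning these functions over $`\gamma `$ reproduces the curves discussed in the next section.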
The formulae given above apply also to the charged $`B\to \pi K`$ system, if we perform the following replacements:
$$r_\mathrm{n}\to r_\mathrm{c}\equiv \frac{|T+C|}{\sqrt{\langle |P|^2\rangle }},\rho _\mathrm{n}e^{i\theta _\mathrm{n}}\to \rho e^{i\theta },\delta _\mathrm{n}\to \delta _\mathrm{c}\equiv \delta _{T+C}-\delta _{tc}^\mathrm{c},$$
(23)
where
$$P\equiv A(B^+\to \pi ^+K^0)=\left(1-\frac{\lambda ^2}{2}\right)\lambda ^2A\left[1+\rho e^{i\theta }e^{i\gamma }\right]\left|𝒫_{tc}^\mathrm{c}\right|e^{i\delta _{tc}^\mathrm{c}},$$
(24)
with
$$\rho e^{i\theta }=\frac{\lambda ^2R_b}{1-\lambda ^2}\left[1-\left(\frac{𝒫_{uc}^\mathrm{c}+𝒜}{𝒫_{tc}^\mathrm{c}}\right)\right].$$
(25)
Here the amplitude $`𝒜`$ is due to annihilation topologies. Using (18), (19) and the measured CP-averaged $`B^\pm \to \pi ^\pm K`$ branching ratio, we obtain
$$r_\mathrm{c}=0.21\pm 0.06,$$
(26)
where we have again added the experimental errors in quadrature.
The parameter $`\rho `$ is a measure of the importance of certain rescattering effects , and can be probed by comparing $`B^\pm \to \pi ^\pm K`$ with its $`U`$-spin counterpart $`B^\pm \to K^\pm K`$ . To this end, we consider the following quantity
$$K\equiv \left[\frac{1}{ϵR_{SU(3)}^2}\right]\left[\frac{\text{BR}(B^\pm \to \pi ^\pm K)}{\text{BR}(B^\pm \to K^\pm K)}\right]=\frac{1+2\rho \mathrm{cos}\theta \mathrm{cos}\gamma +\rho ^2}{ϵ^2-2ϵ\rho \mathrm{cos}\theta \mathrm{cos}\gamma +\rho ^2},$$
(27)
where $`ϵ\equiv \lambda ^2/(1-\lambda ^2)`$, and
$$R_{SU(3)}=\frac{F_{B\pi }(M_K^2;0^+)}{F_{BK}(M_K^2;0^+)}$$
(28)
describes factorizable $`U`$-spin-breaking corrections. If we use the model of Bauer, Stech and Wirbel to estimate the relevant form factors, we obtain $`R_{SU(3)}=𝒪(0.7)`$. The expression on the right-hand side of (27) implies the following allowed range for $`\rho `$ (for a detailed discussion, see and ):
$$\frac{1-ϵ\sqrt{K}}{1+\sqrt{K}}\le \rho \le \frac{1+ϵ\sqrt{K}}{|1-\sqrt{K}|}.$$
(29)
The present CLEO data give $`\text{BR}(B^\pm \to K^\pm K)/\text{BR}(B^\pm \to \pi ^\pm K)<0.3`$ at $`90\%`$ C.L. . Consequently, using (29), this upper bound implies $`\rho <0.15`$ for $`R_{SU(3)}=0.7`$, and is not in favour of dramatic rescattering effects, although this upper bound is still one order of magnitude above the usual model calculations based on factorization arguments.
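This number is easy to reproduce from (27)–(29); a minimal sketch (our illustration), assuming the standard Wolfenstein value $`\lambda =0.22`$, which is not quoted explicitly above:

```python
import numpy as np

lam, R_SU3 = 0.22, 0.7          # lambda assumed; R_SU(3) as quoted above
eps = lam**2/(1.0 - lam**2)

# BR(B -> K K)/BR(B -> pi K) < 0.3 bounds K from below via Eq. (27) ...
K_min = 1.0/(eps*R_SU3**2*0.3)
# ... and the upper bound of Eq. (29) is largest at K = K_min
sqK = np.sqrt(K_min)
rho_max = (1.0 + eps*sqK)/abs(1.0 - sqK)
print(f"K > {K_min:.0f}  =>  rho < {rho_max:.2f}")      # rho < 0.15
```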
Let us finally note that the formalism discussed in this section can also be applied to the “mixed” $`B_d\to \pi ^{\mp }K^\pm `$, $`B^\pm \to \pi ^\pm K`$ system. To this end, we just have to make appropriate replacements of variables, involving certain amplitudes $`T`$ and $`P_{\mathrm{ew}}^\mathrm{C}`$, which measure colour-allowed tree-diagram-like and colour-suppressed electroweak penguin topologies, respectively. In order to fix $`T`$, arguments based on the factorization hypothesis have to be employed, and usually it is assumed that the colour-suppressed electroweak penguin amplitude $`P_{\mathrm{ew}}^\mathrm{C}`$ plays a very minor role. However, in contrast to (5), these quantities may be affected by rescattering processes. An interesting approach, making use of a heavy-quark expansion for non-leptonic $`B`$ decays, was recently proposed in Ref. , which could help to reduce the uncertainties related to $`T`$ and $`P_{\mathrm{ew}}^\mathrm{C}`$. It should also be useful for reducing the theoretical uncertainties of $`r_\mathrm{n}`$, $`r_\mathrm{c}`$ and $`qe^{i\omega }`$, which are due to non-factorizable $`SU(3)`$-breaking corrections. Moreover, this approach also allows a calculation of the parameters $`\rho _\mathrm{n}e^{i\theta _\mathrm{n}}`$ and $`\rho e^{i\theta }`$. We will not consider the $`B_d\to \pi ^{\mp }K^\pm `$, $`B^\pm \to \pi ^\pm K`$ system further in this paper, and refer the reader to Refs. , where detailed discussions can be found. Recently, the utility of $`B_s\to \pi K`$ decays in this context was also pointed out .
## 3 Bounds on $`\gamma `$ and Constraints in the $`\overline{\varrho }`$–$`\overline{\eta }`$ Plane
The bounds on the CKM angle $`\gamma `$ implied by the CP-averaged branching ratios of the neutral $`B\to \pi K`$ decays are related to the extremal values of $`R_\mathrm{n}`$ given in (21) and (22). In Fig. 1, we show their dependence on $`\gamma `$ for $`qe^{i\omega }=0.63`$ and $`\rho _\mathrm{n}=0`$; in Fig. 1, we have assumed $`0^{\circ }\le \gamma \le 180^{\circ }`$, as implied by the measured CP-violating parameter $`\epsilon _K`$ of the neutral kaon system. Here all values of $`R_\mathrm{n}`$ below the $`R_{\mathrm{min}}`$ curve are excluded. If $`r_\mathrm{n}`$ is fixed, for example to be equal to 0.17, all values of $`R_\mathrm{n}`$ outside the shaded region are excluded; this region is enlarged (reduced) for larger (smaller) values of $`r_\mathrm{n}`$. Fig. 1 allows us to read off immediately the allowed range for $`\gamma `$ corresponding to a given value of $`R_\mathrm{n}`$. Let us consider, for example, the central value of (3), $`R_\mathrm{n}=0.6`$. In this case, the $`R_{\mathrm{min}}`$ curve implies the allowed range $`0^{\circ }\le \gamma \le 21^{\circ }`$ or $`100^{\circ }\le \gamma \le 180^{\circ }`$. If we use additional information on the parameter $`r_\mathrm{n}`$, we may put even stronger constraints on $`\gamma `$. For $`r_\mathrm{n}=0.17`$, we obtain, for instance, the allowed range $`138^{\circ }\le \gamma \le 180^{\circ }`$.
In the case of the charged $`B\to \pi K`$ system, bounds on $`\gamma `$ can be obtained in an analogous manner. The corresponding curves for the extremal values of $`R_\mathrm{c}`$ are shown in Fig. 2. There is some kind of complementarity between the neutral and charged $`B\to \pi K`$ systems, since the CLEO data favour $`R_\mathrm{n}<1`$ and $`R_\mathrm{c}>1`$. Consequently, we have to fix $`r_\mathrm{c}`$ in order to constrain $`\gamma `$ through the charged $`B\to \pi K`$ decays. For the central values of (2) and (26), $`R_\mathrm{c}=1.3`$ and $`r_\mathrm{c}=0.21`$, we obtain $`87^{\circ }\le \gamma \le 180^{\circ }`$.
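Reusing `Rn_min` and `Rn_ext` from the sketch in the previous section, these ranges follow from a simple scan over $`\gamma `$ (again our illustration; the helper `segments` is of our choosing):

```python
import numpy as np

def segments(deg, mask):
    """Contiguous True-intervals of `mask`, reported in degrees."""
    edges = np.flatnonzero(np.diff(mask.astype(int)))
    idx = np.r_[0, edges + 1, len(mask)]
    return [(deg[a], deg[b - 1]) for a, b in zip(idx[:-1], idx[1:]) if mask[a]]

deg = np.linspace(0.0, 180.0, 3601)
g = np.radians(deg)

# Neutral system, R_n = 0.6: gamma allowed where the R_min curve lies below it
print(segments(deg, Rn_min(g) <= 0.6))                  # ~[0, 21] and [100, 180]
# ... and, fixing r_n = 0.17, inside the band of Eq. (22)
band = (Rn_ext(g, 0.17, -1) <= 0.6) & (0.6 <= Rn_ext(g, 0.17, +1))
print(segments(deg, band))                              # ~[138, 180]
# Charged system, R_c = 1.3 with r_c = 0.21, via the replacements (23)
band = (Rn_ext(g, 0.21, -1) <= 1.3) & (1.3 <= Rn_ext(g, 0.21, +1))
print(segments(deg, band))                              # ~[87, 180]
```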
The allowed ranges for $`\gamma `$ arising in the examples given above would be of particular phenomenological interest, as they would be complementary to the range of $`\gamma `$ arising from the usual indirect fits of the unitarity triangle . The most recent analysis gives
$$38^{\circ }\le \gamma \le 81^{\circ }.$$
(30)
In our examples of the bounds from the neutral $`B\to \pi K`$ system, there would be no overlap between these ranges, which could be interpreted as a manifestation of new physics . In particular, the second quadrant for $`\gamma `$ is favoured; other arguments for $`\mathrm{cos}\gamma <0`$ using $`B\to PP`$, $`PV`$ and $`VV`$ decays were recently given in (see also ). However, the present data do not yet allow us to draw any definite conclusions. Before we can speculate on physics beyond the Standard Model, it is of course crucial to explore hadronic uncertainties. For the formalism used in this paper, this was done in ; within a different framework, similar considerations were also made for the charged and “mixed” $`B\to \pi K`$ systems in .
The theoretical accuracy of the bounds on $`\gamma `$ discussed in this section is limited both by non-factorizable $`SU(3)`$-breaking corrections and by rescattering processes. The former may affect the determination of the parameters $`qe^{i\omega }`$ and $`r_{\mathrm{n},\mathrm{c}}`$, whereas the latter may lead to sizeable values of $`\rho _\mathrm{n}`$ and $`\rho `$. In order to control the non-factorizable $`SU(3)`$-breaking corrections, the “QCD factorization” approach presented in appears to be very promising.
In the case of the neutral strategy, the parameter $`\rho _\mathrm{n}e^{i\theta _\mathrm{n}}`$ can be probed – and even taken into account in the bounds on $`\gamma `$ in an exact manner – through CP-violating effects. To this end, we consider the $`B_d\to \pi ^0K`$ modes and require that the kaon be observed as a $`K_\mathrm{S}`$. The resulting final state is then an eigenstate of the CP operator with eigenvalue $`-1`$, and we obtain the following time-dependent CP asymmetry :
$$a_{\mathrm{CP}}(B_d(t)\to \pi ^0K_\mathrm{S})\equiv \frac{\text{BR}(B_d^0(t)\to \pi ^0K_\mathrm{S})-\text{BR}(\overline{B_d^0}(t)\to \pi ^0K_\mathrm{S})}{\text{BR}(B_d^0(t)\to \pi ^0K_\mathrm{S})+\text{BR}(\overline{B_d^0}(t)\to \pi ^0K_\mathrm{S})}=𝒜_{\mathrm{CP}}^{\mathrm{dir}}(B_d\to \pi ^0K_\mathrm{S})\mathrm{cos}(\mathrm{\Delta }M_dt)+𝒜_{\mathrm{CP}}^{\mathrm{mix}}(B_d\to \pi ^0K_\mathrm{S})\mathrm{sin}(\mathrm{\Delta }M_dt),$$
(31)
where $`𝒜_{\mathrm{CP}}^{\mathrm{dir}}(B_d\to \pi ^0K_\mathrm{S})`$ and $`𝒜_{\mathrm{CP}}^{\mathrm{mix}}(B_d\to \pi ^0K_\mathrm{S})`$ are due to “direct” and “mixing-induced” CP violation, respectively. Using (6), these observables take the following form:
$$𝒜_{\mathrm{CP}}^{\mathrm{dir}}(B_d\to \pi ^0K_\mathrm{S})=-\frac{2\rho _\mathrm{n}\mathrm{sin}\theta _\mathrm{n}\mathrm{sin}\gamma }{1+2\rho _\mathrm{n}\mathrm{cos}\theta _\mathrm{n}\mathrm{cos}\gamma +\rho _\mathrm{n}^2}$$
(32)
$$𝒜_{\mathrm{CP}}^{\mathrm{mix}}(B_d\to \pi ^0K_\mathrm{S})=-\left[\frac{\mathrm{sin}\left(\varphi _\mathrm{M}^{(d)}+\varphi _K\right)+2\rho _\mathrm{n}\mathrm{cos}\theta _\mathrm{n}\mathrm{sin}\left(\varphi _\mathrm{M}^{(d)}+\varphi _K+\gamma \right)+\rho _\mathrm{n}^2\mathrm{sin}\left(\varphi _\mathrm{M}^{(d)}+\varphi _K+2\gamma \right)}{1+2\rho _\mathrm{n}\mathrm{cos}\theta _\mathrm{n}\mathrm{cos}\gamma +\rho _\mathrm{n}^2}\right].$$
(33)
The latter expression reduces to
$$𝒜_{\mathrm{CP}}^{\mathrm{mix}}(B_d\to \pi ^0K_\mathrm{S})=-\mathrm{sin}\left(\varphi _\mathrm{M}^{(d)}+\varphi _K\right)=𝒜_{\mathrm{CP}}^{\mathrm{mix}}(B_d\to J/\psi K_\mathrm{S})$$
(34)
in the case of $`\rho _\mathrm{n}=0`$ . Clearly, a violation of (34) and a sizeable value of the direct CP asymmetry (32) would signal that the parameter $`\rho _\mathrm{n}`$ cannot be neglected. Such a feature may be due either to large rescattering effects or to new-physics contributions. The whole pattern of all experimentally observed $`B\to \pi K`$ and $`B\to K\overline{K}`$ decays may allow us to distinguish between these cases.
In the mixing-induced CP asymmetry (34), $`\varphi _\mathrm{M}^{(d)}=2\mathrm{arg}(V_{td}^{\ast }V_{tb})`$ is related to the weak $`B_d^0`$–$`\overline{B_d^0}`$ mixing phase, whereas $`\varphi _K`$ is related to $`K^0`$–$`\overline{K^0}`$ mixing, and is negligibly small in the Standard Model. The combination $`\varphi _d=\varphi _\mathrm{M}^{(d)}+\varphi _K`$ is equal to $`2\beta `$ in the Standard Model, and can be determined “straightforwardly” through the “gold-plated” mode $`B_d\to J/\psi K_\mathrm{S}`$ at the $`B`$-factories. Strictly speaking, a measurement of $`𝒜_{\mathrm{CP}}^{\mathrm{mix}}(B_d\to J/\psi K_\mathrm{S})`$ allows us to determine only $`\mathrm{sin}\varphi _d`$, i.e. to fix $`\varphi _d`$ up to a twofold ambiguity. Several strategies were proposed in the literature to resolve this ambiguity .
If we assume that $`\varphi _d`$ has been fixed this way, the observables (32) and (33) allow us to determine $`\rho _\mathrm{n}`$ and $`\theta _\mathrm{n}`$ as a function of $`\gamma `$. The general formulae given in the previous section then allow us to take these parameters into account in the curves shown in Fig. 1. The usual model calculations for non-leptonic $`B`$ decays give values for $`\rho _\mathrm{n}`$ at the level of a few percent. In order to illustrate the impact on the bounds on $`\gamma `$, let us take $`\rho _\mathrm{n}=0.05`$ and $`\theta _\mathrm{n}\in \{0^{\circ },180^{\circ }\}`$. For the example given above, we then obtain the allowed ranges $`0^{\circ }\le \gamma \le (21^{\circ }\pm 1^{\circ })`$ or $`(100^{\circ }\pm 4^{\circ })\le \gamma \le 180^{\circ }`$, and $`(138^{\circ }\pm 2^{\circ })\le \gamma \le 180^{\circ }`$. The feature that the uncertainty due to $`\rho _\mathrm{n}`$ is larger in the case of $`R_\mathrm{n}^{\mathrm{min}}`$ can be understood easily by performing an expansion of (21) and (22) in powers of $`\rho _\mathrm{n}`$, and neglecting second-order terms of $`𝒪(\rho _\mathrm{n}^2)`$, $`𝒪(r_\mathrm{n}\rho _\mathrm{n})`$ and $`𝒪(r_\mathrm{n}^2)`$:
$`R_\mathrm{n}^{\mathrm{min}}|_{r_\mathrm{n},\delta _\mathrm{n}}^{\mathrm{L}.\mathrm{O}.}`$ $`=`$ $`\left[\frac{1+2\rho _\mathrm{n}\mathrm{cos}\theta _\mathrm{n}\left(q-\mathrm{cos}\gamma \right)}{1-2q\mathrm{cos}\gamma +q^2}\right]\mathrm{sin}^2\gamma `$ (35)
$`R_\mathrm{n}^{\mathrm{ext}}|_{\delta _\mathrm{n}}^{\mathrm{L}.\mathrm{O}.}`$ $`=`$ $`1\pm 2r_\mathrm{n}\left|\mathrm{cos}\gamma -q\right|.`$ (36)
Here we have moreover made use of (17), which gives $`\omega =0`$. Interestingly, as was noted for the charged $`B\to \pi K`$ system in , there are no terms of $`𝒪(\rho _\mathrm{n})`$ present in (36), in contrast to (35). Consequently, the bounds on $`\gamma `$ related to (21) are affected more strongly by $`\rho _\mathrm{n}`$ than those implied by (22). In the case of the charged strategy, we have to use the $`U`$-spin flavour symmetry, relating $`B^\pm \to \pi ^\pm K`$ to $`B^\pm \to K^\pm K`$, in order to take into account the parameters $`\rho `$ and $`\theta `$ in the curves shown in Fig. 2 . To this end, the observable $`K`$ introduced in (27) has to be combined with the direct CP asymmetries in $`B^\pm \to \pi ^\pm K`$ or $`B^\pm \to K^\pm K`$ modes.
In addition to the theoretical uncertainties associated with $`SU(3)`$-breaking and rescattering effects, another uncertainty of the constraints on $`\gamma `$ is due to the CKM factor $`R_b`$ in expression (17) for the electroweak penguin parameter $`qe^{i\omega }`$. Because of this feature, it is actually more appropriate to consider constraints in the $`\overline{\varrho }`$–$`\overline{\eta }`$ plane. A similar “trick” was also employed for $`B_d\to \pi ^+\pi ^{-}`$ decays in , and recently for the charged $`B\to \pi K`$ system in .
The constraints in the $`\overline{\varrho }`$–$`\overline{\eta }`$ plane can be obtained straightforwardly from (21) and (22). In the former case, we obtain
$$\mathrm{cos}\gamma =R_\mathrm{n}q\pm \sqrt{\left(1-R_\mathrm{n}\right)\left(1-R_\mathrm{n}q^2\right)},$$
(37)
whereas we have in the latter case
$$\mathrm{cos}\gamma =\frac{1-R_\mathrm{n}\pm 2qr_\mathrm{n}+\left(1+q^2\right)r_\mathrm{n}^2}{2r_\mathrm{n}\left(qr_\mathrm{n}\pm 1\right)}.$$
(38)
In these expressions, we have assumed, for simplicity, $`\rho _\mathrm{n}=0`$ and $`\omega =0`$. For the charged $`B\to \pi K`$ system, we obtain analogous expressions. The right-hand sides of these formulae depend implicitly on the CKM factor $`R_b`$ through the electroweak penguin parameter $`qe^{i\omega }`$, which is given by (17). Consequently, it is actually more appropriate to consider contours in the $`\overline{\varrho }`$–$`\overline{\eta }`$ plane instead of the CKM angle $`\gamma `$. They can be obtained with the help of (37) and (38) by taking into account
$$\overline{\varrho }=R_b\mathrm{cos}\gamma ,\overline{\eta }=R_b\mathrm{sin}\gamma ,$$
(39)
and are illustrated in Figs. 3 and 4 for the examples given in the previous section.
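A short sketch (ours) of how such contour points may be generated from (17), (37) and (39), scanning the quoted $`R_b`$ range:

```python
import numpy as np

def contour_points(Rn, Rb):
    """(rho-bar, eta-bar) points from Eqs. (17), (37) and (39)."""
    q = 0.63*(0.41/Rb)                                   # Eq. (17), omega = 0
    pts = []
    for sign in (+1.0, -1.0):
        cosg = Rn*q + sign*np.sqrt((1.0 - Rn)*(1.0 - Rn*q**2))  # Eq. (37)
        gamma = np.arccos(np.clip(cosg, -1.0, 1.0))      # 0 <= gamma <= 180 deg
        pts.append((Rb*np.cos(gamma), Rb*np.sin(gamma))) # Eq. (39)
    return pts

for Rb in (0.34, 0.41, 0.48):                            # R_b = 0.41 +/- 0.07
    print(Rb, contour_points(0.6, Rb))
```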
## 4 Bounds on Strong Phases
If we use the general expression (11) for $`R_\mathrm{n}`$, we can determine $`\mathrm{cos}\delta _\mathrm{n}`$ as a function of $`\gamma `$:
$$\mathrm{cos}\delta _\mathrm{n}=\frac{1}{h_\mathrm{n}^2+k_\mathrm{n}^2}\left[\frac{\left(1-R_\mathrm{n}+v^2r_\mathrm{n}^2\right)u_\mathrm{n}h_\mathrm{n}}{2r_\mathrm{n}}\pm k_\mathrm{n}\sqrt{h_\mathrm{n}^2+k_\mathrm{n}^2-\left[\frac{\left(1-R_\mathrm{n}+v^2r_\mathrm{n}^2\right)u_\mathrm{n}}{2r_\mathrm{n}}\right]^2}\right].$$
(40)
In Fig. 5, we show the dependence of $`\mathrm{cos}\delta _\mathrm{n}`$ on $`\gamma `$ for various values of $`R_\mathrm{n}`$ in the case of $`qe^{i\omega }=0.63`$ and $`r_\mathrm{n}=0.17`$. From this figure, the allowed range for $`\gamma `$ can also be read off for a given value of $`R_\mathrm{n}`$. For the central value $`R_\mathrm{n}=0.6`$ of the present CLEO data, we moreover obtain $`-1\le \mathrm{cos}\delta _\mathrm{n}\le -0.86`$. Performing the replacements given in (23), (40) applies also to the charged $`B\to \pi K`$ system. The corresponding contours in the $`\gamma `$–$`\mathrm{cos}\delta _\mathrm{c}`$ plane are shown in Fig. 6. For $`R_\mathrm{c}=1.3`$, we obtain $`+0.27\le \mathrm{cos}\delta _\mathrm{c}\le +1`$.
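These ranges can be checked by scanning Eq. (40); a sketch (ours), reusing `hk_uv` from the sketch in Sect. 2:

```python
import numpy as np

def cos_delta(gamma, R, r, sign=+1, **kw):
    """cos(delta) as a function of gamma -- Eq. (40); None where excluded."""
    h, k, u, v = hk_uv(gamma, **kw)
    a = (1.0 - R + (v*r)**2)*u/(2.0*r)
    disc = h**2 + k**2 - a**2
    if disc < 0.0:
        return None                       # no solution: gamma excluded
    return (a*h + sign*k*np.sqrt(disc))/(h**2 + k**2)

vals = [cos_delta(g, 0.6, 0.17) for g in np.radians(np.linspace(1.0, 180.0, 720))]
vals = [c for c in vals if c is not None]
print(min(vals), max(vals))               # roughly -1.0 ... -0.86
```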
As can be seen in (9) and (23), we have $`\delta _\mathrm{n}-\delta _\mathrm{c}=\delta _{tc}^\mathrm{c}-\delta _{tc}^\mathrm{n}`$, where $`\delta _{tc}^\mathrm{c}`$ and $`\delta _{tc}^\mathrm{n}`$ denote the strong phases of the amplitudes $`𝒫_{tc}^\mathrm{c}`$ and $`𝒫_{tc}^\mathrm{n}`$, which describe the differences of penguin topologies with internal top- and charm-quark exchanges of the decays $`B^+\to \pi ^+K^0`$ and $`B_d^0\to \pi ^0K^0`$, respectively. These penguin topologies consist of QCD and electroweak penguins, where the latter contribute to $`B^+\to \pi ^+K^0`$ only in colour-suppressed form. In contrast, $`B_d^0\to \pi ^0K^0`$ receives contributions both from colour-allowed and from colour-suppressed electroweak penguins. Nevertheless, they are expected to be at most of $`𝒪(20\%)`$ of the $`B_d^0\to \pi ^0K^0`$ QCD penguin amplitude. If we neglect the electroweak penguins and make use of isospin flavour-symmetry arguments, we obtain $`𝒫_{tc}^\mathrm{n}\approx 𝒫_{tc}^\mathrm{c}`$, yielding $`\delta _\mathrm{n}\approx \delta _\mathrm{c}`$ and $`\mathrm{cos}\delta _\mathrm{n}\approx \mathrm{cos}\delta _\mathrm{c}`$. Employing moreover “factorization”, these cosines are expected to be close to $`+1`$.
Consequently, as the present CLEO data are in favour of $`\mathrm{cos}\delta _\mathrm{n}<0`$ and $`\mathrm{cos}\delta _\mathrm{c}>0`$, we arrive at a “puzzling” situation, although it is of course too early to draw definite conclusions. If future data should confirm this “discrepancy”, it may be an indication of new-physics contributions to the electroweak penguin sector, or a manifestation of large non-factorizable $`SU(3)`$-breaking effects. Since the parameter $`\rho _\mathrm{n}`$ enters expression (11) for $`R_\mathrm{n}`$ in the term proportional to $`r_\mathrm{n}`$, it can be regarded as a second-order effect and does not play a dramatic role for the constraints on $`\mathrm{cos}\delta _{\mathrm{n}(\mathrm{c})}`$ and $`\gamma `$. This feature is illustrated in Fig. 7 for the central values of the present CLEO data.
In Fig. 8, we consider the impact of a modified electroweak penguin parameter, $`qe^{i\omega }=1.26\times \mathrm{exp}(i45^{\circ })`$, which differs significantly from the $`SU(3)`$ Standard-Model expression (17). In this case, the discrepancy between $`\mathrm{cos}\delta _\mathrm{n}`$ and $`\mathrm{cos}\delta _\mathrm{c}`$ would essentially be resolved, favouring values of $`𝒪(-0.5)`$, which would still be in conflict with the factorization expectation. A value of $`qe^{i\omega }=1.26\times \mathrm{exp}(i45^{\circ })`$ may be due to CP-conserving new-physics contributions to the electroweak penguin sector . In general, new physics will also lead to CP-violating contributions, which may induce sizeable direct CP violation in $`B_d\to \pi ^0K_\mathrm{S}`$, and a violation of (34). Consequently, as we have already emphasized above, it would be an important task to measure the CP-violating observables of this decay.
If the new-physics contributions are CP-conserving, it will be hard to distinguish them from large non-factorizable flavour-symmetry-breaking effects, which may also shift the parameter $`qe^{i\omega }`$ from (17). In Refs. , it was argued that these effects are very small, whereas we gave a more critical picture in . Also the approach proposed in is in favour of small non-factorizable effects. The deviation from (17) of the value $`qe^{i\omega }=1.26\times \mathrm{exp}(i45^{\circ })`$ used in the example given in Fig. 8 would probably be too large to be explained by $`SU(3)`$ breaking in a “natural” way. However, there may be additional sources of flavour-symmetry-breaking effects. An example is $`\pi ^0`$–$`\eta `$, $`\eta ^{\prime }`$ mixing, which has not yet been considered for $`B\to \pi ^0K`$ decays. In a recent paper , it was emphasized that isospin violation arising from such effects could mimic new physics in the extraction of the CKM angle $`\alpha `$ from $`B\to \pi \pi `$ isospin relations. It would be interesting to extend these studies also to the $`B\to \pi K`$ approaches to probe $`\gamma `$.
## 5 Conclusions
As we have pointed out in Ref. , the neutral $`B\to \pi K`$ strategy could be useful to constrain – and eventually determine – $`\gamma `$ in a manner analogous to the strategy of Neubert and Rosner using charged $`B\to \pi K`$ modes. The most recent CLEO data look very interesting in this respect. As we have illustrated in Figs. 1–4, improved measurements of both the neutral and the charged modes, in particular taken together, could give a powerful constraint on $`\gamma `$. There is some indication that the second quadrant for $`\gamma `$ is preferred. This is in contrast to the standard analysis of the unitarity triangle, which favours the first quadrant. Unfortunately, no definite conclusions can be drawn at present. This “discrepancy” between the $`B\to \pi K`$ approaches and the standard analysis of the unitarity triangle could turn out to be more pronounced as the $`B`$-decay data improve and the lower bound on $`B_s^0`$–$`\overline{B_s^0}`$ mixing is raised, forcing the upper bound on $`\gamma `$ from the standard analysis to become even smaller than presently known.
We have also pointed out that the CLEO data suggest bounds on the strong phases $`\delta _\mathrm{n}`$ and $`\delta _\mathrm{c}`$ with $`\mathrm{cos}\delta _\mathrm{n}<0`$ and $`\mathrm{cos}\delta _\mathrm{c}>0`$. The substantial deviation of $`\delta _\mathrm{n}`$ from $`\delta _\mathrm{c}`$ and the negative value of $`\mathrm{cos}\delta _\mathrm{n}`$, if confirmed by improved data, would indicate either substantial new-physics contributions to the electroweak penguin sector or large non-factorizable $`SU(3)`$-breaking effects. In order to distinguish between these possibilities, detailed studies of the various patterns of new-physics effects in all $`B\to \pi K`$ decays are essential, as well as critical analyses of possible sources of $`SU(3)`$ breaking. We hope that future studies following the strategies discussed in this paper will eventually shed light on the physics beyond the Standard Model.
This work has been supported in part by the German Bundesministerium für Bildung und Forschung under contract 05HT9WOA0.
## Introduction
Regulated gene expression is the process through which cells control fundamental functions, such as the production of enzymatic and structural proteins, and the time sequence of this production during development . Many of these regulatory processes take place at the level of gene transcription , and there is evidence that the underlying reactions governing transcription can be affected by external influences from the environment .
As experimental techniques are increasingly capable of providing reliable data pertaining to gene regulation, theoretical models are becoming important in the understanding and manipulation of such processes. The most common theoretical approach is to model the interactions of elements in a regulatory network as biochemical reactions. Given such a set of chemical reactions, the individual jump processes (i.e., the creation or destruction of a given reaction species) and their associated probabilities are considered. In its most general form, this often leads to a type of Monte Carlo simulation of the interaction probabilities . Although this approach suffers from a lack of analytic tractability, its strength is its completeness – fluctuations in species’ concentrations are embedded in the modeling process. These internal fluctuations are important for systems containing modest numbers of elements, or when the volume is small.
Rate equations originate as a first approximation to such a general approach, whereby internal fluctuations are ignored. These deterministic differential equations describe the evolution of the mean value of some property of the set of reactions, typically the concentrations of the various elements involved. The existence of positive or negative feedback in a regulatory network is thought to be common , and, within the reaction framework, feedback leads to nonlinear rate equations .
Noise in the form of random fluctuations arises in these systems in one of two ways. As discussed above, internal noise is inherent in the biochemical reactions. Its magnitude is proportional to the inverse of the system size, and its origin is often thermal. On the other hand, external noise originates in the random variation of one or more of the externally set control parameters, such as the rate constants associated with a given set of reactions. If the noise source is small, its effect can often be incorporated post hoc into the rate equations. In the case of internal noise, this is done in an attempt to recapture the lost information embodied in the rate equation approximation. But in the case of external noise, one often wishes to introduce some new phenomenon where the details of the effect are not precisely known. In either case, the governing rate equations are augmented with additive or multiplicative stochastic terms. These terms, viewed as a random perturbation to the deterministic picture, can induce various effects, most notably the switching between potential attractors (i.e., fixed points, limit cycles, chaotic attractors) .
While impressive progress has been made in genome sequencing and the understanding of certain qualitative features of gene expression, there have been comparatively few advancements in the quantitative understanding of genetic networks. This is due to the inherent complexity of such biological systems. In this work, we adopt an engineering approach in studying a solitary gene network. We envision that a plasmid, or genetic applet , containing a small, self-contained gene regulatory network, can be designed and studied in isolation. Such an approach has two distinct advantages. First, since the approach is inherently reductionist, it can make gene network problems tractable and thus more amenable to a mathematical formulation. Secondly, such an approach could form the basis for new techniques in the regulation of in vivo gene networks, whereby a genetic applet is designed to control cellular function.
In this paper, we develop a model describing the dynamics of protein concentration in such a genetic applet, and demonstrate how external noise can be used to control the network. Although our results are general for networks designed with positive autoregulation, we ground the discussion by considering an applet derived from the promotor region of bacteriophage $`\lambda `$. Since the range of potentially interesting behavior is wide, we focus primarily on the steady-state mean value of the concentration of the $`\lambda `$ repressor protein. This choice is motivated by experiment; detailed dynamical information is still rather difficult to obtain, as are statistical data concerning higher moments. We show how an additive noise term can be introduced to our model, and how the subsequent Langevin equation is analyzed by way of transforming to an equation describing the evolution of a probability function. We then obtain the steady-state mean repressor concentration by solving this equation in the long time limit, and discuss its relationship to the magnitude of the external perturbation. This leads to a potentially useful application, whereby one utilizes the noise to construct a genetic switch. We then consider noise at the level of transcription, where noise enters the formulation in a multiplicative manner. As in the additive case, we transform to an equation describing a probability distribution, and solve for the steady-state mean concentration as a function of noise strength. Finally, we demonstrate how such a noise source can be used to amplify the repressor concentration by several orders of magnitude.
## A Model for Repressor Expression
In the context of the lysis-lysogeny pathway in the $`\lambda `$ virus, the autoregulation of $`\lambda `$ repressor expression is well-characterized . In this section, we present two models describing the regulation of such a network. We envision that our system is a plasmid consisting of the $`P_RP_{RM}`$ operator region and components necessary for transcription, translation, and degradation.
Although the full promotor region in $`\lambda `$ phage contains the three operator sites known as OR1, OR2, and OR3, we first consider a mutant system whereby the operator site OR1 is absent from the region. The basic dynamical properties of this network, along with a categorization of the biochemical reactions, are as follows . The gene cI expresses repressor (CI), which in turn dimerizes and binds to the DNA as a transcription factor. In the mutant system, this binding can take place at one of the two binding sites OR2 or OR3. (Here, we ignore nonspecific binding.) Binding at OR2 enhances transcription, which takes place downstream of OR3, while binding at OR3 represses transcription, effectively turning off production.
The chemical reactions describing the network are naturally divided into two categories – fast and slow. The fast reactions have rate constants of order seconds, and are therefore assumed to be in equilibrium with respect to the slow reactions, which are described by rates of order minutes. If we let $`X`$, $`X_2`$, and $`D`$ denote the repressor, repressor dimer, and DNA promoter site, respectively, then we may write the equilibrium reactions
$`2X`$ $`\stackrel{K_1}{\rightleftharpoons }`$ $`X_2`$ (1)
$`D+X_2`$ $`\stackrel{K_2}{\rightleftharpoons }`$ $`DX_2`$
$`D+X_2`$ $`\stackrel{K_3}{\rightleftharpoons }`$ $`DX_2^{\ast }`$
$`DX_2+X_2`$ $`\stackrel{K_4}{\rightleftharpoons }`$ $`DX_2X_2`$
where the $`DX_2`$ and $`DX_2^{\ast }`$ complexes denote binding to the OR2 or OR3 sites, respectively, $`DX_2X_2`$ denotes binding to both sites, and the $`K_i`$ are forward equilibrium constants. We let $`K_3=\sigma _1K_2`$ and $`K_4=\sigma _2K_2`$, so that $`\sigma _1`$ and $`\sigma _2`$ represent binding strengths relative to the dimer-OR2 strength.
The slow reactions are transcription and degradation,
$`DX_2+P`$ $`\stackrel{k_t}{\longrightarrow }`$ $`DX_2+P+nX`$ (2)
$`X`$ $`\stackrel{k_d}{\longrightarrow }`$ $`A`$
where $`P`$ denotes the concentration of RNA polymerase and $`n`$ is the number of proteins per mRNA transcript. These reactions are considered irreversible.
If we consider an in vitro system with high copy-number plasmids (this assumption is necessary since the number of relevant molecules per cell is small in vivo; since there are many cells, we could alternatively use state probabilities as dynamical variables describing an in vivo system), we may define concentrations as our dynamical variables. Letting $`x=[X]`$, $`y=[X_2]`$, $`d=[D]`$, $`u=[DX_2]`$, $`v=[DX_2^{\ast }]`$, and $`z=[DX_2X_2]`$, we can write a rate equation describing the evolution of the concentration of repressor,
$$\dot{x}=-2k_1x^2+2k_{-1}y+nk_tp_0u-k_dx+r$$
(3)
where we assume that the concentration of RNA polymerase $`p_0`$ remains constant in time. The parameter $`r`$ is the basal rate of production of CI, i.e., the expression rate of the cI gene in the absence of a transcription factor.
We next eliminate $`y`$, $`u`$, and $`d`$ from Eq. (3) as follows. We utilize the fact that the reactions in Eq. (1) are fast compared to expression and degradation, and write algebraic expressions
$`y`$ $`=`$ $`K_1x^2`$ (4)
$`u`$ $`=`$ $`K_2dy=K_1K_2dx^2`$
$`v`$ $`=`$ $`\sigma _1K_2dy=\sigma _1K_1K_2dx^2`$
$`z`$ $`=`$ $`\sigma _2K_2uy=\sigma _2(K_1K_2)^2dx^4`$
Further, the total concentration of DNA promoter sites $`d_T`$ is constant, so that
$$d_T=d+u+v+z=d(1+(1+\sigma _1)K_1K_2x^2+\sigma _2K_1^2K_2^2x^4)$$
(5)
Under these assumptions, Eq. (3) becomes
$$\dot{x}=\frac{nk_tp_0d_TK_1K_2x^2}{1+(1+\sigma _1)K_1K_2x^2+\sigma _2K_1^2K_2^2x^4}-k_dx+r$$
(6)
Without loss of generality, we may eliminate two of the parameters in Eq. (3) by rescaling the repressor concentration $`x`$ and time. To this end, we define the dimensionless variables $`\stackrel{~}{x}=x\sqrt{K_1K_2}`$ and $`\stackrel{~}{t}=t(r\sqrt{K_1K_2})`$. Upon substitution into Eq. (3), we obtain
$$\dot{x}=\frac{\alpha x^2}{1+(1+\sigma _1)x^2+\sigma _2x^4}-\gamma x+1$$
(7)
where the time derivative is with respect to $`\stackrel{~}{t}`$ and we have suppressed the tilde on $`x`$. The dimensionless parameter $`\alpha \equiv nk_tp_0d_T/r`$ is effectively a measure of the degree to which the transcription rate is increased above the basal rate by repressor binding, and $`\gamma \equiv k_d/(r\sqrt{K_1K_2})`$ is proportional to the relative strengths of the degradation and basal rates.
For the mutant operator region of $`\lambda `$ phage, we have $`\sigma _1\approx 1`$ and $`\sigma _2\approx 5`$ , so that the two parameters $`\alpha `$ and $`\gamma `$ in Eq. (7) determine the steady-state concentration of repressor. For this equation, there are two types of behavior. For one set of parameter values, we have monostability, whereby all initial concentrations evolve to the same fixed-point value. For another set, we have three fixed points, and the initial concentration will determine which steady state is selected. Additionally, in the multiple fixed-point regime, stability analysis indicates that the middle fixed point $`x_m`$ is unstable, so that all initial values $`x<x_m`$ will evolve to the lower fixed point, while those satisfying $`x>x_m`$ will evolve to the upper. This bistability arises as a consequence of the competition between the production of $`x`$ along with dimerization and its degradation. For certain parameter values, the initial concentration is irrelevant, but for those that more closely balance production and loss, the final concentration is determined by the initial value.
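These statements are easy to verify numerically; a minimal Python sketch (ours; NumPy assumed) locates the fixed points of Eq. (7) by scanning for sign changes of its right-hand side:

```python
import numpy as np

def f(x, alpha, gamma, s1=1.0, s2=5.0):
    """Right-hand side of Eq. (7) (mutant operator region)."""
    return alpha*x**2/(1.0 + (1.0 + s1)*x**2 + s2*x**4) - gamma*x + 1.0

def fixed_points(alpha, gamma, xmax=5.0, n=50001):
    """Zeros of f, located via sign changes on a fine grid."""
    x = np.linspace(0.0, xmax, n)
    y = f(x, alpha, gamma)
    idx = np.flatnonzero(np.sign(y[:-1]) != np.sign(y[1:]))
    return [float(np.mean(x[i:i + 2])) for i in idx]

print(fixed_points(50.0, 5.0))    # one fixed point (monostable)
print(fixed_points(50.0, 15.0))   # three fixed points (bistable)
```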
Graphically, we can see how bistability arises in Eq. (7) by setting $`\alpha x^2/(1+2x^2+5x^4)=\gamma x-1`$. In Fig. 1A we plot the functions $`\alpha x^2/(1+2x^2+5x^4)`$ and $`\gamma x-1`$ for fixed $`\alpha `$ and several values of the slope $`\gamma `$. We see that for $`\gamma `$ small (whereby degradation is minimal compared with production), there is one possible steady-state value of $`x`$ (and therefore CI). As we increase $`\gamma `$ above some critical value $`\gamma _L`$, we observe that three fixed-point values appear. As we increase $`\gamma `$ still further beyond a second critical value $`\gamma _U`$, the concentration “jumps” to a lower value and the system returns to a state of monostability.
The preceding ideas lead to a plausible method whereby the system may be experimentally probed for bistability. We envision that $`\alpha `$ is fixed by the transcription rate and DNA binding site concentration, and that the degradation parameter $`\gamma `$ is an adjustable control. Beginning with a low initial value of $`\gamma =\gamma _0=5`$, we slowly increase the degradation rate. The effect is illustrated in Fig. 1B. We see that as $`\gamma `$ is slowly increased, the concentration of CI slowly decreases as the system tracks the fixed point. Then, at the moment when $`\gamma `$ exceeds $`\gamma _U`$, the concentration abruptly jumps to a lower value, followed by a further slow decrease. Now suppose we reverse course, and begin to decrease $`\gamma `$. Then the system will track along the lower fixed point until $`\gamma `$ drops below $`\gamma _L`$. At this point, the system will again jump, this time to a higher fixed-point value. The trademark of hysteresis is that the two jumps, one when increasing $`\gamma `$ and the other when decreasing, occur for different values of $`\gamma `$.
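The probing protocol just described can be mimicked in a few lines, reusing `f` from the sketch above (a schematic illustration; the sweep range and step sizes are our choices):

```python
import numpy as np

def track(alpha, gammas, x0, dt=1e-3, steps=4000):
    """Relax x to the nearby stable fixed point at each gamma in turn."""
    x, trace = x0, []
    for gam in gammas:
        for _ in range(steps):               # forward-Euler relaxation
            x += dt*f(x, alpha, gam)
        trace.append(x)
    return np.array(trace)

up = np.linspace(5.0, 25.0, 101)
x_up = track(50.0, up, x0=1.3)               # slowly increase degradation
x_dn = track(50.0, up[::-1], x0=x_up[-1])    # then reverse course
# x_up and x_dn differ between the two bifurcation points: hysteresis
```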
As is well-known, the full operator region of $`\lambda `$ phage contains three sites. We turn briefly to the effect of the additional site OR1 on the above network. In order to incorporate its effect, Eq. (1) must be generalized to account for additional equilibrium reactions. This generalization amounts to the incorporation of dimer binding to OR1 , and permutations of multiple binding possibilities at the three operator sites. Then, using known relationships between the cooperative binding rates, the above steps can be repeated and an equation analogous to Eq. (7) constructed. We obtain
$$\dot{x}=\frac{\alpha (2x^2+50x^4)}{25+29x^2+52x^4+4x^6}-\gamma x+1$$
(8)
As can be seen, the addition of OR1 has the effect of changing the first term on the right-hand side of the equation. While this augmentation does not affect the qualitative features of the above discussion, one important quantitative difference is depicted in Fig. 1B. In this figure, we see that the addition of OR1 has a large effect on the bistability region, increasing the overall size of the region by roughly an order of magnitude. Additionally, the model predicts that, while the drop in the concentration of repressor at the first bifurcation point will be approximately the same in both cases, the jump to the higher concentration will be around five times greater in the system containing OR1. Finally, since one effect of a larger bistable region is to make the switching mechanism more robust to noise, these results are of notable significance in the context of the lysogeny-to-lysis switching of $`\lambda `$ phage.
## Additive Noise
We now focus on parameter values leading to bistability, and consider how an additive external noise source affects the production of repressor. Physically, we take the dynamical variable $`x`$ described above to represent the repressor concentration within a colony of cells, and consider the noise to act on many copies of this colony. In the absence of noise, each colony will evolve identically to one of the two fixed points, as discussed above. The presence of a noise source will at times modify this simple behavior, whereby colony-to-colony fluctuations can induce novel behavior.
An additive noise source alters the “background” repressor production. As an example, consider the effect of a randomly varying external field on the biochemical reactions. The field could, in principle, impact the individual reaction rates , and since the rate equations are probabilistic in origin, its influence enters statistically. We posit that such an effect will be small and can be treated as a random perturbation to our existing treatment; we envision that events induced will affect the basal production rate, and that this will translate to a rapidly varying background repressor production. In order to introduce this effect, we generalize the aforementioned model such that random fluctuations enter Eq. (8) linearly,
$$\dot{x}=f(x)+\xi (t)$$
(9)
where $`f(x)`$ is the right-hand side of Eq. (8), and $`\xi (t)`$ is a rapidly fluctuating random term with zero mean ($`<\xi (t)>=0`$). In order to encapsulate the rapid random fluctuations, we make the standard requirement that the autocorrelation be “$`\delta `$-correlated”, i.e., the statistics of $`\xi (t)`$ are such that $`<\xi (t)\xi (t^{\prime })>=D\delta (t-t^{\prime })`$, with $`D`$ proportional to the strength of the perturbation.
Eq. (9) can be rewritten as
$$\dot{x}=-\frac{\partial \varphi (x)}{\partial x}+\xi (t)$$
(10)
where we introduce the potential $`\varphi (x)`$, which is simply the negative of the integral of $`f(x)`$. $`\varphi (x)`$ can be viewed as an “energy landscape”, whereby $`x`$ is considered the position of a particle moving in the landscape. One such landscape is plotted in Fig. 2A. Note that the stable fixed values of repressor concentration correspond to the minima of the potential $`\varphi `$ in Fig. 2A, and the effect of the additive noise term is to cause random kicks to the particle (system state point) lying in one of these minima. On occasion, a sequence of kicks may enable the particle to escape a local minimum and reside in a new valley.
To solve Eq. (10), we introduce the probability distribution $`P(x,t)`$, which is effectively the probability of finding the system in a state with concentration $`x`$ at time $`t`$. Given Eq. (10), a Fokker-Planck (FP) equation for $`P(x,t)`$ can be constructed
$$\partial _tP(x,t)=-\partial _x\left(f(x)P(x,t)\right)+\frac{D}{2}\partial _x^2P(x,t)$$
(11)
We focus here on the value of the steady-state mean (ssm) concentration. To this end, we first solve for the steady-state distribution, obtaining
$$P_s(x)=Ae^{-\frac{2}{D}\varphi (x)}$$
(12)
where $`A`$ is a normalization constant determined by requiring the integral of $`P_s(x)`$ over all $`x`$ be unity. In Fig. 2B, we plot $`P_s(x)`$, corresponding to the landscape of Fig. 2A, for two values of the noise strength $`D`$. It can be seen that for the smaller noise value the probability is distributed around the lower concentration of repressor, while for the larger noise value the probability is split and distributed around both concentrations. This is consistent with our conceptual picture of the landscape: low noise will enable only transitions from the upper state to the lower state as random kicks are not sufficient to climb the steep barrier from the lower state, while high noise induces transitions between both of the states. Additionally, the larger noise value leads to a spreading of the distribution, as expected.
Using the steady-state distribution, the steady-state mean (ssm) value of $`x`$, $`<x>_{ss}`$, is given by
$$<x>_{ss}=\int _0^{\mathrm{\infty }}xAe^{-\frac{2}{D}\varphi (x)}dx$$
(13)
In Fig. 2C, we plot the ssm concentration as a function of $`D`$, obtained by numerically integrating Eq. (13) and transforming from the dimensionless variable $`x`$ to repressor concentration. It can be seen that the ssm concentration increases with $`D`$, corresponding to the increasing likelihood of populating the upper state, as discussed previously with respect to Figs. 2A and B.
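A minimal sketch (ours) of this quadrature, working directly in the dimensionless variable $`x`$ of Eq. (8) rather than in nM:

```python
import numpy as np

alpha, gamma = 10.0, 5.5
x = np.linspace(0.0, 4.0, 4001)
h = alpha*(2*x**2 + 50*x**4)/(25 + 29*x**2 + 52*x**4 + 4*x**6) - gamma*x + 1.0

# phi(x) = -int_0^x h(u) du, built by trapezoidal integration
phi = -np.concatenate(([0.0], np.cumsum(0.5*(h[1:] + h[:-1])*np.diff(x))))

def ssm(D):
    """Steady-state mean <x>_ss of Eq. (13)."""
    w = np.exp(-2.0*(phi - phi.min())/D)   # shifting phi avoids overflow
    w /= np.trapz(w, x)                    # fixes the normalization A
    return np.trapz(x*w, x)

for D in (0.04, 0.1, 0.4):
    print(D, ssm(D))                       # mean rises with noise strength
```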
Figure 2C indicates that the external noise can be used to control the ssm concentration. As a candidate application, consider the following protein switch. Given parameter values leading to the landscape of Fig. 2A, we begin the switch in the “off” position by tuning the noise strength to a very low value. This will cause a high population in the lower state, and a correspondingly low value of the concentration. Then at some time later, consider pulsing the system by increasing the noise to some large value for a short period of time, followed by a decrease back to the original low value. The pulse will cause the upper state to become populated, corresponding to a concentration increase and a flipping of the switch to the “on” position. As the pulse quickly subsides, the upper state remains populated as the noise is not of sufficient strength to drive the system across either barrier (on relevant time scales). To return the switch to the off position, the upper-state population needs to be decreased to a low value. This can be achieved by applying a second noise pulse of intermediate strength. This intermediate value is chosen large enough so as to enhance transitions to the lower state, but small enough as to remain prohibitive to upper-state transitions.
Figure 2D depicts the time evolution of the switching process for noise pulses of strengths $`D=1.0`$ and $`D=0.05`$. Initially, the concentration begins at a level of $`\text{[CI]}=10\text{nM}`$, corresponding to a low noise value of $`D=0.01`$. After six hours in this low state, a $`30`$-minute noise pulse of strength $`D=1.0`$ is used to drive the concentration to a value of $`\text{[CI]}\approx 58\text{nM}`$. Following this burst, the noise is returned to its original value. At $`11`$ hours, a second $`90`$-minute pulse of strength $`D=0.05`$ is used to return the concentration to its original value.
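Such a protocol can be mimicked by a direct Euler–Maruyama integration of Eq. (9); in the sketch below (ours) the pulse times and strengths follow Fig. 2D only schematically, in dimensionless time:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, gamma = 10.0, 5.5

def h(x):
    """Right-hand side of Eq. (8)."""
    return alpha*(2*x**2 + 50*x**4)/(25 + 29*x**2 + 52*x**4 + 4*x**6) - gamma*x + 1.0

def D_of_t(t):
    """Noise schedule: a strong pulse switches on, a weaker one switches off."""
    if 6.0 <= t < 6.5:
        return 1.0
    if 11.0 <= t < 12.5:
        return 0.05
    return 0.01

dt, x, xs = 1e-3, 0.2, []
for i in range(18000):
    x += h(x)*dt + np.sqrt(D_of_t(i*dt)*dt)*rng.normal()  # Euler-Maruyama step
    x = max(x, 0.0)                        # concentrations stay non-negative
    xs.append(x)
```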
## Multiplicative Noise
We now consider the effect of a noise source which alters the transcription rate. Although transcription is represented by a single biochemical reaction, it is actually a complex sequence of reactions , and it is natural to assume that this part of the gene regulatory sequence is likely to be affected by fluctuations of many internal or external parameters. We vary the transcription rate by allowing the parameter $`\alpha `$ in Eq. (8) to vary stochastically, i.e., we set $`\alpha \alpha +\xi (t)`$. In this manner, we obtain an equation describing the evolution of the protein concentration $`x`$
$$\dot{x}=h(x)+\xi (t)g(x)$$
(14)
where $`h(x)`$ is the right-hand side of Eq. (8), and
$$g(x)\equiv \frac{2x^2+50x^4}{25+29x^2+52x^4+4x^6}$$
(15)
Thus, in this case, the noise is multiplicative, as opposed to additive, as in the previous case.
Qualitatively, we can use the bifurcation plot of Fig. 3A to anticipate one effect of allowing the parameter $`\alpha `$ to fluctuate. Such a bifurcation plot is yet another way of depicting the behavior seen in Fig. 1A; it can be seen that for certain values of $`\alpha `$ there is one unique steady-state value of repressor concentration, and that for other values there are three. To incorporate fluctuations, if we envision $`\alpha `$ to stochastically vary in the bistable region of Fig. 3A, we notice that the steep top branch implies the corresponding fluctuations in repressor concentration will be quite large. This is contrasted with the flat lower branch, where modest fluctuations in $`\alpha `$ will induce small variations. In order to verify this observation quantitatively, we simulated Eq. (14), the results of which are presented in Fig. 3B. Beginning with repressor concentration equal to its upper value of approximately $`500`$ nM, we notice that the immediate fluctuations are quite large even though $`\alpha `$ varies by only a few percent (Fig. 3A). Then, at around $`700`$ minutes, the concentration quickly drops to its lower value, indicating that the fluctuations envisioned in Fig. 3A were sufficient to drive the repressor concentration to the dotted line of Fig. 3A and off the upper branch (across the unstable fixed point). The final state is then one of very small variation, as anticipated.
As in the previous section, the steady-state probability distribution is obtained by transforming Eq. (14) to an equivalent Fokker-Planck equation ,
$$\partial _tP(x,t)=-\partial _x\left[\left(h(x)+\frac{D}{2}g(x)g^{\prime }(x)\right)P(x,t)\right]+\frac{D}{2}\partial _x^2\left[g^2(x)P(x,t)\right]$$
(16)
where the prime denotes the derivative of $`g(x)`$ with respect to $`x`$. We again solve for the steady-state distribution, obtaining
$$P_s(x)=Be^{-\frac{2}{D}\varphi _m(x)}$$
(17)
As before, the steady-state distribution can be used to obtain the ssm concentration.
Although not originating from a deterministic equation like that of Eq. (7), the function $`\varphi _m(x)`$ in Eq. (17) can still be viewed as a potential. We now consider parameter values leading to one such landscape in Fig. 3C. This landscape implies that we will have two steady-state repressor concentrations of approximately $`5`$ and $`1200`$ nM. This large difference is due to the largeness of the parameter $`\alpha `$, implying that repressor “induced” transcription amplifies the basal rate by a large amount. (Since $`d_T`$ enters in the numerator of the definition of $`\alpha `$, one could construct such a system experimentally with a high copy-number plasmid). This feature suggests that multiplicative noise could be used to amplify protein production, as described in the following example. We begin with zero protein concentration and very low noise strength $`D`$, leading to a highly populated lower state and low overall concentration. Then, at some later time, we pulse the system by increasing $`D`$ for some short interval. This will cause the upper state to become quickly populated as it is easy to escape the shallow valley of the landscape and move into the large basin. In Fig. 3D, we plot the temporal evolution of the mean repressor concentration obtained from the simulation of Eq. (14). We see that the short noise pulse at around $`20`$ hours indeed causes the concentration to increase abruptly by over three orders of magnitude, making this type of amplification an interesting case for experimental exploration.
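For multiplicative noise, a Stratonovich-consistent integrator is appropriate; below is a minimal Heun-type sketch (ours) of Eq. (14) with a brief noise pulse, echoing Fig. 3D only qualitatively:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, gamma = 100.0, 8.5

def g(x):                                  # Eq. (15)
    return (2*x**2 + 50*x**4)/(25 + 29*x**2 + 52*x**4 + 4*x**6)

def h(x):                                  # right-hand side of Eq. (8)
    return alpha*g(x) - gamma*x + 1.0

dt, x = 1e-4, 0.15                         # start in the shallow lower well
for i in range(200000):
    D = 1.0 if 2.0 <= i*dt < 2.5 else 1e-3 # schematic pulse schedule
    dW = np.sqrt(D*dt)*rng.normal()
    xp = x + h(x)*dt + g(x)*dW             # Euler predictor
    x += h(x)*dt + 0.5*(g(x) + g(max(xp, 0.0)))*dW  # Heun (Stratonovich) step
    x = max(x, 0.0)
print(x)                                   # ends near the upper fixed point
```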
## Discussion
From an engineering perspective, the control of cellular function through the design and manipulation of genetic regulatory networks is an intriguing possibility. In this paper, we have shown how external noise can be used to control the dynamics of a regulatory network, and how such control can be practically utilized in the design of a genetic switch and/or amplifier. Although the main focus of this work was on a network derived from the promotor region of $`\lambda `$ phage, our approach is generally applicable to any autoregulatory network where a protein-multimer acts as a transcription factor.
An important element of our control scheme is bistability. This implies that a necessary criterion in the design of a noise-controlled applet be that the network is poised in a bistable region. This could potentially be achieved by methods such as the utilization of a temperature-dependent repressor protein, DNA titration, SSRA tagging, or pH control.
Physically, the noise might be generated using an external field. Importantly, it has been claimed that electromagnetic fields can exert biological effects . In addition, recent theoretical and experimental work suggests a possible mechanism whereby an electric field can alter an enzyme-catalyzed reaction. These findings suggest that, although there is global charge neutrality, an external field can interact with local dipoles which arise through transient conformational changes or in membrane transport.
Current gene therapy techniques are limited in that transfected genes are typically either in an “on” or “off” state. However, for the effective treatment of many diseases, the expression of a transfected gene needs to be regulated in some systematic fashion. Thus, the development of externally-controllable noise-based switches and amplifiers for gene expression could have significant clinical implications.
ACKNOWLEDGEMENTS. We respectfully acknowledge insightful discussions with Kurt Wiesenfeld, Farren Issacs, Tim Gardner, and Peter Jung. This work was supported by the Office of Naval Research (Grant N00014-99-1-0554) and the U.S. Department of Energy.
## Figure Captions
FIG. 1. Bifurcation plots for the variable $`x`$ and concentration of $`\lambda `$ repressor. (A) Graphical depiction of the fixed points of Eq. (7), generated by setting $`\dot{x}=0`$ and plotting $`\alpha x^2/(1+2x^2+5x^4)`$ and the line $`\gamma x-1`$. As the slope $`\gamma `$ is increased, the system traverses through a region of multistability and returns to a state of monostability. (B) Hysteresis loops for the mutant and nonmutant systems obtained by setting $`\dot{x}=0`$ in Eqs. (7) and (8). Beginning with concentrations of $`35`$ nM for the mutant system and $`85`$ nM for the nonmutant system, we steadily increase the degradation parameter $`\gamma `$. In both systems, the concentration of repressor slowly decreases until a bifurcation point. In the mutant (nonmutant) system, the repressor concentration abruptly drops to a lower value at $`\gamma \approx 16`$ ($`\gamma \approx 24`$). Then, upon reversing course and decreasing $`\gamma `$, the repressor concentration increases slowly until $`\gamma `$ encounters a second bifurcation point at $`\gamma \approx 14`$ ($`\gamma \approx 6`$), whereby the concentration immediately jumps to a value of $`15`$ nM (mutant) or $`70`$ nM (nonmutant). The subsequent hysteresis loop is approximately $`10`$ times larger in the nonmutant case. Parameter values are $`\alpha =50`$, $`K_1=0.05\text{nM}^{-1}`$, and $`K_2=0.026\text{nM}^{-1}`$ for the mutant system, and $`K_2=0.033\text{nM}^{-1}`$ for the nonmutant system .
FIG. 2. Results for additive noise with parameter values $`\alpha =10`$ and $`\gamma =5.5`$. (A) The energy landscape. Stable equilibrium concentration values of Eq. (8) correspond to the valleys at $`\text{[CI]}=10`$ and $`200`$ nM, with an unstable value at $`\text{[CI]}=99`$ nM. (B) Steady-state probability distributions for noise strengths of $`D=0.04`$ (solid line) and $`D=0.4`$ (dotted line). (C) The steady-state equilibrium protein concentration plotted versus noise strength. The concentration increases as the noise causes the upper state of (A) to become increasingly populated. (D) Simulation of Eq. (9) demonstrating the utilization of external noise for protein switching. Initially, the concentration begins at a level of $`\text{[CI]}=10`$ nM corresponding to a low noise value of $`D=0.01`$. After six hours, a large $`30`$-minute noise pulse of strength $`D=1.0`$ is used to drive the concentration to $`58`$ nM. Following this pulse, the noise is returned to its original value. At $`11`$ hours, a smaller $`90`$-minute noise pulse of strength $`D=0.04`$ is used to return the concentration to near its original value. The simulation technique is that of Ref. .
FIG. 3. Results for multiplicative noise. (A) Bifurcation plot for the repressor concentration versus the model parameter $`\alpha `$. The steep upper branch implies that modest fluctuations in $`\alpha `$ will cause large fluctuations around the upper fixed value of repressor concentration, while the flat lower branch implies small fluctuations about the lower value. (B) The evolution of the repressor concentration in a single colony, obtained by simulation of Eq. (14). Relatively small random variations of the parameter $`\alpha `$ ($`\sim `$ 6%) induce large fluctuations in the steady-state concentration until around $`700`$ minutes and small fluctuations thereafter. (C) Energy landscape for parameter values $`\alpha =100`$ and $`\gamma =8.5`$. (D) Large-scale amplification of the protein concentration obtained by simulation of Eq. (14). At $`20`$ hours, a $`60`$-minute noise pulse of strength $`D=1.0`$ is used to quickly increase the protein concentration by over three orders of magnitude. The parameter values are the same as those in (C).
# Perturbative and non-perturbative aspects of moments of the thrust distribution in 𝑒⁺𝑒⁻ annihilation

Footnote 1: Research supported by the EC program “Training and Mobility of Researchers”, Network “QCD and Particle Structure”, contract ERBFMRXCT980194.
## 1 Introduction
The systematic analysis of event-shape variables in $`e^+e^{-}`$ annihilation has become an active research field in recent years. High quality experimental data in a wide range of center-of-mass energies $`Q`$ (from $`10`$ up to $`200\mathrm{GeV}`$) provide an opportunity for precision tests of the theory. On the theoretical side, the simple set-up of $`e^+e^{-}`$ annihilation makes it possible to probe the QCD vacuum directly and to learn about the interface between perturbative and non-perturbative physics.
Empirically, it has been known for quite some time that perturbative approximations to moments of event-shape variables are consistent with experimental data only upon including additive power-corrections. A classical example is the case of the thrust, defined by
$$T=\frac{\sum_i\left|\vec{p}_i\cdot\vec{n}_T\right|}{\sum_i|\vec{p}_i|}$$
(1)
where the summation is over all the final-state particles with momenta $`\vec{p}_i`$. The thrust axis $`\vec{n}_T`$ is a unit vector chosen, for a given event, such that $`T`$ is maximized. Below we shall use the variable $`t\equiv 1-T`$, which vanishes in the two-jet limit.
For the average thrust, as for several other average event-shape variables, one finds $`1/Q`$ corrections, $`<t>\simeq <t>_{\text{PT}}+\lambda /Q`$, where $`\lambda `$ is some hadronic scale. These corrections are quite large at LEP energies. The case of average event-shape variables was intensively studied both theoretically and experimentally.
Here we investigate perturbative and non-perturbative aspects of higher moments of 1−thrust, $`<t^m>`$. This subject was addressed before, but it has not yet received the attention it deserves. Whereas some experimental results are already available, many more can be extracted from stored LEP data. This analysis is worthwhile: alongside interesting theoretical insights, the investigation of higher moments of event-shape variables may lead to additional precision measurements of $`\alpha _s`$.
The existence of a well-defined perturbative approximation to event-shape variables is guaranteed since they are infrared and collinear safe. However, the remaining sensitivity of these observables to soft and collinear emission is usually quite high, and it shows up in the form of large perturbative coefficients. For event-shape distributions close to the two-jet limit (the Sudakov region) the leading logarithms in the perturbative coefficients can be resummed to all orders. Outside this special part of phase-space, and in particular when moments of event-shape variables are considered, only the leading and next-to-leading perturbative terms are known. Since the apparent convergence of these perturbative expansions in a standard renormalization-scheme and scale is slow, resummation seems necessary. A relevant source of large coefficients is renormalon diagrams, which reflect the effect of the running-coupling.
One can imagine, in analogy with the skeleton expansion in the Abelian theory, a possibility of reorganizing the perturbative expansion such that all diagrams which correspond to dressing a single gluon are summed first, then diagrams with two dressed gluons, etc. Formally, a systematic expansion of this type has not yet been shown to exist. However, in the spirit of the Brodsky-Lepage-Mackenzie (BLM) scale-setting method, one can attempt to identify and resum the single dressed gluon (SDG) terms to all orders. This is the basis of the renormalon resummation methodology, which has been applied in many different QCD examples. In the context of event-shape variables in $`e^+e^{-}`$ annihilation we mention the case of the longitudinal cross-section and that of the average thrust.
Whatever resummation procedure is applied, it is clear that perturbation theory alone cannot predict the observed values of event-shape variables: the perturbative calculation uses quark and gluon fields while the measurement is of hadrons. The effect of hadronization on the observables can be modeled by Monte-Carlo simulations. However, in order to gain some understanding of the nature of confinement in QCD it is preferable to analyze raw hadronic data, and parameterize non-perturbative effects in the simplest possible way.
It was noticed that hadronization does not involve large momentum transfer. Therefore perturbative results calculated in terms of partons can be almost directly compared with the data. Due to the sensitivity of the observables to soft physics some modification of the perturbative result is still necessary. At the perturbative level, infrared sensitivity shows up in the form of infrared renormalons: the coefficients increase fast, asymptotically as $`n!`$, and have a constant sign pattern. Consequently, the series is non-summable. Different resummation prescriptions, or “regularizations” of the perturbative sum, differ by power-suppressed terms. This ambiguity must be compensated at the non-perturbative level. In the SDG approximation, one can calculate the perturbative sum and extract the form of its ambiguity. In this way a perturbative calculation can be used to identify the parametric dependence of non-perturbative corrections on the external scale $`Q`$. The magnitude of these power-corrections cannot be calculated, but assuming some universality properties it is possible to estimate it for a large class of observables based on experimental data for one of them.
The SDG approximation, as applied in the case of the average thrust, provides a systematic framework for the analysis of perturbative running-coupling effects together with the related power-corrections. These two aspects of improving the truncated perturbative result naturally complement each other: they reflect the same physical phenomenon.
It is clear that any observable, including the moments of $`1`$ thrust considered here, may eventually depend on other non-perturbative effects, which are unrelated to ambiguities of perturbation theory. We assume that these effects are not important. Since the power-corrections identified from the ambiguities of the perturbative expansion have a definite $`Q`$ dependence, this assumption can be confronted with experimental data.
It should be emphasized that, contrary to the Operator Product Expansion (OPE), the SDG renormalon approach lacks the rigor of a systematic expansion: there is no small parameter which distinguishes the contribution of multiple emission from that of a single emission. One should be aware of the possibility that in certain cases the leading power-corrections cannot be analyzed in the framework of the SDG calculation, as they are associated with soft gluon emission from configurations of three or more hard partons. In particular, in the case of the higher moments of 1−thrust, such effects may yield $`\alpha _s(Q^2)/Q`$ power-corrections. We shall return to this important issue in the conclusions.
Further subtleties are related to the fact that event-shape variables are not completely inclusive: they are sensitive to certain details of the final state, while the resummation procedure we use is completely inclusive with respect to the fragmentation products of the gluon.
The purpose of this work is to examine, in the framework of the SDG approximation, running-coupling effects and power-corrections to higher moments of 1−thrust. We begin, in section 2, by analyzing the thrust distribution. As in the analysis of the average thrust, we use the “massive” gluon dispersive approach. In this framework resummation as well as parameterization of power-corrections are obtained from a so-called characteristic function. After explaining the main assumptions of the SDG model we calculate the characteristic function for the thrust distribution. We then devote a short discussion to the domain of applicability of the SDG result as a function of the thrust. Next, in section 3 we evaluate the characteristic functions for the various moments of 1−thrust and study their properties. In particular we extract the characteristic invariant mass of gluons contributing to the various moments and compare the significance of running-coupling effects to other perturbative contributions. We also quantify the effect of the non-inclusive contribution at the next-to-leading order. Finally, we extract the leading power-corrections implied for the various moments of 1−thrust by the ambiguity of the perturbative SDG result, and identify the regions of phase-space from which they emerge. The conclusions are given in section 4.
## 2 The thrust distribution in the single dressed gluon model
The differential cross section with respect to the thrust $`d\sigma /dt`$ can be calculated in perturbation theory in the single dressed gluon (SDG) approximation as follows
$$\frac{d\sigma}{dt}(t)\Big|_{\text{SDG}}=C_F\int_0^1\frac{d\epsilon}{\epsilon}\,\overline{a}_{\text{eff}}(\epsilon Q^2)\,\dot{\mathcal{F}}(\epsilon,t)=C_F\int_0^1\frac{d\epsilon}{\epsilon}\,\overline{\rho}(\epsilon Q^2)\left[\mathcal{F}(\epsilon,t)-\mathcal{F}(0,t)\right]$$
(2)
where the integration over $`\epsilon \equiv \mu ^2/Q^2`$ corresponds to inclusive summation over final states into which the emitted gluon fragments. In this calculation a final state of invariant mass $`\mu ^2`$ is characterized by a single universal (i.e. observable independent) function $`\overline{\rho }(\mu ^2)`$ which is identified as the time-like discontinuity of the coupling $`\overline{a}(k^2)\equiv \overline{\alpha }_s(k^2)/\pi `$, where the bar stands for a specific renormalization-scheme. The two integrals in (2) are related by integration by parts:
$$\dot{\mathcal{F}}(\epsilon,t)\equiv-\epsilon\,\frac{d}{d\epsilon}\,\mathcal{F}(\epsilon,t),$$
(3)
and the “time-like coupling” $`\overline{a}_{\text{eff}}(\mu ^2)`$ obeys
$$\mu ^2\frac{d\overline{a}_{\text{eff}}(\mu ^2)}{d\mu ^2}=\rho (\mu ^2).$$
(4)
The thrust characteristic function $`\mathcal{F}(\epsilon,t)`$ is obtained from the following integral over phase-space,
$$\mathcal{F}(\epsilon,t)=\int_{\mathrm{phase\;space}}dx_1\,dx_2\;\mathcal{M}(x_1,x_2,\epsilon)\,\delta\left(1-T(x_1,x_2,\epsilon)-t\right)$$
(5)
where $`C_Fa\,\mathcal{M}`$ is the squared tree-level matrix element for the production of a quark–anti-quark pair and a gluon of virtuality $`\mu ^2\equiv \epsilon Q^2`$, and
$$\mathcal{M}(x_1,x_2,\epsilon)=\frac{1}{2}\left[\frac{(x_1+\epsilon)^2+(x_2+\epsilon)^2}{(1-x_1)(1-x_2)}-\frac{\epsilon}{(1-x_1)^2}-\frac{\epsilon}{(1-x_2)^2}\right].$$
(6)
The integration variables $`x_{1,2}`$ represent the energy fractions of the two quarks in the center-of-mass frame. The energy fraction of the gluon is $`x_3=2-x_1-x_2`$.
For the calculation of the characteristic function we shall use the following definition of the thrust
$$T=\frac{\sum_i\left|\vec{p}_i\cdot\vec{n}_T\right|}{\sum_iE_i}=\frac{\sum_i\left|\vec{p}_i\cdot\vec{n}_T\right|}{Q}.$$
(7)
In the case of three partons (a quark, an anti-quark and a “massive” gluon) it yields
$$1-T(x_1,x_2,\epsilon)=\mathrm{min}\left\{1-x_1,\;1-x_2,\;1-\sqrt{(2-x_1-x_2)^2-4\epsilon}\right\}.$$
(8)
Note that in the definition (7) the denominator is modified with respect to the standard one (1), $`\sum_i|\vec{p}_i|\to \sum_iE_i`$, in a way that does not change the observable for massless partons. The virtual (“massive”) gluon is understood to fragment eventually into massless partons. This modification ensures that the value of the thrust calculated with a “massive” gluon will be correct, provided that all the (massless) partons produced in the process of the gluon fragmentation end up in the same hemisphere with respect to $`\vec{n}_T`$. The inclusive calculation performed here is justified only if the gluon fragmentation is predominantly collinear, in which case fragmentation into opposite hemispheres will be rare. The discrepancy between the inclusive “massive gluon” calculation and the full non-inclusive calculation has been addressed before by several authors. This issue will be further discussed in the next section in the context of the moments of 1−thrust.
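To make the massive-gluon kinematics of eqs. (7) and (8) concrete, here is a minimal numerical sketch (not from the paper; the function names are ours, and the phase-space test is our reading of the boundaries described below for fig. 1):

```python
import numpy as np

def one_minus_thrust(x1, x2, eps):
    """t = 1 - T for a q qbar g* final state, eq. (8): the thrust axis
    points along the parton carrying the largest momentum fraction."""
    x3 = 2.0 - x1 - x2                    # gluon energy fraction
    pg = np.sqrt(x3**2 - 4.0*eps)         # gluon momentum fraction
    return 1.0 - max(x1, x2, pg)

def in_phase_space(x1, x2, eps):
    # assumed Dalitz boundaries: the curved soft-gluon line
    # (1-x1)(1-x2) = eps and the straight hard-gluon line x1+x2 = 1-eps
    return (1-x1)*(1-x2) >= eps and x1 + x2 >= 1.0 - eps and max(x1, x2) <= 1.0

eps = 0.05
print(in_phase_space(0.8, 0.7, eps))      # True
print(one_minus_thrust(0.8, 0.7, eps))    # 0.2: the quark with x1 = 0.8 leads
```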
The last ingredient for the calculation of the characteristic function is the phase-space<sup>*</sup><sup>*</sup>*The reader is referred to the literature for more details on the calculation of the phase-space boundaries.. Fig. 1 shows the three-parton phase-space in the case of a gluon with a “mass” of $`\mu ^2=\epsilon Q^2=0.1\,Q^2`$.
The external boundaries of phase-space in the figure correspond to the softest gluons (the upper curved line) and to the hardest ones (the lower linear line). The dashed lines represent the separation of phase-space according to which particle carries the largest momentum and thus determines the thrust axis (cf. eq. (8)): in the upper left region $`T=x_2`$, in the upper right region $`T=x_1`$ and in the lower region $`T=\sqrt{x_3^2-4\epsilon}`$.
It is important to note that the two upper regions of phase-space in the figure (where one of the primary quarks carries the largest momentum, $`T=x_{1,2}`$) have three corners with a definite physical meaning for $`\epsilon \ll 1`$:
(i) the collinear limit corresponding to a two-jet configuration, with $`t=ϵ`$. This is the lowest possible value of $`t`$, given $`ϵ`$.
(ii) the three parton symmetric limit, corresponding to a three-jet configuration, $`t=\left(2\sqrt{1+3\epsilon}-1\right)/3`$. This is the highest possible value of $`t`$, given $`\epsilon`$.
(iii) the large-angle soft gluon limit, with $`t=\sqrt{ϵ}`$. Here gluons are emitted close to $`90`$ degrees with respect to the quark–anti-quark direction.
It is clear from fig. 1 that in order to perform the integral in (5) one has to treat separately values of $`t`$ smaller than $`\sqrt{\epsilon}`$, where the soft gluon phase-space boundary (the curved line in fig. 1) is relevant, vs. $`t`$ larger than $`\sqrt{\epsilon}`$, where it is not. As explained above, the value $`t\simeq \sqrt{\epsilon}`$ corresponds, for small $`\epsilon`$, to the situation where the gluon is soft and emitted at a large angle with respect to the quark–anti-quark direction. This part of phase-space is the source of the $`1/Q`$ power-corrections for the average thrust, and as we shall see in the next section, also for the leading, though much suppressed, infrared power-corrections for higher moments of 1−thrust, at the SDG level. Note that other limits of phase-space, as well as the squared matrix element (6) itself, do not contain any $`\sqrt{\epsilon}`$ terms. In particular, in the collinear limit $`t\to \epsilon`$.
Evaluating (5) we obtain the characteristic function for the thrust distribution,
$$\mathcal{F}(\epsilon,t)=\left\{\begin{array}{lcl}\mathcal{F}_Q^l(\epsilon,t)+\mathcal{F}_G(\epsilon,t)&&\epsilon<t<\sqrt{\epsilon}\\ \mathcal{F}_Q^h(\epsilon,t)+\mathcal{F}_G(\epsilon,t)&&\sqrt{\epsilon}<t<\frac{2}{3}\sqrt{1+3\epsilon}-\frac{1}{3}\end{array}\right.$$
(9)
where the dominant contribution $`\mathcal{F}_Q(\epsilon,t)`$ corresponds to the phase-space regions where one of the primary quarks carries the largest momentum ($`T=x_{1,2}`$) and $`\mathcal{F}_G(\epsilon,t)`$ corresponds to the region where the gluon momentum is the largest (see fig. 1). The superscripts $`l`$ and $`h`$ on $`\mathcal{F}_Q(\epsilon,t)`$ denote low and high $`t`$ values, respectively. These functions are given by
$$\mathcal{F}_Q^h(\epsilon,t)=-\frac{1}{t}\left[(1-t+\epsilon)^2+(1+\epsilon)^2\right]\mathrm{ln}\frac{t}{q-t}+\left(3-2\frac{q}{t}+\frac{1}{2}\frac{1}{t}+\frac{1}{2}t-q\right)+\left(4-2\frac{q}{t}+3\frac{1}{t}-\frac{q}{t^2}+\frac{1}{q-t}\right)\epsilon$$

$$\mathcal{F}_Q^l(\epsilon,t)=-\frac{1}{t}\left[(1-t+\epsilon)^2+(1+\epsilon)^2\right]\mathrm{ln}\frac{\epsilon}{t(q-t)}+\left(1-2\frac{q}{t}+\frac{1}{2}\frac{1}{t}-q\right)+\left(3\frac{1}{t}+\frac{1}{q-t}-\frac{q}{t^2}+2\frac{1}{t^2}+2-2\frac{q}{t}\right)\epsilon+\left(2+\frac{1}{2t}\right)\frac{\epsilon^2}{t^2}$$

(10)

$$\mathcal{F}_G(\epsilon,t)=\frac{1-t}{q^2}\left[\left((1+\epsilon)^2+(1+\epsilon-q)^2\right)\mathrm{ln}\frac{q-t}{t}+(2t-q)\left(q+\frac{\epsilon}{t}+\frac{\epsilon}{q-t}\right)\right]$$

where $`q\equiv \sqrt{(1-t)^2+4\epsilon}`$.
Eq. (2) with $`\mathcal{F}(\epsilon,t)`$ given by (9) and (10) can now be used to calculate the SDG contribution to the thrust distribution. The leading order perturbative result can be recovered from (2) by assuming a constant coupling $`\overline{a}_{\text{eff}}(\mu ^2)\equiv \mathrm{const}`$. This yields
$$\frac{d\sigma}{dt}(t)\Big|_{\mathrm{LO}}=C_F\,\overline{a}_{\text{eff}}(\mu^2)\int_0^1\frac{d\epsilon}{\epsilon}\,\dot{\mathcal{F}}(\epsilon,t)=C_F\,\overline{a}_{\text{eff}}(\mu^2)\,\mathcal{F}(0,t)$$
(11)
with
$$\mathcal{F}(0,t)=\frac{3t^2-3t+2}{t(1-t)}\,\mathrm{ln}\frac{1-2t}{t}-\frac{3}{2}\,\frac{(1+t)(1-3t)}{t}$$
(12)
for $`t\le 1/3`$. Eq. (11) is the contribution to the thrust distribution of an on-shell gluon. The improved calculation (2) takes into account the emission of “massive” gluons that later dissociate. For a given thrust value, the gluon virtuality $`\mu ^2=\epsilon Q^2`$ can change in the following range:
$$\begin{array}{ccc}t<\frac{1}{3}\hfill & & \hfill 0<\epsilon<t\\ t>\frac{1}{3}\hfill & & \hfill \frac{3}{4}t^2+\frac{1}{2}t-\frac{1}{4}<\epsilon<t\end{array}$$
(13)
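The lower limit in the second line of (13) is just the inversion of the three-parton symmetric corner quoted above: squaring $`t+\frac{1}{3}=\frac{2}{3}\sqrt{1+3\epsilon}`$ gives

$$\left(\frac{3t+1}{2}\right)^2=1+3\epsilon\;\;\Longleftrightarrow\;\;\epsilon=\frac{3}{4}t^2+\frac{1}{2}t-\frac{1}{4},$$

so for $`t>1/3`$ only gluons virtual enough to open up this region of phase-space can contribute.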
Whereas the leading order result (11) depends on an a priori arbitrary choice of the renormalization scale $`\mu ^2`$, eq. (2) resums running-coupling effects to all orders, and is therefore renormalization-group invariantIn fact, to have full renormalization-group invariance we have to fix the scheme of $`\overline{a}`$. This requires a further assumption concerning the diagrams that contribute to the gluon fragmentation (“dressing the gluon”). Below we fix it according to the Abelian limit, in order to guarantee the correct resummation of the terms leading in $`\beta _0`$.. Note that performing the integral in eq. (2) can in turn be re-formulated as a specific choice of the renormalization point $`\mu `$ in (11) according to the BLM criterion. For the thrust distribution this representation seems rather cumbersome, since the BLM scale will depend on $`t`$ in a complicated way. We shall use the BLM formulation later on for the moments of 1−thrust.
In spite of the resummation achieved by eq. (2), since this calculation takes into account just a single gluon emission, its applicability is limited as follows:
(i) At “high” values of 1−thrust, $`t\gtrsim 1/3`$, only non-planar configurations, e.g. more than three hadron jets, contribute. The leading-order perturbative result (11) corresponding to three partons vanishes identically for $`t\ge 1/3`$ and the first non-vanishing contribution appears at the next order $`𝒪(a^2)`$. The resummation result (2) does not vanish above $`t=1/3`$. This fits the intuitive expectation: the fragmentation of the “massive” gluon is not restricted to the three parton plane. On the other hand, it is clear that in this region emission of two hard gluons from the primary quarks is important.
(ii) At low values of 1−thrust, $`C_Fa\,\mathrm{ln}^2(1/t)\gtrsim 1`$, the contribution of multiple emission of gluons which are both soft and collinear becomes dominant and must be resummed to all orders in perturbation theory. At even lower 1−thrust values, $`\beta _0a\,\mathrm{ln}(1/t)\simeq 1`$, this resummation breaks down since non-perturbative soft gluon emission becomes important.
In view of these facts, we can expect the SDG approximation to describe the thrust distribution only in some restricted range of intermediate thrust values. Since soft and collinear gluon resummation is complementary to dressing the gluon (2), the two resummation procedures can be combined to describe the thrust distribution in a wider range. The most promising approach to describe the differential distribution of event-shape variables in the Sudakov region (here small $`t`$) is based on introducing a non-perturbative “shape-function” (SF). The physical distribution is then obtained by convoluting this function with the perturbative result. Further analysis of the thrust distribution along these lines will appear in a separate publication. Here we proceed by analyzing moments of 1−thrust.
## 3 Moments of 1−thrust
Resummation of running-coupling effects in the SDG approximation was already demonstrated to be important for the average thrust. In particular, it was shown that the resummation significantly modifies the value of $`\alpha _s`$ extracted from experimental data. The application of the SDG approximation to the average thrust was justified by asserting that low $`t=1-T`$ values are suppressed in the average, and therefore the effect of multiple soft and collinear gluons should not be significant. In this respect the calculation of higher moments of 1−thrust, $`<t^m>`$, based on a SDG is even safer, since they have a stronger suppression of the low $`t`$ region. On the other hand, the contribution to $`<t^m>`$ from extremely high values of 1−thrust, $`t\gtrsim 1/3`$, becomes more significant as $`m`$ increases. For high enough $`m`$ this contribution must be dominant, making the SDG calculation unreliable.
Let us now assume that the contribution to $`<t^m>`$ is dominated by intermediate values of 1−thrust, where the SDG approximation to the distribution (eq. (2)) holds, and use it to calculate the $`m`$-th moment of 1−thrust according to
$$<t^m>\simeq<t^m>_{\text{SDG}}=\int\frac{d\sigma}{dt}(t)\Big|_{\text{SDG}}t^m\,dt=C_F\int t^m\,dt\int_0^1\frac{d\epsilon}{\epsilon}\,\overline{a}_{\text{eff}}(\epsilon Q^2)\,\dot{\mathcal{F}}(\epsilon,t)=C_F\int_0^1\frac{d\epsilon}{\epsilon}\,\overline{a}_{\text{eff}}(\epsilon Q^2)\,\dot{\mathcal{F}}_{<t^m>}(\epsilon),$$ (14)
where in the last step we changed the order of integration, defining
$$\mathcal{F}_{<t^m>}(\epsilon)\equiv\int\mathcal{F}(\epsilon,t)\,t^m\,dt.$$
(15)
In order to use the resummation formula (14) we have to specify the time-like coupling $`\overline{a}_{\text{eff}}(\mu ^2)`$. The renormalization-scheme defining the coupling $`\overline{a}`$ should be uniquely determined once an Abelian-like skeleton expansion is established in QCD. For the purpose of the current investigation we shall just use the one-loop approximation of the space-like coupling
$$\overline{a}(k^2)=\frac{1}{\beta _0}\frac{1}{\mathrm{ln}\left(k^2/\overline{\mathrm{\Lambda }}^2\right)}$$
(16)
with $`\overline{\mathrm{\Lambda }}`$ set such that $`\overline{a}`$ coincides with the Gell-Mann Low effective charge in the Abelian (large $`\beta _0`$) limit. This is realized, for example, if $`\overline{a}`$ is related to the $`\overline{\mathrm{MS}}`$ coupling by a scale-shift, i.e.
$$\overline{a}(k^2)=\frac{a_{\overline{\text{MS}}}(k^2)}{1-\frac{5}{3}\beta_0\,a_{\overline{\text{MS}}}(k^2)}.$$
(17)
The corresponding time-like coupling is given by
$$\overline{a}_{\text{eff}}(\mu^2)=\frac{1}{\beta_0}\left[\frac{1}{2}-\frac{1}{\pi}\mathrm{arctan}\left(\frac{1}{\pi}\mathrm{log}\frac{\mu^2}{\overline{\mathrm{\Lambda}}^2}\right)\right].$$
(18)
Using this coupling in (14), the terms which are leading in $`\beta _0`$ will be resummed correctly (ignoring the non-inclusive nature of the observable, which will be discussed below), while other terms which are sub-leading in $`\beta _0`$ will be neglected.
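The two couplings are easy to tabulate; the following minimal sketch (ours, in units where $`\overline{\mathrm{\Lambda }}=1`$ and with $`N_f=5`$ assumed) checks the infrared fixed-point of (18) and the fact that at large scales it merges with (16) up to the $`\pi ^2`$ terms that reappear in eq. (23) below:

```python
import numpy as np

beta0 = 11.0/12.0*3.0 - 5.0/6.0          # (11/12) C_A - (1/6) N_f = 23/12

def a_spacelike(k2):                      # eq. (16), Lambda-bar = 1
    return 1.0/(beta0*np.log(k2))

def a_eff_timelike(mu2):                  # eq. (18)
    return (0.5 - np.arctan(np.log(mu2)/np.pi)/np.pi)/beta0

print(a_eff_timelike(1e-8), 1.0/beta0)    # approaches the fixed point 1/beta0
mu2 = np.exp(20.0)                        # a very large scale, ln(mu^2) = 20
a = a_spacelike(mu2)
print(a_eff_timelike(mu2), a - (np.pi**2/3.0)*beta0**2*a**3)  # nearly equal
```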
Before proceeding with the SDG analysis it is worthwhile to examine the known next-to-leading order result. The next-to-leading order coefficients are calculated by numerical integration over the three- and four-parton phase-space. In $`\overline{\mathrm{MS}}`$ they are given by
$$<t>=0.7888\,C_F\,a_{\overline{\text{MS}}}(Q^2)+\left(-1.1570\,C_F^2+4.4708\,C_FC_A-0.8445\,C_FN_f\right)a_{\overline{\text{MS}}}^2(Q^2)$$
$$<t^2>=0.0713\,C_F\,a_{\overline{\text{MS}}}(Q^2)+\left(0.3073\,C_F^2+0.3280\,C_FC_A-0.0583\,C_FN_f\right)a_{\overline{\text{MS}}}^2(Q^2)$$
$$<t^3>=0.0112\,C_F\,a_{\overline{\text{MS}}}(Q^2)+\left(0.06877\,C_F^2+0.04973\,C_FC_A-0.00808\,C_FN_f\right)a_{\overline{\text{MS}}}^2(Q^2)$$
$$<t^4>=0.0022\,C_F\,a_{\overline{\text{MS}}}(Q^2)+\left(0.01622\,C_F^2+0.00989\,C_FC_A-0.00145\,C_FN_f\right)a_{\overline{\text{MS}}}^2(Q^2)$$
In order to correctly identify the terms that originate from the running-coupling, we use (17) to translate (3) to the $`\overline{a}`$ scheme. Then, expressing the $`N_f`$ dependence of the next-to-leading order coefficients in terms of $`\beta _0=(11/12)C_A-(1/6)N_f`$, as done in the “naive non-Abelianization” procedure, we obtain
$$<t>=C_F\left[0.7888\,\overline{a}(Q^2)+\left(3.7526\,\beta_0-1.1567\,C_F-0.1740\,C_A\right)\overline{a}^2(Q^2)\right]$$
$$<t^2>=C_F\left[0.0713\,\overline{a}(Q^2)+\left(0.2308\,\beta_0+0.3073\,C_F+0.00762\,C_A\right)\overline{a}^2(Q^2)\right]$$ (20)
$$<t^3>=C_F\left[0.0112\,\overline{a}(Q^2)+\left(0.02981\,\beta_0+0.06877\,C_F+0.005271\,C_A\right)\overline{a}^2(Q^2)\right]$$
$$<t^4>=C_F\left[0.0022\,\overline{a}(Q^2)+\left(0.005045\,\beta_0+0.01622\,C_F+0.00191\,C_A\right)\overline{a}^2(Q^2)\right]$$
Let us now look at the relative magnitudes of the different terms in the next-to-leading order coefficients in (3). Substituting the QCD values of $`C_F`$, $`C_A`$ and $`\beta _0`$ (for $`N_f=5`$) we obtain the numerical values summarized in table 1.
We find that the non-Abelian $`C_FC_A`$ component is always small. The relative significance of the $`C_F\beta _0`$ term associated with the running-coupling decreases with $`m`$: this term is absolutely dominant in the case of the average thrust. It is still the largest in the case of $`<t^2>`$, but it is no longer dominant. This trend continues for the higher moments $`<t^m>`$, $`m=3`$ and $`4`$. On the other hand, the significance of the double emission term, proportional to $`C_F^2`$, increases with $`m`$. This fits the intuitive expectation: higher moments of 1−thrust become sensitive to spherical configurations, and in particular, to multi-jet configurations. This property, which makes the next-to-leading order perturbative series less reliable as $`m`$ increases, also implies that the significance of the resummation of running-coupling effects by the SDG formula (14), compared to other contributions, decreases with $`m`$. Note, however, that with increasing orders in perturbation theory, the terms associated with the running-coupling are expected in general to increase fast and eventually dominate the coefficients.
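The size of each color structure is a one-line exercise; this sketch (ours) just multiplies the coefficients of eq. (20) by the QCD values $`C_F=4/3`$, $`C_A=3`$ and $`\beta _0=23/12`$, reproducing the pattern described above:

```python
CF, CA, beta0 = 4.0/3.0, 3.0, 23.0/12.0
# NLO pieces of eq. (20): (beta0 term, C_F term, C_A term) for m = 1..4
coeffs = {1: (3.7526, -1.1567, -0.1740),
          2: (0.2308,  0.3073,  0.00762),
          3: (0.02981, 0.06877, 0.005271),
          4: (0.005045, 0.01622, 0.00191)}
for m, (b, f, a) in coeffs.items():
    print(m, round(b*beta0, 4), round(f*CF, 4), round(a*CA, 4))
# m=1: the beta0 term (7.19) dwarfs the others; by m=3,4 the C_F term wins
```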
We now want to calculate the characteristic functions (15) for the first few moments. Using eq. (9) we have
$$\mathcal{F}_{<t^m>}(\epsilon)=\int_\epsilon^{\frac{2}{3}\sqrt{1+3\epsilon}-\frac{1}{3}}\mathcal{F}_G(\epsilon,t)\,t^m\,dt+\int_\epsilon^{\sqrt{\epsilon}}\mathcal{F}_Q^l(\epsilon,t)\,t^m\,dt+\int_{\sqrt{\epsilon}}^{\frac{2}{3}\sqrt{1+3\epsilon}-\frac{1}{3}}\mathcal{F}_Q^h(\epsilon,t)\,t^m\,dt,$$
(21)
or, after rearranging the terms,
$$\mathcal{F}_{<t^m>}(\epsilon)=\int_\epsilon^{\frac{2}{3}\sqrt{1+3\epsilon}-\frac{1}{3}}\left[\mathcal{F}_G(\epsilon,t)+\mathcal{F}_Q^h(\epsilon,t)\right]t^m\,dt-\int_\epsilon^{\sqrt{\epsilon}}\left[\mathcal{F}_Q^h(\epsilon,t)-\mathcal{F}_Q^l(\epsilon,t)\right]t^m\,dt$$
(22)
where the difference
$$\mathcal{F}_Q^h(\epsilon,t)-\mathcal{F}_Q^l(\epsilon,t)=-\frac{1}{t}\left[(1-t+\epsilon)^2+(1+\epsilon)^2\right]\mathrm{ln}\frac{t^2}{\epsilon}+2+\frac{t}{2}+\epsilon\left(2-\frac{2}{t^2}\right)-\frac{\epsilon^2}{t^2}\left(2+\frac{1}{2t}\right)$$
contains the essential information on the region $`t\simeq \sqrt{\epsilon}`$. As we shall see below, non-analytic terms in the expansion of $`\mathcal{F}_{<t^m>}(\epsilon)`$ at small $`\epsilon`$ imply power-suppressed contributions to $`<t^m>`$. It is the $`\sqrt{\epsilon}`$ in the upper boundary of the second term in (22), corresponding to large-angle soft-gluon emission, which is the source of the leading non-analytic terms.
The resulting characteristic functions for $`m=1`$ and $`2`$ are shown in fig. 2, and those for $`m=3`$ and $`4`$ in fig. 3. On this basis, the SDG perturbative sum in (14) can be calculated.
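The integrals in (21) are straightforward to evaluate numerically. The sketch below (ours; it relies on the expressions (10) as reconstructed above) computes $`\mathcal{F}_{<t>}(\epsilon)`$ and makes the large-angle soft-gluon $`\sqrt{\epsilon}`$ behaviour of the appendix visible:

```python
import numpy as np
from scipy.integrate import quad

def q_(t, e): return np.sqrt((1-t)**2 + 4*e)

def FQh(t, e):                                     # eq. (10), high-t branch
    q = q_(t, e)
    return (-((1-t+e)**2 + (1+e)**2)*np.log(t/(q-t))/t
            + 3 - 2*q/t + 0.5/t + 0.5*t - q
            + (4 - 2*q/t + 3/t - q/t**2 + 1/(q-t))*e)

def FQl(t, e):                                     # eq. (10), low-t branch
    q = q_(t, e)
    return (-((1-t+e)**2 + (1+e)**2)*np.log(e/(t*(q-t)))/t
            + 1 - 2*q/t + 0.5/t - q
            + (3/t + 1/(q-t) - q/t**2 + 2/t**2 + 2 - 2*q/t)*e
            + (2 + 0.5/t)*e**2/t**2)

def FG(t, e):                                      # eq. (10), gluon region
    q = q_(t, e)
    return (1-t)/q**2*(((1+e)**2 + (1+e-q)**2)*np.log((q-t)/t)
                       + (2*t-q)*(q + e/t + e/(q-t)))

def F_moment(e, m=1):                              # eq. (21) via eq. (9)
    tmax = 2.0/3.0*np.sqrt(1+3*e) - 1.0/3.0
    lo = quad(lambda t: (FQl(t, e)+FG(t, e))*t**m, e, np.sqrt(e))[0]
    hi = quad(lambda t: (FQh(t, e)+FG(t, e))*t**m, np.sqrt(e), tmax)[0]
    return lo + hi

for e in (1e-5, 1e-6, 1e-7):
    print(e, F_moment(e), 0.7888 - 4*np.sqrt(e))   # the -4*sqrt(eps) term
```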
As an alternative to evaluating the integral, we can expand the coupling $`\overline{a}_{\text{eff}}(\epsilon Q^2)`$ under the integration sign, e.g. in terms of $`\overline{a}\left(Q^2\right)`$, obtaining
$$<t^m>_{\text{SDG}}=C_F\left[d_0\,\overline{a}(Q^2)+d_1\,\beta_0\,\overline{a}^2(Q^2)+\left(d_2-\frac{\pi^2}{3}d_0\right)\beta_0^2\,\overline{a}^3(Q^2)+\cdots\right],$$
(23)
where the coefficients are expressed in terms of the log-moments of the corresponding characteristic function, $`d_0\equiv \mathcal{F}_{<t^m>}(0)`$ and
$$d_i\equiv\int_0^1\dot{\mathcal{F}}_{<t^m>}(\epsilon)\left(-\mathrm{ln}\,\epsilon\right)^i\frac{d\epsilon}{\epsilon}.$$
(24)
The log-moments can be evaluated by a straightforward numerical integration. The resulting values are summarized in table 2.
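As an illustration, the first row of table 2, $`d_0=\mathcal{F}_{<t^m>}(0)`$, follows from a one-dimensional integral of eq. (12); this short sketch (ours) reproduces the leading-order coefficients of eq. (3):

```python
import numpy as np
from scipy.integrate import quad

def F0(t):   # eq. (12), valid for 0 < t <= 1/3
    return ((3*t**2 - 3*t + 2)/(t*(1-t))*np.log((1-2*t)/t)
            - 1.5*(1+t)*(1-3*t)/t)

for m in (1, 2, 3, 4):
    d0 = quad(lambda t: F0(t)*t**m, 0.0, 1.0/3.0)[0]
    print(m, round(d0, 4))    # 0.7888, 0.0713, 0.0112, 0.0022
```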
In the cases of the average thrust and $`<t^2>`$ one identifies already at the first few orders the characteristics of infrared renormalons, namely a fast increase (asymptotically factorial) and a constant sign pattern. This behavior sets in much later for the higher moments.
The approximation of $`<t^m>`$ by $`<t^m>_{\text{SDG}}`$ in (14), or by (23), can be improved in a straightforward manner by matching it with the next-to-leading order result (3). Such a procedure was used for the phenomenological analysis of the average thrust. The additional terms in (3) correspond primarily to double gluon emission (a $`C_F^2`$ term). They also include a non-Abelian $`C_AC_F`$ component, which depends on the identification of the coupling $`\overline{a}`$ beyond the Abelian limit, and finally also a residual $`\beta _0`$ dependent piece, which is related to the non-inclusive nature of the thrust.
Had the moments of the thrust been completely inclusive with respect to the fragmentation products of the gluon, the terms proportional to $`\beta _0`$ would have been fully contained in the SDG result (14) or (23): in the inclusive case the log-moments $`d_i`$, for any $`i`$, are equal to the terms leading in $`\beta _0`$ in the exact calculation. In the present case the next-to-leading order coefficient $`d_1`$, calculated as a log-moment of the inclusive characteristic function, differs from the actual $`\beta _0`$ dependent term in (3). The “massive gluon” inclusive treatment is justified only if the discrepancy between them is small. In the case of $`<t>`$ this discrepancy was found to be tiny. Comparing table 2 with eq. (3) we find that it is only $`4.4\%`$! Evidently it increases for the higher moments of 1−thrust. For $`<t^2>`$ the inclusive approximation still seems reasonable: the discrepancy is $`18.7\%`$. It is much worse for $`<t^3>`$, where the discrepancy is $`46\%`$, and it completely breaks down for $`<t^4>`$.
The physical reason why the inclusive calculation for the high moments fails is that $`<t^m>`$ becomes sensitive to large $`t`$ values. The high $`t`$ region is correlated with high gluon virtuality (see e.g. (13)). High virtuality allows the gluon fragmentation products to spread to large angles relative to the original gluon momentum. Consequently, fragmentation into opposite hemispheres becomes more probable. In such a case, a full non-inclusive calculation would yield a higher thrust ($`T`$) value compared to the inclusive calculation that takes into account the gluon momentum itself (this follows from (7) using the triangle inequality). As a result, the inclusive calculation under-estimates the thrust distribution $`d\sigma /dt`$ at large $`t`$, at the expense of over-estimating it for somewhat lower $`t`$ values. This is why, for all the moments, the full $`\beta _0`$ dependent term in (3) is larger than the inclusive result $`d_1`$. The success of the inclusive approximation for the average thrust and for $`<t^2>`$ is related to the fact that most of the gluon fragmentation is roughly collinear.
For the first two moments of 1−thrust, the coefficients in (23) increase fast due to infrared renormalons already at the first few orders. It is clear that $`\overline{a}(Q^2)`$ is not a good expansion parameter. A simple way to approximate the perturbative sum (ignoring the renormalon ambiguity) is the BLM method. In this case we approximate (14) by
$$<t^m>_{\text{SDG}}\simeq C_F\,\overline{a}_{\text{eff}}(\mu_{\text{BLM}}^2)\int_0^1\frac{d\epsilon}{\epsilon}\,\dot{\mathcal{F}}_{<t^m>}(\epsilon)=C_F\,\overline{a}_{\text{eff}}(\mu_{\text{BLM}}^2)\,\mathcal{F}_{<t^m>}(0)$$
(25)
where, at leading order, the BLM scale is the center of the characteristic function, i.e. the average virtuality of the gluon contributing to $`<t^m>_{\text{SDG}}`$,
$$\mu_{\text{BLM}}^2=Q^2\,\mathrm{exp}\left(\int_0^1\dot{\mathcal{F}}_{<t^m>}(\epsilon)\,\mathrm{ln}\,\epsilon\,\frac{d\epsilon}{\epsilon}\Big/\int_0^1\dot{\mathcal{F}}_{<t^m>}(\epsilon)\,\frac{d\epsilon}{\epsilon}\right)\equiv Q^2\,\mathrm{exp}\left(-d_1/d_0\right).$$
(26)
In principle, higher order log-moments (24) can be used to improve the approximation in (25) by further modifying the scale. For the qualitative discussion here, the leading-order BLM approximation will suffice. Using (17) one can translate the BLM result to the $`\overline{\mathrm{MS}}`$ scheme,
$$<t^m>_{\text{SDG}}\simeq C_F\,a_{\overline{\text{MS}}}(\mu_{\text{BLM},\overline{\text{MS}}}^2)\,\mathcal{F}_{<t^m>}(0)$$
(27)
with $`\mu _{\text{BLM},\overline{\text{MS}}}^2=\mu _{\text{BLM}}^2\,e^{-5/3}`$.
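Numerically the scheme shift is a factor $`e^{5/6}`$ on the linear scale; taking $`\mu _{\text{BLM}}\simeq 0.27\,Q`$ for $`<t^2>`$ (the fraction quoted in the conclusions), a one-line check (ours) lands at the $`\overline{\mathrm{MS}}`$ value quoted there as well:

```python
import numpy as np
# scheme shift to MS-bar on the linear scale: a factor exp(-5/6);
# with mu_BLM ~ 0.27 Q for <t^2> (section 4) this gives ~0.12 Q
print(0.27*np.exp(-5.0/6.0))    # ~0.117
```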
The first two log-moments in table 2 allow one to calculate the leading-order BLM scales for the various moments of 1−thrust according to (26). The results are shown in table 3.
The numbers in the first row of the table have the interpretation of the average gluon virtuality contributing to $`<t^m>`$. We see that
(i) the typical virtuality, which is also the correct argument for the coupling in (25), is much lower than $`Q`$ for the first moments of 1−thrust, especially for the average thrust and for $`<t^2>`$.
(ii) with increasing $`m`$ highly virtual gluons become dominant (at large $`m`$, $`\mu _{\text{BLM}}\to Q`$).
These features can be read directly from the characteristic function curves. In fig. 2 we see that $`\mathcal{F}_{<t>}(\epsilon)`$ is wide and centered at very low virtualities while $`\mathcal{F}_{<t^2>}(\epsilon)`$ is narrower and centered at higher virtualities. Fig. 3 shows that this trend persists for the higher moments. Note that $`\mathcal{F}_{<t^m>}(\epsilon)`$ for $`m\ge 3`$, as opposed to $`m=1`$ and $`2`$, is not positive definite. Thus for $`m\ge 3`$ the interpretation of $`\mu _{\text{BLM}}`$ as the average gluon virtuality contributing to $`<t^m>`$ is no longer accurate. Note that for $`m=4`$ the cancellation between positive and negative contributions to $`d_1`$ is already large, making the approximation of the integral (14) by the coupling at the BLM scale completely unreliable.
Let us now turn to discuss the power-corrections implied by the SDG perturbative sum. The starting point is the observation that eq. (14), understood as an all-order perturbative sum, is ill-defined due to infrared renormalons. The standard way to resum asymptotic series of this type is the Borel method. To obtain a Borel representation one expresses the coupling in (14) as
$$\overline{a}_{\text{eff}}^{\text{PT}}(\mu^2)=\frac{1}{\beta_0}\int_0^{\infty}dz\,\overline{a}(z)\,\frac{\mathrm{sin}\,\pi z}{\pi z}\,e^{-z\,\mathrm{ln}\frac{\mu^2}{\overline{\mathrm{\Lambda}}^2}},$$
(28)
where the $`\mathrm{sin}`$ factor arises from the analytic continuation to the time-like region. For the one-loop coupling (16), $`\overline{a}(z)=1`$. Substituting this in (14) and changing the order of integration one arrives at the following Borel-sum,
$$<t^m>_{\text{SDG}}=\frac{C_F}{\beta_0}\int_0^{\infty}dz\,B_{<t^m>}(z)\,e^{-z\,\mathrm{ln}\frac{Q^2}{\overline{\mathrm{\Lambda}}^2}}$$
(29)
with
$$B_{<t^m>}(z)=\frac{\mathrm{sin}\,\pi z}{\pi z}\int_0^1\epsilon^{-z}\,\dot{\mathcal{F}}_{<t^m>}(\epsilon)\,\frac{d\epsilon}{\epsilon}.$$
(30)
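To see explicitly how a small-$`\epsilon`$ term maps onto a Borel singularity, consider a single term $`c_n\epsilon ^n`$ in $`\mathcal{F}_{<t^m>}(\epsilon)-\mathcal{F}_{<t^m>}(0)`$; then $`\dot{\mathcal{F}}_{<t^m>}(\epsilon)`$ contains $`-n\,c_n\,\epsilon ^n`$ and (30) gives

$$B_{<t^m>}(z)\;\supset\;\frac{\mathrm{sin}\,\pi z}{\pi z}\,\frac{n\,c_n}{z-n},$$

a pole at $`z=n`$. For integer $`n`$ the $`\mathrm{sin}\,\pi z`$ factor cancels the pole, while for half-integer $`n`$ the pole survives, and crossing it in (29) produces an ambiguity proportional to $`(\overline{\mathrm{\Lambda }}^2/Q^2)^n`$, cf. eq. (31) below.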
The integral over $`z`$ in (29) is ill-defined, yielding an ambiguous result. The physical reason for this ambiguity is that our “perturbative” calculation (14) actually depends on the coupling at all scales, including the infrared. This ambiguity can thus be resolved only at the non-perturbative level. Nevertheless, in practice it is possible to cure the ambiguity by defining the perturbative sum through some regularization procedure, like the principal-value prescription or a momentum cutoff. At the next step, one adds to the regularized perturbative sum explicit power-suppressed terms that have the same dependence on $`Q^2`$ as the leading ambiguities and a normalization which is controlled by free parameters to be fixed by a fit. An attractive possibility is to write this parameterization in terms of the small $`k^2`$ moments of the coupling $`\overline{a}(k^2)`$, assumed to be regular in the infrared at the non-perturbative level. Thanks to the assumed universality of $`\overline{a}(k^2)`$, this parameterization immediately implies universality of the power-suppressed terms for different observables, up to calculable factors which depend on the characteristic function.
Notice that since the time-like coupling (18) is finite for any $`\mu ^2`$ and has an infrared fixed-point: $`\overline{a}_{\text{eff}}(0)=1/\beta _0`$, the integral (14) with (18) is well-defined. This is the so-called “Analytic Perturbation Theory” (APT) result. It differs by ambiguous power termsSee and refs. therein. from the corresponding Borel-sum (29). The APT result, just like the principal-value Borel-sum, is a specific regularization of the perturbative sum. Its advantage is that it is straightforward to calculate.
In general, if we ignore sub-leading perturbative terms that are not included in the SDG approximation (14), the physical result should be given by the regularized perturbative sum plus power-corrections. The apparent ambiguity in the choice of the regularization procedure is eliminated by the power terms. In the absence of a relevant non-perturbative calculation, the only way to appreciate the significance of the power terms is by comparing different regularization procedures.
One has to distinguish between two types of power-suppressed ambiguities of the perturbative sum:
(i) differences between various regularizations, e.g. between the APT integral and the principal-value Borel-sum.
(ii) the specific differences which are associated with infrared scales, i.e. space-like momentum scales at which the coupling is not under control in perturbation theory.
This second type of ambiguity can be studied by introducing a momentum cutoff in the space-like region. It was shown to be related to non-analytic terms in the small gluon virtuality (or $`\epsilon`$) expansion of the characteristic function. In the case of observables<sup>§</sup><sup>§</sup>§This does not apply to moments of 1−thrust, which cannot be expressed in terms of local operators. that admit an operator product expansion (OPE), the parametric dependence of these ambiguous terms on $`Q`$ should fit the dimensions of the higher-twist operators. On the other hand, the first type of ambiguity may appear due to both analytic and non-analytic terms in the small $`\epsilon`$ expansion of the characteristic function. This implies that certain regularizations differ from others, as well as from the physical non-perturbative result, by non-infrared non-OPE power terms. The simplest possibility is that non-perturbative effects are restricted to large distances. Then a momentum cutoff in the space-like region, or equivalently the principal-value Borel-sum, defines a class of favorable regularizations which differ from each other, as well as from the physical result, just by infrared power-corrections. We stress that the APT integral does not belong to this class of regularizations, and is suggestive of a different scenario for the regularization of perturbation theory.
In order to study the ambiguities of perturbation theory that emerge from the SDG calculation in the case of moments of 1−thrust (14), we calculated the small $`\epsilon`$ asymptotic expansions of the characteristic functions $`\mathcal{F}_{<t^m>}(\epsilon)`$ given by eq. (22). These expansions for $`m=1`$ through $`4`$ are summarized in the appendix in eq. (A) through (A) and in table 6. According to the explanation above, non-analytic terms in the expansion of the characteristic functions are of special interest, since they imply an ambiguity of perturbation theory which is associated with large distances. There are two physically distinct sources of non-analytic terms contributing to the expansions (A) through (A). The first is the contribution of collinear soft gluon emission. It originates in the lower integration limit ($`t=\epsilon`$) of the two terms in (22). The second is large-angle soft-gluon emission, which originates in the upper integration limit ($`t=\sqrt{\epsilon}`$) of the second term in (22). To distinguish between these two regions of phase-space, we evaluated separately the second integral in (22) at its upper limit. The results, which represent the contribution of large-angle soft-gluon emission to $`\mathcal{F}_{<t^m>}(\epsilon)`$, are summarized in eq. (A). The expansions in the appendix can be used, based on general formulae obtained previously, to analyze the power-corrections implied by (14).
Note first that all the leading non-analytic terms in the small $`ϵ`$ expansion of the characteristic functions are of the half-integer type. These terms, as well as all the other half-integer terms, originate exclusively in the large-angle soft-gluon region through the upper limit $`t=\sqrt{ϵ}`$ in the second integral in (22). This conclusion follows from the comparison of eq. (A) with the full expansion in eq. (A) through (A).
Note also that in eq. (A) through (A) there are no logarithmic terms at order $`\epsilon \mathrm{ln}\,\epsilon`$. In general, the presence of such terms in the characteristic function implies infrared power-corrections that scale as $`1/Q^2`$. Their absence at the level of the moments of 1−thrust, $`<t^m>`$, is not expected a priori, since they are present at the level of the thrust distribution, see eq. (9) with (10). Comparing eq. (A) to eq. (A) it becomes apparent that for the average thrust the $`\epsilon \mathrm{ln}\,\epsilon`$ terms cancel between collinear and large-angle soft terms. For higher moments of 1−thrust such terms do not appear even in the separate contributions of these two phase-space regions (see eq. (A)). The first non-vanishing logarithmic terms for the first two moments, $`m=1`$ and $`2`$, appear at order $`\epsilon ^2\mathrm{ln}\,\epsilon`$, namely $`1/Q^4`$. These terms originate in both collinear and large-angle soft-gluon regions. The leading logarithmic terms for higher moments, $`m=3`$ and $`4`$, appear much later, and exclusivelyIn eq. (A) for $`m=3`$ and $`4`$ there are no logarithmic terms at all. due to the collinear limit. Since in none of the moments do logarithmic terms appear as the leading non-analytic terms, we can safely ignore them in the more detailed analysis that follows.
Let us now summarize some formulae for the regularization dependence of (14), using the simplest one-loop ansatz (16) for the coupling. Given a generic term of the formAs stressed above we ignore logarithmic terms. Regularization of integrals like (14) in the presence of such terms has also been analyzed. $`c_n\epsilon ^n`$ in the small $`\epsilon`$ expansion of $`\mathcal{F}_{<t^m>}(\epsilon)-\mathcal{F}_{<t^m>}(0)`$, the difference between the APT regularization and the Borel-sum is given by
$$\delta<t^m>\simeq c_n\,\frac{C_F}{\beta_0}\left(\frac{\overline{\mathrm{\Lambda}}^2}{Q^2}\right)^n\mathrm{exp}(\pm i\pi n).$$
(31)
Note that for half-integer values of $`n`$, this formula yields an ambiguous imaginary result. This is the ambiguity of the Borel-sum. The principal-value regularization amounts to taking the real part of the Borel sum, and thus it coincides with the APT integral, as far as these terms are concerned. On the other hand for integer values of $`n`$ the difference between the APT integral and the Borel-sum is unambiguous and real.
To quantify the ambiguities associated with infrared scales we define a space-like momentum cutoff $`\mu _I`$, with $`t_I\equiv \mathrm{ln}\left(\mu _I^2/\overline{\mathrm{\Lambda }}^2\right)`$. The contribution to the principal-value Borel-sum from momentum scales below $`\mu _I`$ is given by
$$<t^m>_{\text{IR}}\simeq c_n\,\frac{C_F}{\beta_0}\left(\frac{\mu_I^2}{Q^2}\right)^n\frac{\mathrm{sin}\,\pi n}{\pi}\,\mathrm{exp}(-nt_I)\,\mathrm{Ei}(nt_I).$$
(32)
In accordance with the general statement above, $`<t^m>_{\text{IR}}`$ is non-zero for non-analytic terms with half-integer $`n`$ while it vanishes for analytic terms where $`n`$ is integer.
Using (31) and (32) with the expansions of the characteristic functions in eq. (A) through (A) we find the leading ambiguities associated with the perturbative sum (14) of each moment of 1−thrust. The two rows in table 4 correspond to (31) and (32), respectively. The table shows the parametric dependence of the leading ambiguity on the center-of-mass energy $`Q`$. Table 5 shows the corresponding numerical values of the power terms for $`Q=\mathrm{M}_\mathrm{Z}`$, normalized by the (approximate) perturbative result (25) with the BLM scales of table 3, for $`\mu _I=2\mathrm{GeV}`$, assuming $`\alpha _s(\mathrm{M}_\mathrm{Z})=0.115`$ and $`N_f=5`$.
As mentioned above, in all the cases the leading contributions from the infrared (32) appear due to half-integer terms associated with large-angle soft-gluon emission. These ambiguities imply the existence of non-perturbative power terms which scale as $`1/Q`$ for $`<t>`$, as $`1/Q^3`$ for $`<t^2>`$ and $`<t^3>`$, and as $`1/Q^5`$ for $`<t^4>`$.
Considering the leading ambiguity of the perturbative sum (31), a definite difference exists between the case of the average thrust, $`m=1`$, and higher moments, $`m\ge 2`$. In the case of $`<t>`$ the leading ambiguity of the Borel-sum is due to the same $`\sqrt{\epsilon}`$ term which dominates the infrared contribution, whereas for $`<t^m>`$ with $`m\ge 2`$ the leading ambiguity originates in an analytic term ($`c_1\epsilon`$) in $`\mathcal{F}_{<t^m>}(\epsilon)`$. It is therefore not associated with large distance scales. Moreover, a careful examination of the source of the terms proportional to $`\epsilon`$ (compare e.g. (A) with the full expansion (A) through (A)) shows that they are not associated with any definite part of phase-space.
The numerical values in table 5 clearly indicate that all the power-corrections that appear in the SDG approximation are small for any $`m\ge 2`$. The relative error at $`\mathrm{M}_\mathrm{Z}`$ is less than a per mille. The case of the average thrust is unique in having a significant contribution from the infrared: with the parameters quoted above it is 17% (!) at $`\mathrm{M}_\mathrm{Z}`$. We stress that, in contrast with the average thrust case, the difference between the APT and Borel-sum regularizations for $`m\ge 2`$ becomes parametrically larger, as well as numerically larger at $`\mathrm{M}_\mathrm{Z}`$, than the infrared contribution to the Borel-sum. This might give an opportunity to use experimental data in order to constrain $`1/Q^2`$ power terms such as those that make the APT integral different from the principal-value Borel-sum.
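The $`17\%`$ figure is easy to reproduce; the sketch below (ours) evaluates eq. (32) with $`n=1/2`$ and $`|c_{1/2}|=4`$ (the $`\sqrt{\epsilon}`$ coefficient of eq. (A)), normalized by the BLM-improved result (25) with $`\mu _{\text{BLM}}\simeq 0.1\,Q`$ as quoted for the average thrust:

```python
import numpy as np
from scipy.special import expi                    # exponential integral Ei

CF, beta0, Q, muI = 4.0/3.0, 23.0/12.0, 91.19, 2.0
aMS = 0.115/np.pi                                 # alpha_s(M_Z) = 0.115
abar = aMS/(1.0 - 5.0/3.0*beta0*aMS)              # eq. (17)
lnQ2 = 1.0/(beta0*abar)                           # ln(Q^2/Lambda-bar^2)
tI = lnQ2 + 2.0*np.log(muI/Q)                     # t_I for mu_I = 2 GeV
n, cn = 0.5, 4.0
ir = (cn*CF/beta0*(muI/Q)**(2*n)*np.sin(np.pi*n)/np.pi
      * np.exp(-n*tI)*expi(n*tI))                 # eq. (32)
L = lnQ2 + 2.0*np.log(0.1)                        # mu_BLM ~ 0.1 Q (table 3)
a_eff = (0.5 - np.arctan(L/np.pi)/np.pi)/beta0    # eq. (18)
print(ir/(CF*a_eff*0.7888))                       # ~0.17
```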
## 4 Conclusions
We started in section 2 by calculating the characteristic function of the thrust distribution, $`\mathcal{F}(\epsilon,t)`$. The result, summarized by eq. (9) and (10), forms the basis for resummation of running-coupling effects in the SDG approximation in this case. On its own the SDG approximation is expected to describe the physical distribution only in a limited range of thrust values. However, when combined with other techniques, the range of applicability of the calculated distribution can be extended, leading to a more meaningful comparison with experimental data than is available today.
In section 3 we concentrated on the first few moments of 1−thrust, $`<t^m>`$, $`m=1`$ through $`4`$. The corresponding characteristic functions $`\mathcal{F}_{<t^m>}(\epsilon)`$ were obtained by a straightforward integration of $`\mathcal{F}(\epsilon,t)`$ and then used to study the properties of the SDG approximation to $`<t^m>`$. We saw that the characteristic mass scale of gluons contributing to $`<t^m>`$ increases fast with $`m`$ (see table 3). Whereas the typical gluon virtuality contributing to the average thrust is about $`10\%`$ of the center-of-mass energy $`Q`$, this fraction becomes $`27\%`$ for $`<t^2>`$. Still, if one is using the $`\overline{\mathrm{MS}}`$ scheme, the natural renormalization scale $`\mu _{\text{BLM},\overline{\text{MS}}}`$ is quite far from the naive choice $`\mu =Q`$: for $`<t^2>`$ we have $`\mu _{\text{BLM},\overline{\text{MS}}}\simeq 0.12\,Q`$. This means that the effect of the resummation compared to the naive perturbative treatment is still very significant in the case of $`<t^2>`$.
For high moments, $`m\ge 3`$, a significant source of uncertainty in the available next-to-leading order perturbative approximation (see eq. (3)) is related to multi-jet configurations that contribute at high values of $`t`$, $`t\gtrsim 1/3`$. Improving the approximation requires a full next-to-next-to-leading order calculation.
Our renormalon analysis was performed in the inclusive “massive gluon” approach. The advantage of this approach, apart from its simplicity, is that it naturally generalizes to include higher orders in the $`\beta `$ function of the “skeleton scheme”. In addition, it allows for non-perturbative effects to be parametrized in a transparent way using an infrared regular running-coupling. Using this parameterization the universality assumption can be tested. Of course, the inclusive resummation is justified only if the non-inclusive effect is small. An alternative approach to perform renormalon resummation is based on the “naive non-Abelianization” procedure. One can define non-inclusive large-$`N_f`$ characteristic functions for $`<t^m>`$, and then restore the full $`\beta _0`$ of the non-Abelian theory. Comparing the expansion of the inclusive resummation formula (14) to the next-to-leading order perturbative expansion, we obtained a quantitative estimate of the non-inclusive effect. The discrepancy at the next-to-leading order is very small ($`4.4\%`$) for $`<t>`$ and it increases for higher moments. In the case of $`<t^2>`$ the inclusive approximation still seems reasonable (a discrepancy of $`18.7\%`$) but less so for $`<t^3>`$. Due to the appreciable discrepancy it is worthwhile to compute the non-inclusive resummation as well.
We find that within the framework of the SDG calculation, power-corrections to $`<t^m>`$, with $`m\ge 2`$, are highly suppressed (see tables 4 and 5). The main question that remains open concerning the phenomenology of $`<t^2>`$ is the significance of power-corrections from configurations of three hard partons plus a soft gluon. Such corrections can be as large as $`\alpha _s(Q^2)/Q`$. We further address this subject below.
In the case of the average thrust the leading ambiguity of the perturbative sum scales as $`1/Q`$. This ambiguity originates in a particular part of phase-space where a soft gluon is emitted at a large angle. Being associated with large distance physics, it is quite clear that this ambiguity can be resolved only at the non-perturbative level, by including explicit non-perturbative terms that fall as $`1/Q`$. We showed that for higher moments of 1−thrust the infrared contribution to the SDG perturbative sum is dominated by the same large-angle soft-gluon region of phase-space. The corresponding non-analytic terms in the characteristic functions lead to much suppressed power-corrections, which scale as $`1/Q^3`$ for $`<t^2>`$ and $`<t^3>`$ and as $`1/Q^5`$ for $`<t^4>`$.
For the various moments of 1−thrust, $`<t^m>`$, as for any other time-like observable, there are further ambiguities in the summation of perturbation theory that are not related to large distance scales. Such ambiguities become apparent when comparing different regularizations, such as the principal-value Borel-sum and the APT integral. In the case of the moments of 1−thrust, these ambiguities scale as $`1/Q^2`$ and thus dominate over the infrared ambiguity for $`m\ge 2`$. We saw that this type of ambiguity is not associated with any particular part of phase-space. This suggests that, contrary to the soft emission ambiguities, it does not signal any genuine non-perturbative effects. It may then be possible to eliminate the ambiguity just by choosing the correct class of regularizations of the perturbative sum. The simplest possibility is that this class is defined by a space-like momentum cutoff, or equivalently by the principal-value Borel-sum. One should be aware that other scenarios are possible as well.
Finally, we would like to view our conclusions concerning the power-corrections for the moments of 1−thrust in the SDG model in the context of previous analyses of the thrust distribution. For this purpose we briefly recall the earlier results. Assuming a two-jet configuration, it was shown that the main effect of non-perturbative soft gluon emission is a shift of the Sudakov-resummed perturbative spectrum to higher values of $`t`$:
$$\frac{d\sigma}{dt}(t)=\frac{d\sigma}{dt}\Big|_{\text{PT}}(t-\Delta t),$$
(33)
where $`\mathrm{\Delta }t=\lambda /Q`$. A priori, this formula applies only in the range $`\mathrm{\Delta }t\ll t\ll 1/3`$. It was demonstrated that the measured thrust distribution can be fitted over a large range of $`t`$ and $`Q`$ values by introducing such a shift.
It is clear that (33) cannot describe the physical distribution at extremely small values of $`t`$, $`t\lesssim \mathrm{\Delta }t`$, where multiple non-perturbative soft gluon emission becomes essential. This difficulty can be resolved by introducing a non-perturbative (observable dependent) shape-function (SF) to describe the energy flow in the final state. In this case the physical distribution is obtained by a convolution of the Sudakov-resummed perturbative spectrum with the shape-function. The resulting distribution at extremely small $`t`$ (to the left of the distribution peak) depends on the form of the shape-function, but at higher $`t`$ it approximately coincides with the shifted distribution (33).
Both the simple shift model and the shape-function approach, like the Sudakov-resummed perturbative spectrum on which they are based, strongly rely on the two-jet kinematics. These approaches, strictly speaking, do not apply to the large $`t`$ region where one gluon becomes hard. The success of the fits in a large range of $`t`$ and $`Q`$ values is encouraging, and it might suggest that the first few moments of 1−thrust could also be studied in this framework. Yet, one should be aware of the fact that the distribution is rather flat at large $`t`$ (where one gluon becomes hard) and thus the total effect of the shift, or the convolution with the shape-function, is minor there. On the other hand the first moments crucially depend on this very same region of $`t`$. Therefore it is dangerous to analyze the non-perturbative corrections to the moments relying on the apparent success of the fits to the distribution.
Nevertheless, it is interesting to see the consequences of the assumption that the simple shift model or the shape-function approach apply beyond the small $`t`$ region. In this case one finds
$`<t>_{\text{SF}}`$ $`=`$ $`<t>_{\text{PT}}+\lambda _1/Q`$ (34)
$`<t^2>_{\text{SF}}`$ $`=`$ $`<t^2>_{\text{PT}}+2\lambda _1<t>_{\text{PT}}/Q+\lambda _2/Q^2,`$
where the scales $`\lambda _i`$ are the moments of the shape-function and in the case of a shift (33), they are simply $`\lambda _i=\lambda ^i`$. This can be compared to the infrared power-corrections found in the SDG calculation (table 4), namely
$`<t>_{\text{SDG}}`$ $`=`$ $`<t>_{\text{PT}}+\lambda /Q+𝒪(1/Q^3)`$ (35)
$`<t^2>_{\text{SDG}}`$ $`=`$ $`<t^2>_{\text{PT}}+𝒪(1/Q^3),`$
barring the fact that the perturbative part $`<t^m>_{\text{PT}}`$ can be quite different in the two approaches: in (34) Sudakov resummation is implicitly assumed while in (35) SDG renormalon resummation is assumed.
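For the pure shift (33), relation (34) is nothing but the binomial expansion of the shifted moments, a one-line check:

$$<t^m>_{\text{shift}}=\int t^m\,\frac{d\sigma}{dt}\Big|_{\text{PT}}(t-\Delta t)\,dt=<(t+\Delta t)^m>_{\text{PT}}=<t^m>_{\text{PT}}+m\,\Delta t<t^{m-1}>_{\text{PT}}+\cdots$$

so that with $`\mathrm{\Delta }t=\lambda /Q`$ the first two moments reproduce (34) with $`\lambda _i=\lambda ^i`$; for the shape-function the $`\lambda _i`$ are instead its moments.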
For the average thrust the two approaches predict the same type of leading power-correction. Let us examine the case of $`<t^2>`$:
(i) the leading non-perturbative term in (34), $`2\lambda _1<t>_{\text{PT}}/Q`$, is proportional to $`\alpha _s(Q^2)`$. Therefore, this term is attributed to soft emission around a configuration of three hard partons. Clearly, this is beyond the scope of the SDG calculation performed here. On the other hand, this term crucially depends on the extension of the distribution models beyond their range of validity (two-jet kinematics). Therefore further theoretical work is required. Note that this prediction for the leading power-correction of $`<t^2>`$ is rather easy to verify, or exclude, using experimental data: due to the large ratio (see eq. (3)) between the normalizations of $`<t>_{\text{PT}}`$ and $`<t^2>_{\text{PT}}`$ in (34), the numerical significance of this power-correction is quite large.
(ii) the second term in (34), $`\lambda _2/Q^2`$, is not suppressed by any power of $`\alpha _s`$, and is therefore attributed to soft emission around the two-jet configuration. On the other hand, the SDG calculation (35) does not yield any infrared $`1/Q^2`$ correction to $`<t^2>_{\text{PT}}`$. There are two ways in which these facts can be reconciled. The first is that once a more complete<sup>\**</sup><sup>\**</sup>\**The shape-function approach resums the power-corrections to the distribution which are the most singular in the small $`t`$ limit, i.e. corrections that scale as $`1/(Q^nt^n)`$ for any $`n`$. Less singular corrections to the distribution, e.g. $`1/(Q^nt^{n-1})`$, can be quite important for the moments. description of the thrust distribution is achieved, there will not be any infrared $`1/Q^2`$ correction to $`<t^2>`$. The second is that an infrared $`1/Q^2`$ correction will emerge entirely from double gluon emission. It is possible, though unusual, for double soft gluon emission to become parametrically less suppressed than single soft gluon emission around the same hard configuration. It will therefore be interesting to investigate power-corrections to $`<t^2>`$ from double gluon emission within the renormalon approach.
We remark that techniques for the systematic analysis of power-corrections in the full four-parton phase space are not yet well established. The discussion above clearly shows that such techniques are necessary for the analysis of event-shape variables.
###### Acknowledgments.
I am grateful to Yuri Dokshitzer, Georges Grunberg and Gregory Korchemsky for very interesting and useful discussions and to Otmar Biebel for his great help.
## Appendix A Asymptotic expansions for the characteristic functions
The asymptotic expansions of $`\mathcal{F}_{<t^m>}(ϵ)`$, calculated according to eq. (21) or (22), are given below. Table 6 summarizes the numerical values of the coefficients.
$`\mathcal{F}_{<t>}(ϵ)`$ $`=`$ $`2\,\mathrm{dilog}\,3-\frac{1}{6}\pi ^2-\frac{1}{36}-\frac{3}{8}\mathrm{ln}\,3`$
$``$ $`-4\sqrt{ϵ}+\left[2+6\mathrm{ln}\,3\right]ϵ-\frac{80}{9}ϵ^{3/2}`$
$`+`$ $`\left[\frac{28}{3}\mathrm{ln}\,2+2\,\mathrm{dilog}\,3-\frac{3}{2}\mathrm{ln}\left(\frac{1}{ϵ}\right)+\frac{17}{12}+\frac{1}{6}\pi ^2\right]ϵ^2-4ϵ^{5/2}`$
$`+`$ $`\left[\frac{8}{3}\mathrm{ln}\left(\frac{1}{ϵ}\right)+\frac{14}{15}\mathrm{ln}\,2-\frac{4}{5}\right]ϵ^3+\left[\frac{64}{105}\mathrm{ln}\,2-\frac{11}{2}\mathrm{ln}\left(\frac{1}{ϵ}\right)+\frac{2719}{1260}\right]ϵ^4+\mathrm{\cdots }`$
$`\mathcal{F}_{<t^2>}(ϵ)`$ $`=`$ $`2\,\mathrm{dilog}\,3-\frac{1}{6}\pi ^2-\frac{9}{8}\mathrm{ln}\,3+\frac{17}{216}`$
$`+`$ $`\left[12\,\mathrm{dilog}\,3+\pi ^2-\frac{91}{18}+\frac{127}{12}\mathrm{ln}\,3\right]ϵ+\frac{16}{9}ϵ^{3/2}`$
$`+`$ $`\left[\frac{11}{6}\pi ^2+\frac{1}{4}\mathrm{ln}\left(\frac{1}{ϵ}\right)+22\,\mathrm{dilog}\,3-13\mathrm{ln}\,3+\frac{104}{3}\mathrm{ln}\,2+\frac{4}{3}\right]ϵ^2+\frac{16}{9}ϵ^{5/2}`$
$`+`$ $`\left[8\,\mathrm{dilog}\,3+\frac{188}{15}\mathrm{ln}\,2-\frac{1}{3}\mathrm{ln}\left(\frac{1}{ϵ}\right)-\frac{103}{30}+\frac{2}{3}\pi ^2\right]ϵ^3`$
$`+`$ $`\left[\frac{5}{12}\mathrm{ln}\left(\frac{1}{ϵ}\right)-\frac{1003}{315}+\frac{152}{105}\mathrm{ln}\,2\right]ϵ^4+\mathrm{\cdots }`$
$`\mathcal{F}_{<t^3>}(ϵ)`$ $`=`$ $`2\,\mathrm{dilog}\,3-\frac{1}{6}\pi ^2+\frac{28}{135}-\frac{83}{64}\mathrm{ln}\,3`$
$`+`$ $`\left[\frac{8}{3}\pi ^2+\frac{373}{16}\mathrm{ln}\,3-\frac{53}{9}+32\,\mathrm{dilog}\,3\right]ϵ-\frac{4}{9}ϵ^{3/2}`$
$`+`$ $`\left[\frac{227}{12}+\frac{23}{6}\pi ^2+76\mathrm{ln}\,2+46\,\mathrm{dilog}\,3-\frac{155}{4}\mathrm{ln}\,3\right]ϵ^2-\frac{128}{225}ϵ^{5/2}`$
$`+`$ $`\left[\frac{1127}{180}+\frac{302}{15}\mathrm{ln}\,2+\pi ^2+12\,\mathrm{dilog}\,3\right]ϵ^3-\frac{4}{9}ϵ^{7/2}`$
$`+`$ $`\left[\frac{1}{4}\mathrm{ln}\left(\frac{1}{ϵ}\right)-\frac{34999}{5040}+\frac{1201}{105}\mathrm{ln}\,2\right]ϵ^4+\mathrm{\cdots }`$
$`\mathcal{F}_{<t^4>}(ϵ)`$ $`=`$ $`2\,\mathrm{dilog}\,3-\frac{1}{6}\pi ^2+\frac{1259}{4860}-\frac{649}{480}\mathrm{ln}\,3`$
$`+`$ $`\left[\frac{85}{9}+\frac{3373}{80}\mathrm{ln}\,3+5\pi ^2+60\,\mathrm{dilog}\,3\right]ϵ`$
$`+`$ $`\left[\frac{4879}{108}+\frac{400}{3}\mathrm{ln}\,2+\frac{5}{2}\pi ^2-\frac{326}{3}\mathrm{ln}\,3+30\,\mathrm{dilog}\,3\right]ϵ^2+\frac{32}{75}ϵ^{5/2}`$
$`+`$ $`\left[\frac{841}{30}-\frac{824}{15}\mathrm{ln}\,2-\frac{16}{3}\pi ^2+24\mathrm{ln}\,3-64\,\mathrm{dilog}\,3\right]ϵ^3+\frac{32}{75}ϵ^{7/2}`$
$`+`$ $`\left[\frac{5849}{315}+\frac{372}{35}\mathrm{ln}\,2-8\mathrm{ln}\,3-\frac{8}{3}\pi ^2-32\,\mathrm{dilog}\,3\right]ϵ^4+\mathrm{\cdots }`$
Substituting only the upper limit $`\sqrt{ϵ}`$ in the second integral in eq. (22), we can isolate the large-angle soft-gluon contribution to $`\mathcal{F}_{<t^m>}(ϵ)`$. The results are the following:
$`\mathcal{F}_{<t>}(ϵ)|_{\text{large-angle}}`$ $`=`$ $`-4\sqrt{ϵ}-ϵ\,\mathrm{ln}\left(\frac{1}{ϵ}\right)-\frac{80}{9}ϵ^{3/2}-ϵ^2\,\mathrm{ln}\left(\frac{1}{ϵ}\right)-4ϵ^{5/2}`$
$`\mathcal{F}_{<t^2>}(ϵ)|_{\text{large-angle}}`$ $`=`$ $`-ϵ+\frac{16}{9}ϵ^{3/2}-\left[\frac{1}{4}\mathrm{ln}\left(\frac{1}{ϵ}\right)+\frac{9}{4}\right]ϵ^2+\frac{16}{9}ϵ^{5/2}-ϵ^3`$
$`\mathcal{F}_{<t^3>}(ϵ)|_{\text{large-angle}}`$ $`=`$ $`-\frac{4}{9}ϵ^{3/2}+\frac{3}{4}ϵ^2-\frac{128}{225}ϵ^{5/2}+\frac{3}{4}ϵ^3-\frac{4}{9}ϵ^{7/2}`$ (40)
$`\mathcal{F}_{<t^4>}(ϵ)|_{\text{large-angle}}`$ $`=`$ $`-\frac{1}{4}ϵ^2+\frac{32}{75}ϵ^{5/2}-\frac{7}{18}ϵ^3+\frac{32}{75}ϵ^{7/2}-\frac{1}{4}ϵ^4`$
Note that (40) presents exact results rather than asymptotic expansions at small $`ϵ`$.
# AN IMPROVED RED SPECTRUM OF THE METHANE OR T-DWARF SDSS 1624+0029: ROLE OF THE ALKALI METALS
## 1. INTRODUCTION
Methane or T dwarfs are substellar objects cooler than L and M dwarfs, and have near-infrared (1-2$`\mu `$m) spectra dominated by molecular absorption due to water, methane, and pressure-induced absorption by molecular hydrogen. Methane is expected to remain an important atmospheric constituent down to the temperature of Jupiter ($`\sim `$125 K), where it also is prominent in the infrared spectrum. The prototype of the class is the companion to the nearby M dwarf star Gl 229 (Nakajima et al. 1995, Oppenheimer et al. 1995). Model atmosphere analyses fitting synthetic spectra to detailed spectrophotometric and photometric observations indicate a temperature for this object near or slightly below 1,000 K (Marley et al. 1996, Allard et al. 1996). This past year has seen the discovery of several similar field T dwarfs, found first in the Sloan Digital Sky Survey (Strauss et al. 1999, S99; Tsvetanov et al. 2000), shortly thereafter in the Two Micron All Sky Survey (Burgasser et al. 1999) data sets, and also in an ESO survey (Cuby et al. 1999).
All of the new objects also have 1–2$`\mu `$m spectra characterized by the very strong molecular absorbers listed above. Unfortunately, at least at low spectral resolution, the differences among their spectra appear somewhat subtle. The current field surveys by SDSS and 2MASS are magnitude limited, and therefore likely to identify the warmest (highest luminosity) T dwarfs. Still, the known parallax for Gl 570D shows that this object must be substantially cooler than the prototype, yet its infrared spectrum is similar to the others (Burgasser et al. 2000). The relative strengths of the molecular bands are not strongly dependent on the effective temperature. The molecular absorbers also are effective in hiding the weaker atomic line transitions which might be useful discriminants of temperature. At least initially, it is proving difficult to establish spectral types and a temperature sequence for the T dwarfs at the wavelengths where they are easiest to observe.
It is possible to observe the brightest of the new T dwarfs at wavelengths significantly shortward of 1$`\mu `$m, where the atmospheres may prove to be more transparent. Model calculations indicate that there may be few molecular and atomic opacity sources, and those that are present may be more sensitive to temperature. In particular, the behavior of the alkali resonance doublet features which reside generally in this red part of the spectrum (0.5-0.9$`\mu `$m) could be particularly useful in diagnosing the temperature and testing for the formation of dust grains (Burrows and Sharp 1999; Lodders 1999; Tsuji, Ohnaka & Aoki 1999; Burrows, Marley & Sharp 2000; hereafter, BMS). These papers generally predict that different alkalis should precipitate out as sulfides, salts or other condensates over a range of T<sub>eff</sub> below about 1,500 K, in the order (with decreasing T<sub>eff</sub>) according to BMS of Li first, then Cs, K and Na. Indeed, the latter two alkali features are the most prominent features in the red spectra of the somewhat-warmer late L dwarfs (Kirkpatrick et al. 1999, Martín et al. 1999, Reid et al. 2000).
The red spectra of field T dwarfs can be relevant to another controversy regarding the spectrum of Gl 229B: the companion object shows too little red flux relative to the model predictions of Marley et al. (1996) and Allard et al. (1996). These authors concluded that an additional opacity source exists shortward of 1$`\mu `$m. Golimowski et al. (1998) suggested as the solution that TiO returned to gaseous form in Gl 229B (it precipitates out in M-L dwarfs above 2,000 K). However, this hypothesis predicts TiO band absorption at the wavelengths seen in M dwarfs, but these bands are seen neither in late L dwarfs nor in Gl 229B. Griffith et al. (1998) turned to solar system physics for an intriguing answer: they hypothesized a population of small photochemical haze particles analogous to the red Titan Tholins (Khare and Sagan 1984), heated by ultraviolet radiation from the primary M dwarf to temperatures at least 50% higher than the T<sub>eff</sub>. Dust is invoked in two other proposed solutions to the Gl 229B spectral slope: Tsuji, Ohnaka & Aoki (1999) describe a hybrid atmospheric model with a warm dust layer that effectively blocks short-wavelength flux. Pavlenko, Zapatero Osorio & Rebolo (2000) attempt to fit the red spectrum with a scattering dust opacity which increases sharply to shorter wavelengths. Finally, BMS suggest that the alkali opacity alone – in particular the broad wings of K and Na – is the agent which depresses the emergent flux out to 1$`\mu `$m. Provided K and Na still exist in atomic form at the relevant temperature, they argued that there is no need to invoke dust or some additional absorber at short wavelengths.
SDSS 1624+0029 (hereafter, SDSS 1624), the first field T dwarf, was found in preliminary Sloan Digital Sky Survey data (S99). The discovery paper includes a red spectrum obtained with the Apache Point 3.5-m telescope, with detected flux down to 8000Å. The accessible Cs I features at 8521Å and 8943Å appeared weak or absent, while both are distinct features in the Gl 229B red spectrum (Oppenheimer et al. 1998). The apparent weakness of the Cs I features and the shallower red spectral slope of SDSS 1624 led BMS to the preferred conclusion that this object “is tied to a lower core entropy,” which would normally imply a lower T<sub>eff</sub> than that of Gl 229B. In contrast, the Sloan source showed somewhat shallower methane absorption in the infrared spectrum, suggesting to Nakajima et al. (2000) that it is warmer than the prototype. The red spectrum also shows that a field object can exhibit an excess below 1$`\mu `$m similar to that of the companion object Gl 229B, thus demonstrating that the excess does not depend upon the presence of a nearby source of potential ultraviolet photons (i.e., to produce the “Titan Tholins”).
We discuss here a red spectrum of SDSS 1624 obtained with the Keck II LRIS spectrograph that extends the detection of flux down to 6200Å, and has improved signal-to-noise ratio at longer red wavelengths. We believe this observation allows us to test the roles of the alkali metals and the need for dust and/or an additional short-wavelength absorber.
## 2. THE SPECTRUM AND ITS FEATURES
Two consecutive spectra of SDSS 1624 were obtained on 1999 July 16 with the Keck II telescope and LRIS using the configuration described for most observations in Kirkpatrick et al. (1999). Each had an exposure time of 1800 seconds. Reduction was done with standard IRAF tools. The averaged spectrum spanning the entire wavelength interval of 6,300–10,100Å at 9Å resolution is shown in Fig. 1. Significant flux is detected over this entire range, as shown in an inset, which details the 6,300–8,200Å spectrum on an appropriate vertical scale. The variance spectrum from the standard IRAF task is also shown in the short wavelength inset where the signal-to-noise ratio (SNR) is smallest. Longward of 8200Å the variance spectrum (not shown) rises slowly to about 4$`\times `$10<sup>-18</sup> near one micron, except for the noisier intervals affected by the strongest atmospheric OH bands. Some significant conclusions may be drawn just from inspection of these figures.
(1) The spectrum at the shorter wavelengths, not accessible in previous observations, shows a strong, broad dip to zero flux centered precisely on the 7700Å blended K I doublet (see upper inset). Note that the variance plot remains flat over this interval, a strong indication that the feature is not due to any change in the noise. A “pseudo”-EW (pEW) of 390Å for the line cores was measured over the interval 7300-8100Å. This was estimated from the IRAF “splot” routine, which simply fits a linear continuum across the stated interval (a short sketch of this kind of estimate is given after these points). It is recognized that the flux levels at the interval boundaries are not the true continuum. Moreover, the procedure ignores the considerable absorption in the extended wings of this doublet. Nonetheless, the pEW estimate may serve as a useful benchmark of the K I strength for quick comparison with any similar spectra obtained for other objects. The strength of the feature confirms the suggestion of Tsuji et al. (1999) and BMS that the red wing of this feature is a substantial absorber shortward of one micron in this T dwarf; presumably the same feature contributes to the flux deficiency in Gl 229B. Indeed, it was already recognized that this feature increases in strength with later types among the L dwarfs (Kirkpatrick et al. 1999; Martín et al. 1999).
(2) The detected flux rises shortward of 7700Å, revealing the blue wing of the K I feature. A broad maximum of the flux level may be reached near 7000Å, but significant flux is detected down to 6200Å, after a moderate decline in the flux level shortward of 7000Å. This last decline is likely due to the red wing of the Na I resonance doublet centered near 5892Å. Na is normally the most abundant of the five alkalis in a Population I mix. Both Na and K are expected to survive to lower temperatures than Li, Cs and Rb, so their presence here is not surprising. Indeed, subordinate lines of K I were reported in the S99 (see also Nakajima et al. 2000) infrared spectrum. The SNR dips below unity over the last few hundred Å, but the conclusions about the shape of the continuum are robust. No significant absorption features are claimed.
(3) Significant Cs I absorption lines, both members of the well-separated doublet, are easily detected. The pEWs of 6.5Å and 6.1Å, for the respective 8521Å and 8943Å transitions, were measured over full width intervals of 25Å each. The Cs lines appear much stronger than was apparent in the ARC 3.5-m spectrum of S99. We note that the apparent strength of the 8943Å line could be enhanced by a contribution from an overlapping weak CH<sub>4</sub> band. For the same intervals, pEWs of 6.5Å and 5.4Å were measured for the web-posted Oppenheimer et al. (1998) spectrum of Gl 229B. While detailed modeling of the alkali line profiles and red spectrum remains to be done, the conclusion of BMS that SDSS 1624 may have a lower temperature than Gl 229B needs to be reconsidered. It is also possible, however, that the Cs line strength varies with time and/or location on the surface.
(4) No significant H$`\alpha `$ emission or Li I absorption is detected, to limits we estimate very approximately as pEWs of 15Å, since the SNR is near unity.
(5) The $`\sim `$9300Å H<sub>2</sub>O band reported by S99 is strong in this spectrum. A possible absorption, which appears to be strongest near 9955Å, could be due at least in part to FeH if the wavelength calibration is poor near the edge of the spectrum. A possible absorption feature near 8343Å may be H<sub>2</sub>O; one near 8624Å (see lower inset of Fig. 1) coincides with both CH<sub>4</sub> and CrH bands, more likely the former.
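As mentioned under point (1), the pEW estimates quoted above amount to integrating the absorbed fraction below a straight pseudo-continuum drawn across the measurement interval. A minimal sketch of such an estimate follows; the wavelength and flux arrays, as well as the 25Å anchor windows used here, are placeholders rather than the actual Keck data.

```python
import numpy as np

def pseudo_ew(wave, flux, lo, hi, edge=25.0):
    """Pseudo-equivalent width (Angstroms) over [lo, hi], using a linear
    pseudo-continuum anchored on the mean flux in `edge`-wide windows
    just inside the interval boundaries."""
    f_lo = flux[(wave >= lo) & (wave <= lo + edge)].mean()
    f_hi = flux[(wave >= hi - edge) & (wave <= hi)].mean()
    in_band = (wave >= lo) & (wave <= hi)
    w = wave[in_band]
    cont = f_lo + (f_hi - f_lo) * (w - lo) / (hi - lo)  # linear pseudo-continuum
    # pEW = integral of (1 - F/F_cont) d(lambda)
    return np.trapz(1.0 - flux[in_band] / cont, w)

# hypothetical usage with a spectrum loaded elsewhere:
# print(pseudo_ew(wave, flux, 7300.0, 8100.0))   # K I region, cf. pEW ~ 390 A
# print(pseudo_ew(wave, flux, 8508.5, 8533.5))   # Cs I 8521 A, 25 A interval
```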
## 3. AN EXPLORATORY MODEL FIT AND POSSIBLE IMPLICATIONS
Several authors have explored quantitative fits to the alkali resonance lines as a tool for estimating T<sub>eff</sub> values for L and T dwarfs. Unfortunately, the detailed treatment of the line broadening poses a complicated problem – see BMS Section 3. Only a few key points are mentioned here. Available empirical data (cf. Nefedov, Sinel’shchikov, and Usachev 1999) provide valuable clues to what might be the best functional form for the profile. However, broadening parameters remain uncertain. In any case, simple treatments (i.e., Lorentzian profiles) used previously in the literature appear inappropriate.
The detailed spectral fit depends upon several parameters, including the T<sub>eff</sub>, gravity, and abundances, but also the line profile shapes and the degree of rainout (Burrows and Sharp 1999; BMS). We emphasize that the physical treatment of these last two remains uncertain. A detailed exploration of these many parameters would be required, at minimum, to achieve quantitative estimates of the T<sub>eff</sub>, surface gravity and abundances. It is questionable whether a satisfactory, unique solution will be found until a trigonometric parallax can help fix the luminosity and radius. What is shown below is an exploratory fit which we argue nonetheless yields useful qualitative conclusions. The techniques employed are described more generally in BMS.
Figure 2 shows a comparison between a representative model spectrum (dashed line) and a smoothed version of the SDSS 1624 spectrum (solid). Superposed is the Leggett et al. Gliese 229B spectrum of Oppenheimer et al. (1998). The observed flux has been transformed to an absolute flux for an assumed distance to the Sloan dwarf of 10 parsecs (S99), and is in milliJanskys, while the wavelength is in microns.
It may be seen clearly that the K I and Na I doublets and their broad wings dominate the spectrum. For this illustrative model fit, we assumed T<sub>eff</sub> = 1100 K, gravity = $`10^5`$ cm s<sup>-2</sup>, abundances of one half solar, alkali line wing cutoff parameters defined in BMS of 0.2 (Na I) and 0.5 (K I), and an intermediate degree of rainout for the alkalis (BMS). The SDSS 1624 data were smoothed with a 10-Å boxcar function, which, among other things, muted the depth of the Cs lines relative to the model; the lines are nonetheless of similar strength. No strong Li I absorption is predicted.
It is not clear to us that a dust component or an additional source of red opacity is required to obtain a reasonable fit. Tsuji’s need for additional red opacity may be explainable by (1) underestimation of the alkali wing opacity due to the assumption of a Lorentzian, and (2) plotting of the I-band broad band flux at the wrong mean wavelength (see BMS). Although the presence of dust in the atmosphere certainly cannot be precluded, the alkalis appear to be the dominant cause of the unique shape of the red energy distribution. The detection of flux to the blue boundary of the spectrum also has consequences. In particular, a Rayleigh scattering dust opacity, as suggested by Pavlenko et al. (2000), would have more than double the opacity at 7000Å compared with 8400Å. Finally, the observed narrowness of the 7700Å feature (relative to our models at the Gl 229B temperature near 950 K) and the presence of strong cesium features together argue that the effective temperature of SDSS 1624 is above that of Gliese 229B (BMS), in concurrence with Nakajima et al. (2000).
We emphasize that no concerted attempt was made to find a rigorous fit, that other combinations of parameters are still viable, and that, given the SNR of the data at the shorter wavelengths, there are indeed parameter degeneracies.
This research is supported by a NASA JPL grant (961040NSF) permitting us to undertake a core science project on very low mass objects discovered in the $`2MASS`$ survey. AB acknowledges support from NASA grants NAG5-7499 and NAG5-7073. The model curve was computed based upon a temperature/pressure profile generated by M. Marley (private communication) and the models in Burrows et al. (1997). We wish to acknowledge helpful suggestions from an anonymous referee.
# Effect of Quantised Lattice Fluctuations on the Electronic States of Polyenes
## Abstract
We solve a model of interacting electrons coupled to longitudinal phonons using the density matrix renormalisation group method. The model is parametrised for polyenes. We calculate the ground state, and first excited odd-parity singlet and triplet states; and we investigate their energies, and bond length changes and fluctuations for up to $`30`$ sites. The transition energy and the soliton width of the triplet state show deviations from the adiabatic approximation for chain lengths larger than the classical soliton size, because of de-pinning by the quantised lattice fluctuations.
The interplay of electron-electron interactions and electron-lattice coupling in polyene oligomers and trans-polyacetylene, (CH)<sub>x</sub>, results in a rich variety of low energy excitations. These excitations include triplet states of soliton-antisoliton pairs, singlet states comprising bound pairs of triplets, and exciton-polarons. Within the adiabatic (or classical) approximation the nature and energy of these excitations are now fairly well understood. A parametrised Pariser-Parr-Pople-Peierls model, solved within the adiabatic approximation, predicts accurate excitation energies for oligomers of up to $`20`$ or so sites . However, for longer chains there are deviations from the polyacetylene thin film results. These discrepancies are partly explained by the self-trapping of the excited states by the lattice. The calculated energies deviate from a linear extrapolation in $`1/N`$ as the chain length becomes larger than the solitonic structures. We can use this deviation in the energy to estimate an upper bound for the self-trapping energy. This deviation is $`0.4`$ eV for the optically allowed ($`1^1B_u^{-}`$) state, $`0.7`$ eV for the even-parity singlet ($`2^1A_g^+`$) state and $`0.3`$ eV for the lowest lying triplet ($`1^3B_u^+`$). Furthermore, a linear extrapolation in $`1/N`$ of the short oligomer experimental values predicts infinite chain energies of the $`1^1B_u^{-}`$ and $`2^1A_g^+`$ states close to those observed in polyacetylene thin films, suggesting that self-trapping may be a partial artefact of the adiabatic approximation. Thus, the question remains as to the role of quantised lattice fluctuations, both for the dimerisation of the ground state and for the de-pinning of the excited states. These fluctuations are the subject of this paper.
There have been a number of studies of quantised lattice dynamics in the ground state of the uncorrelated Su-Schrieffer-Heeger model , indicating that fluctuations in the bond length are comparable to the bond length changes, but that the Peierls dimerisation is stable against such fluctuations. There has also been a variational Monte Carlo study of an interacting electron-phonon model . However, there have been no studies of excited states, as the incorporation of quantised lattice dynamics into the correlated Pariser-Parr-Pople-Peierls model presents a formidable challenge.
The advent of the density matrix renormalisation group (DMRG) method has enabled definitive model studies of correlated electron systems, including long range interactions and dynamical phonons , . In this work we report the results of extensive calculations on a model system which for the first time afford us insight into the effect of quantised lattice dynamics on the properties of excited states of long polyenes. Electrons, interacting via long-range Coulomb forces, are coupled to longitudinal phonons. The key results of this calculation are that the de-pinning of excited states due to quantum lattice fluctuations can become substantial as the conjugation increases. In particular, there is a marked reduction in the energy and increase in the soliton width of the triplet excited state.
The Hamiltonian, with free boundary conditions, is defined as
$`H`$ $`=`$ $`\hbar \omega \sum _{i=2}^{N-1}\left(b_i^{\dagger }b_i+\frac{1}{2}\right)+\hbar \omega _0\left(b_1^{\dagger }b_1+b_N^{\dagger }b_N+1\right)-\hbar \omega \sum _{i=1}^{N-1}B_{i+1}B_i+2\mathrm{\Gamma }tg\sum _{i=1}^{N-1}\left(B_{i+1}-B_i\right),`$ (3)
$`+U\sum _{i=1}^{N}\left(n_{i\uparrow }-\frac{1}{2}\right)\left(n_{i\downarrow }-\frac{1}{2}\right)+\frac{1}{2}\sum _{i\ne j}^{N}V_{ij}(n_i-1)(n_j-1)`$
$`-t\sum _{i=1,\sigma }^{N-1}\left(1+g\left(B_{i+1}-B_i\right)\right)\left(c_{i+1\sigma }^{\dagger }c_{i\sigma }+c_{i\sigma }^{\dagger }c_{i+1\sigma }\right).`$
$`b_i^{\dagger }`$ ($`b_i`$) creates (destroys) a phonon and $`c_{i\sigma }^{\dagger }`$ ($`c_{i\sigma }`$) creates (destroys) an electron on site $`i`$. $`B_i=(b_i^{\dagger }+b_i)/2`$, $`g=(\lambda \pi \hbar \omega /2t)^{1/2}`$ and $`\omega =\sqrt{2}\omega _0=\sqrt{2K/m}`$. We use the Ohno function for the Coulomb interaction: $`V_{ij}=U/\sqrt{1+(Ur_{ij}/14.397)^2}`$, where the bond lengths are in Å. The single and double bond lengths used in the evaluation of $`V_{ij}`$ are $`1.46\AA `$ and $`1.35\AA `$, respectively, and the bond angle is $`120^{\circ }`$. $`t=2.539`$ eV, $`U=10.06`$ eV, $`\mathrm{\Gamma }=0.602`$, $`\lambda =0.115`$ and $`\hbar \omega _0=0.2`$ eV.
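To make the geometry dependence of $`V_{ij}`$ concrete, the following Python sketch (our own illustration, not code from the calculation) builds a planar zigzag chain with alternating 1.35/1.46 Å bonds and 120° bond angles and evaluates the Ohno interaction on it; the choice of which bond type comes first is an arbitrary assumption.

```python
import numpy as np

# Planar zigzag polyene geometry: alternating double (1.35 A) and single
# (1.46 A) bonds, 120 degree bond angles -> bonds at +/-30 deg to the chain axis.
def chain_coords(n_sites):
    pos = [np.zeros(2)]
    for i in range(n_sites - 1):
        r = 1.35 if i % 2 == 0 else 1.46          # bond length in Angstroms
        ang = np.radians(30.0) * (-1) ** i        # zigzag: alternate +/-30 deg
        pos.append(pos[-1] + r * np.array([np.cos(ang), np.sin(ang)]))
    return np.array(pos)

def ohno(U, r):
    """Ohno interaction in eV for U in eV and r in Angstroms."""
    return U / np.sqrt(1.0 + (U * r / 14.397) ** 2)

U = 10.06
pos = chain_coords(6)
r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
V = np.where(r > 0, ohno(U, r), 0.0)   # V_ij for i != j; on-site U handled separately
print(np.round(V, 3))
```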
The essential approach we adopt is an extension of the local Hilbert space reduction of ref. for a representative repeat unit, namely two lattice sites. Once a repeat unit Hilbert space is optimised, it is then augmented with the system block in the standard finite lattice algorithm . Since the classical lattice geometry of excited states changes as the chain length increases, there is no a priori reason to suppose that the optimal repeat-unit electron-phonon basis for the shortest chain is appropriate for longer chains. Thus, it is generally necessary to perform in situ optimisation, i.e. a repeat unit Hilbert space is re-optimised when it forms part of the target chain size. Generally, we expect in situ optimisation to be necessary whenever the short scale properties are modified by the long scale properties.
We now outline the procedure in more detail. We begin with a six site lattice, composed of three repeat units, and optimise the repeat unit electron-phonon Hilbert space. This is done by retaining the optimised states and ‘folding-in’ some ‘bare’ electron-phonon states (typically $`16`$). Once the full electron-phonon basis has been swept through for a particular repeat unit, the same procedure is applied to the next repeat unit until convergence is achieved. Next, two repeat units are augmented to form a four site block for the next chain size, i.e. $`10`$ sites. The optimised basis for each repeat unit is retained. For $`10`$ sites and greater, each repeat unit is re-optimised (including the end units) by sweeping through the electron-phonon basis on the first finite lattice sweep. (In situ optimisations in subsequent finite lattice sweeps were found to be unnecessary.) During the in situ optimisation only a few states (typically $`50`$) are retained for the environment blocks. After completing the sweep the optimised states of the repeat unit are retained for augmentation with the left hand block. Typically, $`150`$ states are used for the system and environment blocks during augmentation.
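The local basis optimisation underlying this procedure can be illustrated with a minimal toy model (our own construction, unrelated to the production DMRG code): exact diagonalisation of a two-site, one-electron model with Holstein-type phonon coupling, followed by diagonalisation of the reduced density matrix of one site's phonon space. The fast decay of its eigenvalues is what justifies retaining only a small number of optimised phonon states; the parameter values below are arbitrary.

```python
import numpy as np

nph = 8                       # bare phonon states kept per site
w, g0, t0 = 0.2, 0.1, 2.5     # phonon energy, e-ph coupling, hopping (eV)

I = np.eye(nph)
b = np.diag(np.sqrt(np.arange(1, nph)), 1)      # phonon annihilation operator
nb = b.T @ b

# One spinless electron on two sites; basis = (site) x (ph1) x (ph2).
sx = np.array([[0.0, 1.0], [1.0, 0.0]])         # hopping between the two sites
p1 = np.diag([1.0, 0.0])                        # electron density on site 1
p2 = np.diag([0.0, 1.0])                        # electron density on site 2
x = b + b.T                                     # phonon displacement

H = (-t0 * np.kron(sx, np.kron(I, I))
     + w * np.kron(np.eye(2), np.kron(nb, I) + np.kron(I, nb))
     + g0 * (np.kron(p1, np.kron(x, I)) + np.kron(p2, np.kron(I, x))))

evals, evecs = np.linalg.eigh(H)
psi = evecs[:, 0].reshape(2, nph, nph)          # ground state

# Reduced density matrix of the phonon space on site 1.
rho1 = np.einsum('apq,arq->pr', psi, psi)
occ = np.sort(np.linalg.eigvalsh(rho1))[::-1]
print(occ)   # eigenvalues decay fast -> a few optimised states suffice
```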
A key goal of this work is to study excited states, which we do by exploiting the particle-hole (Ĵ) and spin-flip (P̂) symmetries of Eq. (1) . The inversion symmetry is measured at the middle of a finite lattice sweep. We have checked that setting $`J=+1`$ and $`P=+1`$ targets the $`1^1A_g^+`$ state, setting $`J=1`$ and $`P=+1`$ targets the $`1^1B_u^{}`$ state, and setting $`J=+1`$ and $`P=1`$ targets the $`1^3B_u^+`$ state.
We now turn to the convergence tests. We first establish convergence with respect to the number of optimised states per repeat unit. Table I shows the ground state and the $`1^1B_u^{}`$ transition energies for the six site chain for a maximum number of two and three bare phonons per site. We see that with $`64`$ states the transition energy has converged to within $`0.001`$ eV. Next, we consider the ground state and the $`1^1B_u^{}`$ transition energies as a function of the maximum number of bare phonons per site, as shown in table II. We see that the transition energy has essentially converged to $`0.001`$ eV with five phonons per site, and to within $`0.02`$ eV with two phonons (which is better than experimental accuracy). The converged $`1^1B_u^{}`$ excitation energy of $`4.62`$ eV is very close to the classical result of $`4.65`$ eV. Notice also, that the average phonon occupation per site is ca. $`0.2`$. Finally, we consider the convergence with super block Hilbert space size for $`18`$ sites. The convergence of the ground state energy is reasonable for up to $`180000`$ states, and the transition energy has converged to better than $`0.01`$ eV.
Next, we consider in what way quantising the lattice degrees of freedom leads to deviations from the adiabatic approximation. We calculate the classical phonon displacement, $`q_i=(\hbar \omega /K)^{1/2}\langle B_i\rangle `$ , the bond length distortion and the root mean square fluctuations in the bond length. Fig. 1 shows the staggered bond length changes in the ground state of an 18 site chain for up to three phonons per site. Also shown is the classical result. In the middle of the chain the phonon calculation is close to the classical result of a bond length distortion of ca. $`0.05\AA `$. However, towards the end of the chain the phonon calculation predicts a somewhat larger distortion. We find that in the limit of long chains the relative bond length distortion is ca. 0.9, almost independent of the number of phonons per site, and close to previous theoretical and experimental estimates. We note that the bond length fluctuations do not reduce the average bond length change for the linear polyenes considered here. This is because the dimerised ground state is not degenerate with respect to the state with the bond lengths reversed (in contrast to cyclic polyenes) , and thus there is no quantum mechanical tunnelling between the two dimerised states.
Lattice fluctuations do, however, de-pin the self-trapped solitonic structures of the excited states. Fig. 2 shows the triplet state transition energy as a function of inverse chain length for up to three phonons per site, and for the classical approximation. The energy in the classical approximation deviates markedly from a $`1/N`$ behaviour for chain lengths greater than $`20`$ sites, as shown in Fig. 2 and especially in ref. . We interpret this as a result of the electronic wavefunction being trapped by the lattice structure. The phonon calculations have essentially converged by three phonons per site. In contrast to the classical result, the converged phonon calculation shows a much weaker deviation from $`1/N`$ behaviour, leading to an expected correction of a few tenths of an eV in the infinite chain limit.
The solitonic structure of the triplet state supports this de-pinning hypothesis. Figs. 3(a) and (b) show the soliton structures for $`18`$ and $`30`$ sites, respectively. The position of the defect, at roughly the fourth bond from the center, is roughly the same for both chain lengths, and for both the classical and phonon calculations. We expect this, as its position is determined by the electronic component of the wavefunction . However, we can see that for the classical calculation the soliton width is virtually the same for both chain lengths (see also Fig. 4 of ), whereas for the phonon calculation the soliton width is greater in the longer chain. This indicates that the coupled electronic and lattice fluctuations lead to an increased delocalisation of the wavefunction, and hence to a lower energy. The $`2^1A_g^+`$ state is a bound state of two triplets, and it too is self-trapped by the lattice. Although we cannot target this state in the current calculation, the above discussion indicates that it will also be de-pinned by lattice fluctuations.
Finally, we consider the optically allowed excitonic ($`1^1B_u^{-}`$) state. According to the adiabatic approximation , this state creates a shallow polaronic distortion of the lattice, with self-trapping only becoming important for chain lengths longer than ca. $`40`$ sites. Thus, we would not expect lattice fluctuations to play a significant role for the shorter chains which we have considered so far. This is confirmed by the excitation energies shown in Table IV, which indicate that the energies calculated with the quantised lattice lie within $`0.1`$ eV of the classical result, and by Fig. 4, which shows that the quantum and classical polaronic structures are virtually identical.
In conclusion, an extended DMRG method has been applied to an interacting electron-phonon model of polyenes. Quantum lattice fluctuations are shown to play an important role in the de-pinning of the self-trapped excited states, leading to corrections to the adiabatic approximation, and to an expected reduction of the transition energies of a few tenths of an eV for long chains. Thus, a full quantum mechanical treatment of the Pariser-Parr-Pople-Peierls model gives remarkably accurate predictions for the excited state energies of polyenes.
M. Yu. L. was supported by the EPSRC (U.K.) (GR/K86343).
# Controlling decoherence of a two-level-atom in a lossy cavity
## I Introduction
The idea of controlling the coherent dynamics of a quantum system by an external time-dependent force has found widespread experimental and theoretical interest in many areas of physics (for reviews see ). It is, e.g., a commonly used tool to manipulate trapped atoms in quantum optics, as well as to control chemical reactions by a strong laser field . In the context of quantum optics, it has been demonstrated experimentally that a frequency-modulated excitation of a two-level atom by use of a microwave field driving transitions between two Rydberg Stark states of potassium significantly modifies the time evolution of the system. In the context of tunneling systems it has also been demonstrated that it is in principle possible to completely suppress the coherent tunneling of an initially localized wave packet in a double-well potential by an external, suitably designed time-periodic cw-perturbation (coherent destruction of tunneling) .
However, real quantum systems are always in contact with their environment. The coherent dynamics is then usually destroyed by the influence of the large number of environmental degrees of freedom. Not only is the phase of the quantum system disturbed (decoherence), but energy exchange (dissipation) also takes place between the system under consideration and the environment. An example of such a system-bath interaction is the ensemble of electromagnetic field modes in a cavity, each of which is described as a quantum mechanical harmonic oscillator . Each mode interacts with an atom trapped in the cavity. On the other hand, the cavity modes themselves are not isolated from the macroscopic environment; as such they are more realistically described as damped quantum harmonic oscillators. A topic of fundamental interest is the decay of quantum superpositions of states. In ref. it is shown how quantum optical nonclassical states are highly sensitive to dissipation stemming from a zero-temperature heat bath. Experimental works studying decoherence systematically are rare. In ref. the decoherence of mesoscopic superpositions of field states in the cavity has been investigated. In a recent work, Wineland and collaborators demonstrated that the decoherence rate scales with the square of a quantity that describes the separation between two initial states. Moreover, Knight and co-workers proposed an experimental scheme to probe the decoherence of a macroscopic object.
In this spirit, the question arises to what extent it is possible to control the dynamics of a quantum system in the presence of decoherence and, moreover, whether the effect of decoherence can be minimized by an external time-dependent force, e.g., by a laser field . To achieve this goal, various approaches have been undertaken in recent years. (i) It has been shown that the effect of coherent destruction of tunneling (see above) can be used to slow down the relaxation of a quantum system to its asymptotic equilibrium . (ii) Moreover, a suitably tailored sequence of radio-frequency pulses (“quantum bang-bang” or “parity kicks” ) that repeatedly flip the state of a two-level atom may be used to suppress decoherence. (iii) The cavity-induced spontaneous emission of a two-level atom can be manipulated by a strong rf field which couples to the cavity mode . (iv) The manipulation of the system-bath interaction by a fast frequency modulation also results in slowing down decoherence and relaxation .
The objective of this work is to study the influence of a time-periodic driving field on the dynamics of a two-level atom. In the first part of this work (section II), we pursue the objective of “freezing” the coherent dynamics, i.e., we shall employ the effect known as coherent destruction of tunneling. Most importantly, we investigate this freezing phenomenon from the viewpoint of its dependence on different initial preparations.
In the second part of this work (section III), we do not elaborate further on the effect of coherent destruction of tunneling, but instead investigate the control of decoherence of a two-level atom placed in a lossy cavity. Our model consists of a two-state system which is coupled to a time-dependent periodic field. The driven two-state system interacts furthermore with one mode of the cavity having the frequency $`\mathrm{\Omega }`$. This mode is itself damped by the coupling to a bath of harmonic oscillators (lossy cavity). It is known that a Hamiltonian consisting of (1) a system part, (2) a harmonic oscillator with frequency $`\mathrm{\Omega }`$ that is coupled to the system, and (3) a bath of harmonic oscillators which are coupled to this very same harmonic oscillator can be mapped onto a Hamiltonian composed of the system part coupled to a harmonic bath with an effective spectral density. This effective spectral density possesses a Lorentzian-shaped peak at $`\mathrm{\Omega }`$. The completely isolated atom (no driving, no cavity) evolves in time in a coherent way according to the Schrödinger equation. It is this dynamics which we want to preserve and protect as far as possible from the decoherent influence of the environment. Our major finding is that a cw-control field can indeed be used to (i) reduce decoherence and (ii) restore to some extent the unperturbed, non-dissipative time-evolution.
## II The driven two-level atom
### A Floquet Formalism
To start we consider a Hamiltonian describing a two-level atom with the ground state $`|1\rangle `$ and an excited state $`|2\rangle `$. The energy levels are separated by the energy $`\hbar \mathrm{\Delta }_0`$. The atom with the transition dipole moment $`\mu `$ is driven within the long wavelength approximation by an external, time-dependent laser field of the form $`E(t)=E_0\mathrm{cos}(\omega _\mathrm{L}t)`$ with frequency $`\omega _\mathrm{L}`$ and amplitude $`E_0`$, yielding the driven quantum system
$$H(t)=\frac{\hbar }{2}[\mathrm{\Delta }_0\widehat{\sigma }_z+s(t)\widehat{\sigma }_x].$$
(1)
Here, the matrices $`\widehat{\sigma }_i,i=x,y,z`$ denote the Pauli spin matrices. The part involving $`s(t)=s\mathrm{cos}(\omega _\mathrm{L}t)`$ with $`s=2\mu E_0/\hbar `$ represents the time-dependent driving which couples to the transition dipole moment $`\mu `$ of the atom. Note that within this scaling the amplitude $`s`$ possesses the dimension of a frequency. The driven time evolution of the populations of the energy levels exhibits an oscillatory behaviour. For an initial preparation of the atom in the ground state and for resonant driving, i.e. $`\omega _\mathrm{L}=\mathrm{\Delta }_0`$, with $`s`$ not large, we can invoke the rotating wave approximation. The population of each state then oscillates between 0 and 1 with the Rabi frequency $`\mathrm{\Omega }_\mathrm{R}=s/2`$. Because the Hamiltonian (1) is periodic in time with the period $`𝒯=2\pi /\omega _\mathrm{L}`$, i.e., $`H(t+𝒯)=H(t)`$, we next apply the Floquet formalism to the general case away from resonance.
$$\{H(t)-\mathrm{i}\hbar \partial /\partial t\}|\psi (t)\rangle =0.$$
(2)
According to the Floquet theorem, there exist solutions to eq. (2) of the form
$$|\mathrm{\Psi }_\alpha (t)\rangle =\mathrm{exp}(-\mathrm{i}\epsilon _\alpha t/\hbar )|\mathrm{\Phi }_\alpha (t)\rangle ,$$
(3)
with $`\alpha =1,2`$. The periodic functions $`|\mathrm{\Phi }_\alpha (t)\rangle `$ are termed the Floquet modes and obey
$$|\mathrm{\Phi }_\alpha (t+𝒯)\rangle =|\mathrm{\Phi }_\alpha (t)\rangle .$$
(4)
Here, $`\epsilon _\alpha `$ is the so-called Floquet characteristic exponent or quasienergy, which is real-valued and unique up to multiples of $`\hbar \omega _\mathrm{L}`$. Upon substituting eq. (3) into the Schrödinger equation (2) one obtains the eigenvalue equation for the quasienergy $`\epsilon _\alpha `$
$$\mathcal{H}(t)|\mathrm{\Phi }_\alpha (t)\rangle =\epsilon _\alpha |\mathrm{\Phi }_\alpha (t)\rangle $$
(5)
with the Hermitian operator
$$\mathcal{H}(t)\equiv H(t)-\mathrm{i}\hbar \partial /\partial t.$$
(6)
We stress that the Floquet modes
$$|\mathrm{\Phi }_\alpha ^{\mathrm{\prime }}(t)\rangle =|\mathrm{\Phi }_\alpha (t)\rangle \mathrm{exp}(\mathrm{i}n\omega _\mathrm{L}t)\equiv |\mathrm{\Phi }_{\alpha n}(t)\rangle $$
(7)
with $`n`$ being an integer, $`n=0,\pm 1,\pm 2,\mathrm{\dots }`$, yield equivalent solutions to eq. (3), but with the shifted quasienergy
$$\epsilon _\alpha \to \epsilon _\alpha ^{\mathrm{\prime }}=\epsilon _\alpha +n\hbar \omega _\mathrm{L}\equiv \epsilon _{\alpha n}.$$
(8)
Therefore, the index $`\alpha `$ corresponds to a whole class of solutions indexed by $`\alpha ^{\mathrm{\prime }}=(\alpha ,n)`$. The eigenvalues $`\{\epsilon _\alpha \}`$ can thus be mapped into a first Brillouin zone obeying $`-\hbar \omega _\mathrm{L}/2\le \epsilon <\hbar \omega _\mathrm{L}/2`$. It is clear that for our choice of the external driving force, i.e. $`s(t)=s\mathrm{cos}\omega _\mathrm{L}t`$, the quasienergies are functions of the driving amplitude $`s`$ and the driving frequency $`\omega _\mathrm{L}`$. For adiabatically vanishing external driving they merge into the eigenvalues of the time-independent part of the Hamiltonian (1), i.e.,
$$\epsilon _{\alpha n}(s,\omega _\mathrm{L})\stackrel{s\to 0}{\longrightarrow }\mp \hbar \mathrm{\Delta }_0/2+n\hbar \omega _\mathrm{L},$$
(9)
where the negative (positive) sign corresponds to $`\alpha =1`$ ($`\alpha =2`$). The Floquet modes, correspondingly, turn into the eigenfunctions $`|\alpha \rangle `$ multiplied by an additional phase factor, i.e.,
$$|\mathrm{\Phi }_{\alpha n}(t)\rangle \stackrel{s\to 0}{\longrightarrow }|\alpha \rangle \mathrm{exp}(\mathrm{i}\omega _\mathrm{L}nt).$$
(10)
For a finite driving strength $`s\ne 0`$, the determination of the quasienergies $`\epsilon _\alpha `$ requires the use of numerical methods. The interested reader is referred in this context to the literature . However, we here state without proof that in the high-frequency regime $`\mathrm{\Delta }_0\ll \mathrm{max}[\omega _\mathrm{L},(s\omega _\mathrm{L})^{1/2}]`$ the difference between the two quasienergies is given by
$$\epsilon _{2,1}-\epsilon _{1,1}=\hbar \mathrm{\Delta }_0J_0(s/\omega _\mathrm{L}),$$
(11)
where $`J_0`$ denotes the zeroth-order Bessel function of the first kind.
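Eq. (11) is easy to check numerically: the eigenvalues of the propagator over one driving period are $`\mathrm{exp}(-\mathrm{i}\epsilon _\alpha 𝒯/\hbar )`$, so the quasienergies follow from a single integration of the Schrödinger equation. The sketch below is our own illustration, with $`\hbar =\mathrm{\Delta }_0=1`$ and the driving parameters used later in this section; it compares the numerically obtained splitting with the leading-order estimate $`\hbar \mathrm{\Delta }_0J_0(s/\omega _\mathrm{L})`$, both of which are strongly suppressed near the first zero of $`J_0`$.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import j0

D0 = 1.0                     # Delta_0 (units with hbar = 1)
wL = 50.0 * D0               # driving frequency
s = 120.241 * D0             # driving amplitude: s/wL near the first zero of J_0
T = 2 * np.pi / wL
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def rhs(t, y):
    U = y.reshape(2, 2)
    H = 0.5 * (D0 * sz + s * np.cos(wL * t) * sx)   # Hamiltonian (1)
    return (-1j * H @ U).ravel()

# Propagate the 2x2 time-evolution operator over one driving period.
sol = solve_ivp(rhs, (0.0, T), np.eye(2, dtype=complex).ravel(),
                rtol=1e-10, atol=1e-12)
U_T = sol.y[:, -1].reshape(2, 2)
eps = np.sort(np.angle(np.linalg.eigvals(U_T)) * (-1.0 / T))  # quasienergies mod wL

# Both numbers nearly vanish here; eq. (11) holds to leading order in D0/wL.
print("numerical splitting:", eps[1] - eps[0])
print("hbar*D0*J0(s/wL):   ", abs(D0 * j0(s / wL)))
```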
### B Freezing the coherent dynamics of a driven two-level system
Eq. (11) implies a most interesting consequence for the driven two-level system : if one chooses the driving parameters $`s`$ and $`\omega _\mathrm{L}`$ in such a way that the argument of the Bessel function is at a zero of the Bessel function, the splitting between the quasienergies vanishes. Possible transitions between the Floquet states are then at most induced by the remaining periodic time-dependent parts of the corresponding Floquet modes $`|\mathrm{\Phi }_\alpha (t)\rangle `$. This effect was discovered in the context of tunneling systems. There, a wave packet, being an equally weighted superposition of the symmetric and antisymmetric ground states, is initially localized on one side of a double-well potential. By applying an external, suitably tailored periodic field, the wave packet can be stabilized and prevented from coherently tunneling back and forth between the two wells, i.e., one finds coherent destruction of tunneling (CDT) . We emphasize here that the crossing of two tunneling-related quasienergy levels yields a necessary (but not sufficient) criterion for the suppression of coherent tunneling .
The challenge we want to address next is as follows: how does the driven dynamics of a two-level atom prepared in some arbitrary initial state evolve when the driving parameters obey the CDT-condition (11)? The system dynamics can be described by its density operator $`\widehat{\rho }(t)`$, which is a $`2\times 2`$ matrix, i.e.,
$$\widehat{\rho }(t)=\widehat{I}/2+\sum _{i=x,y,z}\sigma _i(t)\widehat{\sigma }_i/2,$$
(12)
where the expectation values $`\sigma _i(t):=\mathrm{Tr}\{\widehat{\rho }(t)\widehat{\sigma }_i\},i=x,y,z`$ are the dynamical quantities of interest. $`\widehat{I}`$ denotes the unit matrix, and $`\sigma _x(t)`$ and $`\sigma _y(t)`$ are related to the coherences (the off-diagonal elements) of $`\widehat{\rho }(t)`$, while $`\sigma _z(t)`$ is the population difference between the two energy eigenstates $`|\alpha \rangle `$. This implies that the state of the quantum system at time $`t`$ is fully determined by the knowledge of the three expectation values $`\sigma _i(t)`$.
To determine the state of the driven two-level system at time $`t`$, we consider the Heisenberg equation of motion for the density matrix. Using the commutation relations for the $`\widehat{\sigma }_i`$, we arrive at the equation of motion for the expectation values $`\sigma _i(t)`$ in (12), i.e.,
$`\dot{\sigma }_x(t)`$ $`=`$ $`-\mathrm{\Delta }_0\sigma _y(t),`$ (13)
$`\dot{\sigma }_y(t)`$ $`=`$ $`\mathrm{\Delta }_0\sigma _x(t)-s(t)\sigma _z(t),`$ (14)
$`\dot{\sigma }_z(t)`$ $`=`$ $`s(t)\sigma _y(t).`$ (15)
To study the dependence of the effect of coherent destruction of tunneling on the initial preparation we first choose as initial state an equally weighted coherent superposition of the two unperturbed energy eigenstates, i.e.,
$$|\mathrm{\Psi }(t=0)\rangle =\frac{1}{\sqrt{2}}(|1\rangle +|2\rangle ),$$
(16)
corresponding to $`\sigma _x(t=0)=1,\sigma _y(t=0)=\sigma _z(t=0)=0`$. We solve the set of coupled differential equations (15) numerically by a standard fourth order Runge-Kutta integration algorithm with adaptive step-size control. In Fig. 1a, the time-dependence of the three expectation values is depicted. The driving parameters are chosen such that the condition (11) is fulfilled: in doing so we use $`\omega _\mathrm{L}=50\mathrm{\Delta }_0`$ and $`s=120.241\mathrm{\Delta }_0`$. Surprisingly, all three expectation values $`\sigma _i(t)`$ can be brought simultaneously to an almost perfect standstill!
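For readers who wish to reproduce Fig. 1a, the following sketch (our own reimplementation, using scipy's adaptive Runge-Kutta integrator in place of a hand-written fourth-order scheme) integrates eqs. (13)-(15) with the CDT parameters quoted above and the superposition initial condition (16).

```python
import numpy as np
from scipy.integrate import solve_ivp

D0 = 1.0
wL, s = 50.0 * D0, 120.241 * D0      # CDT parameters: s/wL at the first zero of J_0

def bloch(t, y):
    sx_, sy_, sz_ = y
    st = s * np.cos(wL * t)
    return [-D0 * sy_,               # eq. (13)
            D0 * sx_ - st * sz_,     # eq. (14)
            st * sy_]                # eq. (15)

t_end = 10.0 * 2 * np.pi / D0        # many periods of the bare two-level dynamics
sol = solve_ivp(bloch, (0.0, t_end), [1.0, 0.0, 0.0],   # initial state (16)
                rtol=1e-9, atol=1e-11)

print("final (sx, sy, sz):", sol.y[:, -1])  # remains near (1, 0, 0): CDT freezing
```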
Next, we choose the ground state as initial state, i.e., we use $`|\mathrm{\Psi }(t=0)\rangle =|1\rangle `$. This corresponds to $`\sigma _x(t=0)=\sigma _y(t=0)=0,\sigma _z(t=0)=-1`$. The result is depicted in Fig. 1b. Applying to the so-prepared two-level system a laser field obeying the CDT-condition (11), we find that the y-component $`\sigma _y(t)`$ and the z-component $`\sigma _z(t)`$ exhibit strong oscillations. These oscillations follow from the numerically evaluated Floquet theory for the driven two-level system, and are not described by the Rabi oscillations predicted by a rotating wave approximation; the latter approximation is strongly violated for our chosen set of driving parameters. In contrast, $`\sigma _x(t)`$ can be stabilized around the initial value of zero. This finding is in accordance with the CDT phenomenon: it reflects the fact that the corresponding two equally weighted (”left” and ”right”) localized parts of the ground state wave function of a double-well potential, as represented within a localized representation, can each be stabilized too.
This CDT effect opens the doorway to manipulating the influence of an environment on a quantum system. It is known that the coherent destruction of tunneling survives to some extent in the presence of a coupling to the environment. Certainly, the system will relax in the presence of an environment; however, as demonstrated in , the relaxation process can be slowed down considerably in the presence of a CDT field.
In view of using differing initial preparations, the following remark should be made. From the viewpoint of stabilizing the state of a qubit (characterized by a quantum mechanical two-level system) in a quantum information processor , it is of foremost interest to stabilize a coherent superposition of the two states of the qubit. Thus, our first choice (16) is of relevance in the context of quantum computing. Moreover, fundamental questions concerning the decoherence of superposition states arise for the physics that occurs when one crosses the interface between the classical and quantum world, and vice versa .
## III Control of decoherence for a two-level atom
In this section we shall study the influence of an applied cw-control field for reducing decoherence of a two-level atom placed in a lossy cavity.
### A Driven two-level atom in a lossy cavity
To start we consider a two-level atom in a dissipative environment, e.g., a lossy cavity wherein the leakage of photons damps the radiation field. Additionally, the atom may be manipulated by a time-dependent external field like a laser beam. In our model, the driven two-level atom is represented by the Hamiltonian (1). It is coupled to one mode of the cavity which is described by one harmonic oscillator with frequency $`\mathrm{\Omega }`$, characterized by the annihilation and creation operators $`\widehat{B}`$ and $`\widehat{B}^{}`$ which fulfill the usual commutation relations for bosonic field operators. The coupling constant is denoted by $`g`$ and has the dimension of a frequency. This cavity mode is damped by a bilinear coupling to a bath of harmonic oscillators of frequencies $`\omega _i`$. They are similarly described by bosonic annihilation and creation operators $`\widehat{b}_i`$ and $`\widehat{b}_i^{}`$. The coupling constants of the cavity mode to the harmonic bath are given by $`\kappa _i`$ and have the dimension of a frequency. The total system-bath Hamiltonian is therefore written as
$`H(t)`$ $`=`$ $`\frac{\hbar }{2}[\mathrm{\Delta }_0\widehat{\sigma }_z+s(t)\widehat{\sigma }_x]`$ (19)
$`+\hbar \mathrm{\Omega }(\widehat{B}^{\dagger }\widehat{B}+\frac{1}{2})+\hbar g(\widehat{B}^{\dagger }+\widehat{B})\widehat{\sigma }_x`$
$`+\sum _{i=1}^{N}\hbar \omega _i(\widehat{b}_i^{\dagger }\widehat{b}_i+\frac{1}{2})+\hbar (\widehat{B}^{\dagger }+\widehat{B})\sum _{i=1}^{N}\kappa _i(\widehat{b}_i^{\dagger }+\widehat{b}_i).`$
The influence of the bath on the two-level atom plus cavity mode is fully characterized by the spectral density
$$J(\omega )=2\pi \sum _{i=1}^{N}\kappa _i^2\delta (\omega -\omega _i).$$
(20)
We let the number of bath modes go to infinity ($`N\to \mathrm{\infty }`$) and choose an Ohmic spectral density for the bath oscillators with an exponential cut-off at some large frequency $`\omega _c\gg \mathrm{\Delta }_0,\omega _\mathrm{L},\mathrm{\Omega }`$, i.e.,
$$J(\omega )=\frac{2\mathrm{\Gamma }}{\mathrm{\Omega }}\omega \mathrm{exp}(-\omega /\omega _c),$$
(21)
Here, we have introduced the damping constant $`\mathrm{\Gamma }`$, which is related to the quality factor of the cavity. Since the cavity mode as well as the bath oscillators are described by harmonic oscillators, we follow the approach of ref. and map the Hamiltonian (19) onto a Hamiltonian where the central system, i.e., the two-level atom, is bilinearly coupled to a bath of mutually non-interacting harmonic oscillators with an effective spectral density $`J_{\mathrm{eff}}(\omega )`$. Upon letting the cut-off frequency go to infinity, i.e., $`\omega _c\to \mathrm{\infty }`$, this effective spectral density emerges as
$$J_{\mathrm{eff}}(\omega )=\frac{16\mathrm{\Gamma }}{\mathrm{\Omega }}\frac{g^2\omega \mathrm{\Omega }^2}{(\mathrm{\Omega }^2-\omega ^2)^2+4\omega ^2\mathrm{\Gamma }^2}.$$
(22)
For small frequencies $`\omega `$, it increases linearly, as does the original Ohmic spectral density $`J(\omega )`$. However, it has a Lorentzian-shaped peak at $`\omega =\mathrm{\Omega }`$ with a line width $`\mathrm{\Gamma }<\mathrm{\Omega }`$.
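A quick numerical check of these statements (a sketch with arbitrary illustrative parameter values, not those of any fit in this paper) locates the peak of $`J_{\mathrm{eff}}`$ and its full width at half maximum:

```python
import numpy as np

Om, Gam, g = 1.0, 0.1, 0.05          # illustrative values with Gamma < Omega

def J_eff(w):
    # effective spectral density, eq. (22)
    return 16 * Gam / Om * g**2 * w * Om**2 / ((Om**2 - w**2)**2 + 4 * w**2 * Gam**2)

w = np.linspace(1e-4, 3 * Om, 200001)
Jw = J_eff(w)
w_peak = w[np.argmax(Jw)]
above = w[Jw >= Jw.max() / 2]        # region above half maximum
print(w_peak, above[-1] - above[0])  # peak near Omega, FWHM near 2*Gamma
```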
In the following section, we make extensive use of the bath autocorrelation function $`L(t)=L^{\mathrm{\prime }}(t)+\mathrm{i}L^{\mathrm{\prime \prime }}(t)`$, which is obtained in terms of the effective spectral density $`J_{\mathrm{eff}}(\omega )`$, i.e.
$$L(t)=\frac{1}{\pi }\int _0^{\mathrm{\infty }}𝑑\omega J_{\mathrm{eff}}(\omega )\left[\text{coth}\left(\frac{\hbar \omega }{2k_\mathrm{B}T}\right)\mathrm{cos}(\omega t)-i\mathrm{sin}(\omega t)\right].$$
(23)
All our further considerations treat the case of a bath at zero temperature, i.e., $`T=0`$. In this limit, and for our choice of the effective spectral density (22), we obtain for the real and imaginary parts the analytical results
$`\mathcal{L}^{\prime }(t)`$ $`=`$ $`{\displaystyle \frac{16\mathrm{\Gamma }}{\pi \mathrm{\Omega }}}g^2\mathrm{\Omega }^2\left[{\displaystyle \frac{\pi }{4}}{\displaystyle \frac{1}{\mathrm{\Gamma }\sqrt{\mathrm{\Omega }^2-\mathrm{\Gamma }^2}}}e^{-\mathrm{\Gamma }t}\mathrm{cos}(\sqrt{\mathrm{\Omega }^2-\mathrm{\Gamma }^2}\,t)-{\displaystyle \int _0^{\mathrm{\infty }}}dy\,{\displaystyle \frac{y\,e^{-yt}}{(y^2+\mathrm{\Omega }^2)^2-4y^2\mathrm{\Gamma }^2}}\right],`$ (24)
$`\mathcal{L}^{\prime \prime }(t)`$ $`=`$ $`-4g^2{\displaystyle \frac{\mathrm{\Omega }}{\sqrt{\mathrm{\Omega }^2-\mathrm{\Gamma }^2}}}e^{-\mathrm{\Gamma }t}\mathrm{sin}(\sqrt{\mathrm{\Omega }^2-\mathrm{\Gamma }^2}\,t).`$ (25)
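The damped-sine form of (25) is easy to cross-check by direct quadrature of eq. (23) at $`T=0`$; the sketch below does this with standard scipy tools and the same illustrative parameters as above:

```python
import numpy as np
from scipy.integrate import quad

# Cross-check of the analytical T=0 result (25) against a direct quadrature
# of eq. (23): L''(t) = -(1/pi) * Int_0^inf J_eff(w) sin(w t) dw.
Omega, Gamma, g = 1.0, 0.1, 0.05

def J_eff(w):
    return (16 * Gamma / Omega) * g**2 * w * Omega**2 / ((Omega**2 - w**2)**2 + 4 * w**2 * Gamma**2)

def L2_numeric(t):
    # oscillatory weight handled by quad's 'sin' rule; tail above w=60 is negligible
    val, _ = quad(J_eff, 0.0, 60.0, weight='sin', wvar=t, limit=400)
    return -val / np.pi

def L2_analytic(t):
    Ot = np.sqrt(Omega**2 - Gamma**2)
    return -4 * g**2 * (Omega / Ot) * np.exp(-Gamma * t) * np.sin(Ot * t)

for t in (1.0, 5.0, 20.0):
    print(t, L2_numeric(t), L2_analytic(t))   # agree to quadrature accuracy
```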
The quantity of interest is the reduced density matrix of the two-level system, which we denote – just as in the undamped case – by $`\widehat{\rho }(t)`$. It follows by tracing over the bath degrees of freedom in the full density operator $`\widehat{W}(t)`$ corresponding to the system-plus-bath Hamiltonian (19), i.e., $`\widehat{\rho }(t)=\mathrm{tr}_\mathrm{B}\widehat{W}(t)`$. As in the deterministic case of eq. (12), $`\widehat{\rho }(t)`$ is fully characterized by the expectation values $`\sigma _i(t),i=x,y,z`$. We shall determine their corresponding equations of motion next.
### B Bloch-Redfield Formalism
To deal with quantum dissipative systems, several techniques have been developed. A very efficient numerical algorithm for general quantum systems with a discrete eigenvalue spectrum has been developed by Makri and Makarov within the real-time path-integral formalism. It has also been applied to spatially continuous tunneling systems in the presence of driving. Moreover, the real-time path-integral formalism has been used extensively to describe moderate-to-strong two-level system-bath interactions. Recently, the former scheme has been generalized to describe multi-level, driven vibrational and tunneling dynamics. At weak system-bath coupling, the Nakajima-Zwanzig projection operator theory provides a powerful tool to describe the corresponding reduced density matrix dynamics.
For the quantum optical problem at hand, the method of choice in the presence of a physically realistic weak system-bath coupling is the projection operator technique: in the Born approximation it yields the generalized master equation. This can be simplified further, without loss of accuracy to leading order in the (weak) coupling strength $`g`$, by invoking the Markovian approximation. For strong harmonic driving this program was formally developed long ago by Argyres and Kelley. Following this reasoning, we have recently derived for this case of a driven spin-boson problem with an arbitrary control field the explicit set of coupled, Bloch-Redfield type equations
$`\dot{\sigma }_x(t)`$ $`=`$ $`-\mathrm{\Delta }_0\sigma _y(t),`$ (26)
$`\dot{\sigma }_y(t)`$ $`=`$ $`\mathrm{\Delta }_0\sigma _x(t)-s(t)\sigma _z(t)-\mathrm{\Gamma }_1(t)\sigma _y(t)-\mathrm{\Gamma }_2(t)\sigma _x(t)-A_y(t),`$ (27)
$`\dot{\sigma }_z(t)`$ $`=`$ $`s(t)\sigma _y(t)-\mathrm{\Gamma }_1(t)\sigma _z(t)-\mathrm{\Gamma }_3(t)\sigma _x(t)-A_z(t).`$ (28)
The time-dependent rates $`\mathrm{\Gamma }_i(t)=\int _0^tdt^{\prime }\,\mathcal{L}^{\prime }(t-t^{\prime })b_i(t,t^{\prime })`$, together with the inhomogeneities $`A_y(t)=\mathrm{Re}F(t)`$, $`A_z(t)=\mathrm{Im}F(t)`$, with $`F(t)=(1/2)\int _0^tdt^{\prime }\,\mathcal{L}^{\prime \prime }(t-t^{\prime })[u^2(t,t^{\prime })-v^2(t,t^{\prime })]`$, determine the dissipative action of the thermal bath on the two-level atom. The functions $`\mathcal{L}^{\prime }`$ and $`\mathcal{L}^{\prime \prime }`$ denote the real and imaginary parts, respectively, of the correlation function $`\mathcal{L}`$ given in eqs. (24) and (25). The quantities $`u(t,t^{\prime })=\langle 1|\widehat{U}(t,t^{\prime })|1\rangle +\langle 2|\widehat{U}(t,t^{\prime })|1\rangle `$ and $`v(t,t^{\prime })=\langle 1|\widehat{U}(t,t^{\prime })|2\rangle +\langle 2|\widehat{U}(t,t^{\prime })|2\rangle `$ are sums of matrix elements of the time evolution operator $`\widehat{U}(t,t^{\prime })`$ of the isolated (i.e., $`g=0`$) driven two-level system. The functions $`b_i`$ read $`b_1=\mathrm{Re}\,uv^{\ast }`$, $`b_2=(1/2)\mathrm{Im}(u^2-v^2)`$, and $`b_3=(1/2)\mathrm{Re}(u^2-v^2)`$. Note that this set of equations is valid in the parameter regime $`g\ll \mathrm{\Delta }_0/2`$. One can demonstrate that for the undriven case, i.e., $`s=0`$, the analytic solution of eqs. (26)-(28) to first order in $`g`$ reproduces the analytical weak-damping path-integral results.
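To illustrate how eqs. (26)-(28) are integrated in practice, here is a minimal sketch. The exact rates require the driven propagator $`\widehat{U}(t,t^{\prime })`$, so the sketch keeps only the exact coherent part and inserts a single constant rate `gamma` as a purely phenomenological stand-in for the $`\mathrm{\Gamma }_i(t)`$, with the inhomogeneities dropped; the cosine form of $`s(t)`$ anticipates the cw driving used below:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Structure of eqs. (26)-(28): exact coherent part, with the time-dependent
# rates replaced by one constant 'gamma' (a crude stand-in, not the full
# Bloch-Redfield kernels) and the inhomogeneities A_y, A_z dropped.
Delta0 = 1.0
s_amp, w_L = 1.0, 1.0          # resonant cw driving, s(t) = s_amp*cos(w_L t)
gamma = 0.0                    # set e.g. 0.005 to mimic weak damping

def rhs(t, sig):
    sx, sy, sz = sig
    st = s_amp * np.cos(w_L * t)
    return [-Delta0 * sy,
            Delta0 * sx - st * sz - gamma * sy,
            st * sy - gamma * sz]

# initial superposition state of subsection III C: sigma_x(0) = 1
sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 0.0, 0.0], max_step=0.02)
print(sol.y[:, -1])            # (sigma_x, sigma_y, sigma_z) at Delta0*t = 100
```

With `gamma = 0` this reproduces the coherent cases discussed next; the full scheme would first tabulate $`\widehat{U}(t,t^{\prime })`$ on a time grid and build the $`\mathrm{\Gamma }_i(t)`$ and $`A_i(t)`$ from it.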
### C Controlling the decoherence of a quantum superposition of states
The idea of controlling the decohering influence of the environment on a quantum system by an external time-dependent field is demonstrated for the case of the two-level atom initially prepared in an equally weighted superposition of the two energy eigenstates, given by $`\sigma _x(t=0)=1,\sigma _y(t=0)=\sigma _z(t=0)=0`$. In doing so, we consider four different situations: (1) First, we look at the isolated two-level atom dynamics without driving and without coupling the atom to the lossy cavity mode; this case corresponds to setting $`s=0`$ and $`g=0`$. Case (2) is devoted to the driven two-level dynamics: we switch on a coherent driving cw-field but keep the system isolated from the bath, i.e., $`s\ne 0`$ and $`g=0`$. In case (3) we investigate how the undriven system dynamics relaxes in the presence of a dissipative coupling to the bath; we therefore set $`s=0`$ and $`g\ne 0,\mathrm{\Gamma }\ne 0`$. Finally, with case (4) we demonstrate how this decoherent dynamics can be manipulated with the help of an externally applied time-dependent control field and set $`s\ne 0,g\ne 0`$ and $`\mathrm{\Gamma }\ne 0`$.
In order to preserve the coherent evolution of the two-level atom and to protect it as far as possible from the decoherent influence of the environment, we choose the following control scheme, guided by the physics of a rotating-wave approximation for the driven system that most closely retains the unperturbed dynamics of an initial superposition state (16). The frequency and the amplitude of the driving field are taken to be in resonance with the level spacing of the two-level system, i.e., $`\omega _\mathrm{L}=\mathrm{\Delta }_0`$ and $`s=\mathrm{\Delta }_0`$, which corresponds to a moderately strong driving strength. This choice implies a ratio of the corresponding Rabi frequency to the driving strength of $`0.5`$, indicating that the rotating-wave approximation should already be used with caution. Note that under the CDT condition in (11), the field strength would assume an even larger value, $`s=2.4048\mathrm{\Delta }_0`$. For the strength of the coupling between the two-level atom and the cavity mode we assume $`g=0.05\mathrm{\Delta }_0`$; this value is consistent with the range of validity of the Bloch-Redfield formalism in the Born approximation (see above). The dissipative system-bath mechanism is specified as follows: the frequency of the cavity mode is chosen to be in resonance as well, i.e., $`\mathrm{\Omega }=\mathrm{\Delta }_0`$; by doing so, we in essence maximize the influence of the bath. For the line width of the cavity mode we set $`\mathrm{\Gamma }=0.1\mathrm{\Delta }_0`$. This rather large value mimics (on purpose) an extreme situation, because the line width in most realistic situations is much smaller; nevertheless, such smaller values would intensify our appealing finding of a driving-induced, enhanced recovery of coherence even more. Moreover, the temperature is always set to $`T=0`$.
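As a quick check of the quoted CDT field strength, the first zero of $`J_0`$ indeed sits at $`s/\omega _\mathrm{L}=2.4048`$:

```python
from scipy.special import j0, jn_zeros

# The CDT condition mentioned above: s/w_L at the first zero of J_0.
print(jn_zeros(0, 1)[0])   # 2.404826 -> s = 2.4048*Delta0 when w_L = Delta0
print(j0(2.4048))          # ~ 1e-5, effectively zero
```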
Our results are depicted in Figs. 2(a)-(c) and Figs. 3(a)-(c). Fig. 2a depicts the time evolution of the x-component $`\sigma _x(t)`$. The isolated two-level dynamics (dashed line) shows coherent oscillations between -1 and 1 at the frequency of the level spacing $`\mathrm{\Delta }_0`$. On top of this line one finds (as a barely visible dotted line) the results for the driven two-level dynamics. This good agreement follows also from the corresponding rotating-wave approximation, which for this preparation yields just the undriven result. The decoherence in the presence of a finite bath coupling ($`g=0.05\mathrm{\Delta }_0,\mathrm{\Gamma }=0.1\mathrm{\Delta }_0`$), see the dashed-dotted line, yields an oscillatory decay towards equilibrium, $`\sigma _x(t\to \mathrm{\infty })=0`$, whose envelope is made visible by the connecting solid line. Next we switch on the cw-laser control field. As a main result we find that the decoherence is considerably slowed down, following closely the isolated driven dynamics. This enhanced recovery of coherence for the dissipative driven dynamics is made visible to the eye by the connecting, weakly decaying and oscillating envelope. This surprising result is rooted in the following facts: the dissipative, non-driven dynamics experiences a most effective dissipation, due to the resonant coupling at $`\mathrm{\Omega }=\mathrm{\Delta }_0`$ of the two-level atom to a bath with the effective spectral density in (22), which peaks at $`\omega =\mathrm{\Omega }`$. In contrast, the strong driving dresses this level spacing and moves it out of resonance with the lossy cavity mode. This results in a considerable slowdown of the driven decoherence of $`\sigma _x(t)`$.
The decoherent dynamics of the $`y`$-component $`\sigma _y(t)`$ is qualitatively similar to $`\sigma _x(t)`$. It is depicted in Fig. 2b for the same choice of parameters.
The population difference $`\sigma _z(t)`$ is shown in Fig. 2c. For the isolated two-level dynamics, $`\sigma _z(t)`$ remains constant at zero (dashed line), since the system is in an equally weighted superposition of two eigenstates, yielding an obvious zero population difference. In the presence of the cw-laser control field, the driven dynamics (dotted line) yields a finite oscillation of the population difference. This deviation from zero also reflects the deviation from the corresponding rotating-wave solution (which is identically zero for this preparation). Nevertheless, this driven dynamics still exhibits an approximate periodicity that closely coincides with the Rabi value $`\mathrm{\Omega }_R=\mathrm{\Delta }_0/2`$.
The undriven, dissipative relaxation to equilibrium (dashed-dotted line) proceeds at temperature $`T=0`$ almost completely towards the ground state, with correspondingly maximal population difference $`\sigma _z(t\to \mathrm{\infty })\approx -1`$. Due to the coupling to the cavity mode performing zero-point oscillations, the value of $`-1`$ is not fully reached. The driven, dissipative relaxation (solid line) to the time-periodic asymptotic state exhibits oscillations around zero, initially (up to $`\mathrm{\Delta }_0t\approx 50`$) following closely the driven coherent dynamics. By virtue of Floquet theory for the long-time limit of the time-periodic generalized Bloch-Redfield equations (26)-(28), this asymptotic periodicity matches in the long-time limit the frequency of the driving, i.e., $`\omega _L=\mathrm{\Delta }_0`$ (not depicted).
### D Controlling the decoherence from the atom ground state
To answer the question whether the proposed control scheme works as well in the opposite limit of an initial state which is an energy eigenstate, we next choose the ground state as the initial preparation, i.e., we use $`\sigma _x(t=0)=\sigma _y(t=0)=0,\sigma _z(t=0)=-1`$. The remaining parameters are taken to be the same as in the previous subsection III C.
Fig. 3a shows the decoherent dynamics of $`\sigma _x(t)`$. Since the chosen initial state is an eigenstate of the isolated two-level system, no dynamics is exhibited (note the filled squares on the line at zero in the figure). This situation remains unaltered in the presence of a dissipative coupling of the quantum system, as indicated by the asterisks on the line at zero: at zero temperature the system at weak dissipation remains essentially in its ground state. Upon switching on the driving with no coupling to the lossy cavity present, the driven two-level dynamics exhibits a Rabi-like quasiperiodic, oscillatory behaviour (dotted line). This nonperiodic behaviour is rooted in the deviation of the full Floquet dynamics from the rotating-wave prediction; with our strong driving strength we cannot a priori expect good agreement with the corresponding rotating-wave approximation. The coupling to the lossy cavity mode damps this quasiperiodic behaviour, which for short times follows the driven isolated dynamics (see solid line), before settling down to asymptotic, long-time oscillations at the frequency of the driving $`\omega _\mathrm{L}=\mathrm{\Delta }_0`$ with a finite, but strongly reduced, amplitude (not depicted).
The decoherent dynamics of the $`y`$-component $`\sigma _y(t)`$ is again qualitatively similar to that of $`\sigma _x(t)`$. It is presented in Fig. 3b for the same set of coupling and driving parameters.
Finally, the time evolution of the population difference $`\sigma _z(t)`$ is depicted in Fig. 3c. Clearly, the isolated dynamics from a prepared initial ground state remains constant at $`\sigma _z(t)=-1`$ (filled squares). The driven dynamics of the two-level system exhibits strong non-detuned Rabi oscillations at frequency $`\mathrm{\Omega }_R=\mathrm{\Delta }_0/2`$ between -1 and 1 (dotted line). In this case the rotating-wave prediction (not depicted) actually yields surprisingly good qualitative agreement with the exact dynamics.
The case of no driving ($`s=0`$) but with a coupling to the bath ($`g=0.05\mathrm{\Delta }_0,\mathrm{\Gamma }=0.1\mathrm{\Delta }_0`$) again shows a trivial dynamics: in this case of weak dissipation the system relaxes with a small relaxation rate towards a constant value of slightly reduced magnitude close to $`-1`$ (indicated by the asterisks).
The case with resonant driving ($`s=\mathrm{\Delta }_0,\omega _\mathrm{L}=\mathrm{\Delta }_0`$) switched on and simultaneous coupling to the lossy cavity mode (with $`g=0.05\mathrm{\Delta }_0,\mathrm{\Gamma }=0.1\mathrm{\Delta }_0`$) exhibits damped Rabi oscillations (solid line); it eventually settles down in the asymptotic long-time limit to periodic oscillations at twice the Rabi frequency, with an amplitude smaller than 1 (not depicted).
## IV Conclusions
In this work we have investigated the possibility of controlling the time evolution of a two-level atom by time-dependent external, periodic control forces. We have demonstrated that the coherent dynamics of the system can be brought to an almost perfect standstill by choosing the ratio of driving amplitude $`s`$ and driving frequency $`\omega _\mathrm{L}`$ at a zero of the Bessel function $`J_0(s/\omega _\mathrm{L})`$ (coherent destruction of tunneling). For an initially prepared quantum superposition of states, all three components $`\sigma _i,i=x,y,z`$, and therefore the entire density matrix $`\widehat{\rho }`$, can be locked simultaneously. For an initially prepared ground state, the x-component $`\sigma _x`$ can be stabilized; the other two components $`\sigma _y`$ and $`\sigma _z`$, however, display strong (non-Rabi) oscillations.
In the presence of decoherence in a lossy cavity we have illustrated that the atomic states can be dressed by a time-dependent force which moves the atom and the cavity mode out of resonance. As a consequence, decoherence becomes strongly suppressed. We have illustrated this effect for two different initial preparations of the atom: (i) for a quantum superposition of states we show that the decoherence can be suppressed efficiently; (ii) the second preparation uses the ground-state wave function of the isolated system. In that case the decoherence may also be slowed down, but the decohering dynamics never again approaches the initial state.
These findings put across the idea that the method can be used to bring the state of the atom back close to its initial preparation. For the case (i) of a superposition state as the initial state, the decoherent dynamics of the $`x`$\- and $`y`$-components $`\sigma _x,\sigma _y`$ is similar to the undriven dynamics of the isolated two-level system (qubit). Even the $`z`$-component $`\sigma _z`$ of the driven dissipative dynamics matches the undriven, non-dissipative dynamics at distinct instants of time. For the second case (ii) of the ground state as the initial state this idea, however, seems to fail for the $`z`$-component $`\sigma _z`$.
To summarize, our proposed scheme for controlling the coherent and decoherent dynamics of a two-level atom works very well for initially prepared quantum superpositions of states. This is good news for the manipulation of quantum bits (two-level systems) that are in a superposition of states. It is this very feature which makes quantum computation interesting and superior to classical computation.
## Acknowledgement
This work has been supported by the Deutsche Forschungsgemeinschaft within the Schwerpunktsprogramm Zeitabhängige Phänomene und Methoden in Quantensystemen der Physik und Chemie, HA1517/14-3 (L.H., I.G., P.H.), within the Schwerpunktsprogramm Quanten-Informationsverarbeitung HA1517/19-1, (M.T., P.H.) and in part by the Sonderforschungsbereich 486 of the Deutsche Forschungsgemeinschaft (I.G., P.H.).
# The φ→ηπ⁰γ decay
## Abstract
The rare radiative decay $`\varphi \to \eta \pi ^0\gamma `$ was studied with the SND detector at the VEPP-2M electron-positron collider and its branching ratio was measured: $`B(\varphi \to \eta \pi ^0\gamma )=(0.88\pm 0.14\pm 0.09)\times 10^{-4}`$. A significant contribution of the $`a_0(980)\gamma `$ intermediate state was observed in the decay. The result is based on a total integrated luminosity corresponding to $`2\times 10^7`$ produced $`\varphi `$ mesons.
PACS: 13.25.-k; 13.65.+i; 14.40.-n
Keywords: $`e^+e^{}`$ collisions; Vector meson; Detector
Introduction. The first observation of the rare radiative decay
$$\varphi \to \eta \pi ^0\gamma ,$$
(1)
and the measurement of its branching ratio were performed in Novosibirsk by the SND detector at the VEPP-2M $`e^+e^{-}`$ collider. The analysis was based on data collected by SND in 1996. Later the results were confirmed by the CMD-2 group in their recent publication. The results of the present work are based on an analysis of all SND data collected in the vicinity of the $`\varphi `$ meson in 1996–1998.
Reaction (1) is especially interesting in connection with the scalar $`a_0(980)`$ meson problem, which is being actively discussed in the literature. At present there is no generally accepted viewpoint on the nature of the $`a_0`$: its quark structure is still not well established, and several models exist, including modifications of the $`q\overline{q}`$ scheme, the $`K\overline{K}`$ molecular model, and the 4-quark model. It has been suggested that the decay $`\varphi \to a_0(980)\gamma \to \eta \pi ^0\gamma `$ may serve as a probe of the $`a_0`$-meson quark structure. Theoretical predictions for the decay branching ratio vary from $`10^{-5}`$ for the simple two-quark and $`K\overline{K}`$ molecular models up to $`10^{-4}`$ in the 4-quark model. There also exist models in which a high rate of the decay (1) can be achieved without the assumption of a 4-quark structure of the $`a_0`$ meson. Another possible mechanism of the $`\varphi \to \eta \pi ^0\gamma `$ decay is $`\varphi \to \rho ^0\pi ^0,\rho \to \eta \gamma `$; the vector meson dominance model prediction for this branching ratio is $`5\times 10^{-6}`$. A detailed study of the $`\varphi \to \eta \pi ^0\gamma `$ decay may provide decisive information on the $`a_0(980)`$ meson problem.
Experiment. The SND is a universal nonmagnetic detector. Its main part is a 3-layer electromagnetic calorimeter consisting of 1630 NaI(Tl) crystals. The energy resolution of the calorimeter for photons can be described as $`\sigma _E/E=4.2\%/\sqrt[4]{E(GeV)}`$, and the angular resolution is close to $`1.5^{\circ }`$. The solid angle coverage is $`90\%`$ of $`4\pi `$ steradians.
The data used for the study of the $`\varphi \to \eta \pi ^0\gamma `$ decay were collected in 1996–1998. Nine successive scans of the energy range 980–1040 MeV were performed, with data collected at 16 beam energy points. The total integrated luminosity in the experiment is 12 $`\mathrm{pb}^{-1}`$, and the total number of produced $`\varphi `$ mesons is $`2\times 10^7`$.
Event Selection. The main sources of background for the process under study,
$$e^+e^{-}\to \varphi \to \eta \pi ^0\gamma \to 5\gamma ,$$
(2)
are the following $`\varphi `$-meson decays:
$$e^+e^{-}\to \varphi \to \pi ^0\pi ^0\gamma \to 5\gamma ,$$
(3)
$$e^+e^{-}\to \varphi \to \eta \gamma \to 3\pi ^0\gamma \to 7\gamma ,$$
(4)
$$e^+e^{-}\to \varphi \to K_SK_L\to \mathrm{neutrals},$$
(5)
and a nonresonant process
$$e^+e^{-}\to \omega \pi ^0\to \pi ^0\pi ^0\gamma \to 5\gamma .$$
(6)
The process (4) does not produce $`5\gamma `$ events directly but can mimic the process (2) due either to merging of close photons or to the loss of soft photons through openings in the calorimeter. The process (5) contributes via the $`K_S\to \pi ^0\pi ^0`$ decay accompanied by either a nuclear interaction of the $`K_L`$ meson or its decay in flight.
The primary event selection was based on simple criteria: the number of reconstructed photons is equal to five; there are no tracks in the drift chamber; the total energy deposition $`E_{tot}`$ ranges from 0.8 to 1.1 of the center-of-mass energy $`2E_0`$; and the total transverse momentum of the photons is less than $`0.15E_{tot}/c`$. In order to suppress the background from the processes (4) and (5), a special “photon quality” parameter $`\zeta `$ was used. For the $`i`$-th reconstructed photon, $`\zeta _i`$ is minus the logarithm of the likelihood for the corresponding transverse energy deposition profile observed in the calorimeter to be produced by a single photon. For multiphoton events $`\zeta `$ is defined as the maximum $`\zeta _i`$. The cut $`\zeta <0`$ suppresses the background from the process (4) by a factor of two, while reducing the detection efficiency for actual 5-$`\gamma `$ events by only 8%. To suppress beam-background photons, which appear mostly in the calorimeter regions closest to the beam and are relatively soft, an additional cut was imposed on the polar angles of the two softest photons in an event: $`32^{\circ }<\theta _4,\theta _5<148^{\circ }`$.
For events which passed the cuts described above, kinematic fitting under two alternative hypotheses was performed and the corresponding values of $`\chi ^2`$ were calculated:
* an event is from the process $`e^+e^{-}\to 5\gamma `$; the $`\chi ^2`$ value is denoted as $`\chi _{5\gamma }^2`$;
* an event is from the process $`e^+e^{-}\to 3\gamma `$ with two additional stray photons; the $`\chi ^2`$ value is denoted as $`\chi _{3\gamma }^2`$.
The following restrictions on the $`\chi _{5\gamma }^2`$ and $`\chi _{3\gamma }^2`$ parameters were imposed: $`\chi _{5\gamma }^2<25,\chi _{3\gamma }^2>20.`$ The first restriction, causing only a 5% loss of actual $`\varphi \to \eta \pi ^0\gamma `$ events, reduces the background from the process (4) by approximately $`30\%`$ and almost completely removes the background from the process (5). The second cut suppresses the background from the processes $`\varphi \to \eta \gamma \to 3\gamma `$, $`\varphi \to \pi ^0\gamma \to 3\gamma `$, and $`e^+e^{-}\to 2\gamma ,3\gamma `$ (QED).
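To illustrate what such a kinematic fit does (this is a schematic toy, not the SND reconstruction code), one can enforce energy-momentum conservation on smeared photon energies through a penalty term and read off the resulting $`\chi _{5\gamma }^2`$; the toy event, the resolution model and the penalty weight below are all invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Toy kinematic fit for the 5-gamma hypothesis: vary only the photon energies
# (angles held fixed, for brevity) and enforce 4-momentum conservation via a
# quadratic penalty; chi2_5g is the weighted distance from the measured values.
rng = np.random.default_rng(1)
E0 = 510.0                                   # beam energy, MeV

p = rng.normal(size=(4, 3)) * 200.0          # four random photon momenta (MeV)
p = np.vstack([p, -p.sum(axis=0)])           # fifth photon balances momentum
E_true = np.linalg.norm(p, axis=1)
p *= 2.0 * E0 / E_true.sum()                 # rescale so total energy = 2 E0
E_true = np.linalg.norm(p, axis=1)
dirs = p / E_true[:, None]
sigma = 0.042 * 5.62 * E_true**0.75          # sigma_E/E = 4.2%/E(GeV)^(1/4), E in MeV
E_meas = E_true + rng.normal(0.0, sigma)     # smeared "measured" energies

def objective(E, lam=1e6):
    chi2 = np.sum(((E - E_meas) / sigma)**2)
    de = np.sum(E) - 2.0 * E0                # energy-balance residual
    dp = E @ dirs                            # momentum-balance residual (3-vector)
    return chi2 + lam * (de**2 + dp @ dp) / E0**2

res = minimize(objective, E_meas, method='Nelder-Mead',
               options={'maxiter': 50000, 'fatol': 1e-12, 'xatol': 1e-8})
chi2_5g = np.sum(((res.x - E_meas) / sigma)**2)
print(chi2_5g)                               # typically of order a few
```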
For further background suppression, the event configuration (the photon energies and angles after the 5-$`\gamma `$ kinematic fit) was compared, using a modification of the kinematic fitting technique, with the configurations expected for the process (2) and for the background processes (3) and (6). The corresponding measure of discrepancy $`P`$ is the increase of $`\chi ^2`$ for a 5-$`\gamma `$ event after the application of additional requirements on the intermediate states for each tested hypothesis. The following hypotheses were considered:
* an event is from the cascade reaction $`e^+e^{-}\to X\gamma ,X\to \eta \pi ^0`$, where $`X`$ is some intermediate particle; the $`P_{\eta \pi \gamma }`$ parameter and the invariant masses of the photon pairs presumably produced in the decays of the $`\pi ^0`$ and $`\eta `$ mesons ($`M_\pi `$ and $`M_\eta `$) were calculated;
* an event is from the cascade reaction $`e^+e^{-}\to X\gamma ,X\to \pi ^0\pi ^0`$; the $`P_{\pi \pi \gamma }`$ parameter was calculated;
* an event is from the process $`e^+e^{-}\to \omega \pi ^0,\omega \to \pi ^0\gamma `$; the parameter $`M_\omega `$, the invariant mass of the $`\pi ^0\gamma `$ pair from the $`\omega \to \pi ^0\gamma `$ decay, was calculated.
The relative contributions from the background processes (3) and (6) vary with $`m_{\eta \pi }`$, the invariant mass of the $`\eta \pi ^0`$ pair. At $`m_{\eta \pi }<975`$ MeV the background from the process (3) becomes significant. An additional cut $`P_{\pi \pi \gamma }>2`$ suppresses it by a factor of three, reducing the detection efficiency for the process (2) by only $`12\%`$. At $`m_{\eta \pi }\lesssim 900`$ MeV the dominant background comes from the process (6); in this region the restriction $`M_\omega <725`$ MeV removes the background almost completely.
The scatter plot of the invariant mass $`M_\eta `$ versus $`E_{\gamma max}/E_{beam}`$, the normalized energy of the most energetic photon in the selected events, is shown in fig.1, where two regions are distinguishable: $`E_{\gamma max}/E_{beam}>0.68`$, dominated by background from the reaction (4) with a nearly uniform $`M_\eta `$ distribution, and $`E_{\gamma max}/E_{beam}<0.68`$, where the background is small and the points are grouped close to the $`\eta `$-meson mass. The $`M_\eta `$ vs. $`M_{\pi ^0}`$ distribution in events with $`E_{\gamma max}/E_{beam}<0.68`$ is shown in fig.2; it is clearly peaked at the $`\pi ^0`$ and $`\eta `$-meson masses.
For the final selection of $`\varphi \to \eta \pi \gamma `$ events, in addition to the cuts described above, the restrictions $`E_{\gamma max}/E_{beam}<0.68`$, which practically completely removes the background from the process (4), and $`P_{\eta \pi \gamma }<7`$ were applied. The $`P_{\eta \pi \gamma }`$ distribution for the selected events is shown in fig.3. The cut $`P_{\eta \pi \gamma }<7`$ roughly corresponds to the restrictions $`|M_{\pi ^0}-m_{\pi ^0}|<30`$ MeV and $`|M_\eta -m_\eta |<30`$ MeV, where $`m_{\pi ^0}`$ and $`m_\eta `$ are the $`\pi ^0`$ and $`\eta `$ masses. A total of 39 events was found, with an expected background of $`3.2\pm 0.7`$ events. In the region $`7<P_{\eta \pi \gamma }<15`$, ten events were found, in agreement with an estimate of $`8.9\pm 0.4`$ $`\eta \pi ^0\gamma `$ events plus $`6\pm 1`$ background events. Thus, after the cuts described above, the event sample still contains a background of about $`10\%`$. After background subtraction, $`35.8\pm 6.3`$ events of the process $`e^+e^{-}\to \eta \pi ^0\gamma `$ are left.
Data analysis. Fig.4 shows the $`\mathrm{cos}\alpha `$ distribution for the selected events, where $`\alpha `$ is the angle between the recoil photon in the reaction (2) and the $`\eta `$-meson momentum in the $`\eta \pi ^0`$ rest frame. The estimated background of 3.2 events is subtracted. The experimental distribution is in good agreement with the simulated one, which was generated isotropically, as expected for a scalar intermediate state; its visible slope is a consequence of the $`E_{\gamma max}<0.68`$ cut. Such agreement may be considered as evidence that the $`\eta \pi ^0`$ system is produced in a scalar state ($`P(\chi ^2)=61\%`$). In fig.5 the $`\mathrm{cos}\theta _\gamma `$ distribution is shown, where $`\theta _\gamma `$ is the polar angle of the recoil photon in the reaction (2). It also agrees ($`P(\chi ^2)=22\%`$) with the simulated distribution $`(1+\mathrm{cos}^2\theta _\gamma )`$ expected for the production of a scalar particle and a photon.
The detection efficiency obtained by simulation must be corrected for event losses due to additional spurious photons and for imprecise simulation of the parameters used in the event selection cuts. The corresponding correction factor was obtained from the experimental data. To this end, the cross section of the process (6) was measured using selection criteria similar to those described above for the process (2), and the result was compared with our earlier measurement. It was found that the simulation overestimates the detection efficiency for the process (2) by $`5\%`$.
Table 1 lists the numbers of selected events, the detection efficiencies, and the measured differential branching ratios $`dB/dm`$ as a function of the $`m_{\eta \pi }`$ invariant mass. The detection efficiencies and $`dB/dm`$ values are given at the middle points of the corresponding invariant mass bins. A uniformly distributed background of 3 events was subtracted. The detection efficiency averaged over the experimental invariant mass spectrum is equal to $`2.1\%`$.
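These numbers already allow a back-of-the-envelope cross-check of the branching ratio before the fit described next; the shortcut below simply divides the background-subtracted signal by efficiency times the number of produced $`\varphi `$'s, ignoring the energy dependence of the cross section that the actual fit takes into account:

```python
# Rough cross-check (not the actual fit): B ~ N_signal / (efficiency * N_phi).
n_sel, n_bkg, dn = 39.0, 3.2, 6.3     # candidates, background, stat. error
eff, n_phi = 0.021, 2e7               # average efficiency, produced phi mesons

B = (n_sel - n_bkg) / (eff * n_phi)
print(f"B ~ ({B*1e4:.2f} +- {dn/(eff*n_phi)*1e4:.2f}) x 10^-4")
# -> B ~ (0.85 +- 0.15) x 10^-4, consistent with the fit result below
```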
Fitting the energy dependence of the experimental cross section by the sum of the resonant cross section of the process (2) and an energy-independent background results in
$$B(\varphi \to \eta \pi ^0\gamma )=(0.88\pm 0.14\pm 0.09)\times 10^{-4},$$
(7)
The main sources of systematic error here are the uncertainties in the measured cross section of the process (6) and in the average detection efficiency due to the large statistical error of the observed $`\eta \pi ^0`$ invariant mass spectrum.
Fig.6 shows the dependence of the measured $`\varphi \to \eta \pi ^0\gamma `$ decay branching ratio on the invariant mass of the $`\eta \pi ^0`$ pair. In spite of the smaller recoil photon phase space at high $`\eta \pi ^0`$ invariant masses, the observed mass spectrum shows an enhancement in this region. This means that the $`\eta \pi ^0`$ system is produced in some resonant state. The only known resonance with the relevant mass and quantum numbers is the $`a_0(980)`$, and the observed enhancement at large $`\eta \pi ^0`$ invariant masses can be described as a manifestation of the $`\varphi \to a_0\gamma `$ decay. The known $`\varphi \to \rho ^0\pi ^0,\rho \to \eta \gamma `$ decay mechanism must produce $`\eta \pi ^0`$ pairs with smaller invariant masses and, as was already mentioned, its branching ratio is much smaller than the observed one, although its amplitude should be taken into account in the approximation of the whole mass spectrum in future high-statistics experiments. For $`M_{\eta \pi }>900`$ MeV we have:
$$B(\varphi \to \eta \pi ^0\gamma )=(0.46\pm 0.13)\times 10^{-4}.$$
(8)
Discussion. Since the experimental data show a large contribution from the $`\varphi \to a_0\gamma `$ decay, an attempt was made to approximate the observed invariant mass spectrum under the assumption of a pure $`\varphi \to a_0\gamma `$ mechanism with the decay dynamics described in the literature. This hypothesis gives a rather good approximation of the experimental data; the fitted curve is shown in fig.6 ($`P(\chi ^2)=61\%`$). The following optimal values of the $`a_0`$-meson parameters were obtained:
$$\begin{array}{c}M_{a_0}=995_{-10}^{+52}\text{MeV}\hfill \\ g_{a_0K^+K^{-}}^2/4\pi =(1.4_{-0.9}^{+9.4})\text{GeV}^2\hfill \\ g_{a_0\eta \pi }^2/4\pi =(0.77_{-0.20}^{+1.29})\text{GeV}^2\hfill \end{array}$$
(9)
The optimal value of the ratio $`g_{a_0\eta \pi }/g_{a_0K^+K^{-}}=0.75_{-0.32}^{+0.52}`$ satisfies, within the experimental errors, the relation between the coupling constants, $`g_{a_0\eta \pi }=\sqrt{2/3}g_{a_0K^+K^{-}}`$, obtained under the assumption of a 4-quark structure of the $`a_0`$ meson. If we fix this ratio at its 4-quark model prediction, the optimal values of the other fit parameters become:
$$\begin{array}{c}M_{a_0}=994_{-8}^{+33}\text{MeV}\hfill \\ g_{a_0K^+K^{-}}^2/4\pi =(1.05_{-0.25}^{+0.36})\text{GeV}^2\hfill \end{array}$$
(10)
The mass $`M_{a_0}`$ is in agreement with the PDG value of $`983.4`$ MeV.
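To illustrate how these couplings shape the $`\eta \pi ^0`$ spectrum, the sketch below evaluates a schematic Flatté-type $`a_0(980)`$ denominator with the fitted parameters, using the common scalar-meson width convention $`\mathrm{\Gamma }_{ab}(m)=g_{ab}^2q_{ab}(m)/(8\pi m^2)`$. Coupling normalizations differ between conventions, so this is only an illustration of the coupled-channel distortion near the $`K^+K^{-}`$ threshold, not the exact model used in the fit:

```python
import numpy as np

# Schematic Flatte-type a0(980) denominator from the fitted couplings above.
m_eta, m_pi, m_K = 0.5473, 0.1350, 0.4937   # GeV
m0 = 0.994                                  # GeV, fit value above
g2_KK = 4.0 * np.pi * 1.05                  # GeV^2, from g^2/4pi = 1.05
g2_etapi = (2.0 / 3.0) * g2_KK              # 4-quark relation used in the fit

def q(m, ma, mb):
    # decay momentum; continues to imaginary values below threshold
    return np.sqrt((m**2 - (ma + mb)**2) * (m**2 - (ma - mb)**2) + 0j) / (2.0 * m)

def D(m):
    # m0^2 - m^2 - i*m*Sum_ab Gamma_ab(m), with Gamma_ab = g^2 q / (8 pi m^2);
    # below the K Kbar threshold the K K term becomes a real mass shift
    return m0**2 - m**2 - 1j * (g2_etapi * q(m, m_eta, m_pi)
                                + g2_KK * q(m, m_K, m_K)) / (8.0 * np.pi * m)

for m in (0.85, 0.90, 0.95, 0.98, 1.00):
    print(m, abs(1.0 / D(m))**2)            # peaks just below the K+K- threshold
```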
A comparison with (8) shows that about 50% of the observed branching ratio (7) corresponds to $`M_{\eta \pi }>900`$ MeV, and the observed invariant mass spectrum is consistent with the model. Thus it can be assumed that the $`a_0\gamma `$ intermediate state dominates in this decay. Other decay mechanisms may contribute, for example $`\varphi \to \rho ^0\pi ^0,\rho \to \eta \gamma `$, although a rough estimate shows that its contribution can be neglected at the present level of experimental errors (7). Assuming a pure $`\varphi \to a_0\gamma `$ decay, we get
$$B(\varphi \to a_0\gamma )=(0.88\pm 0.17)\times 10^{-4}.$$
(11)
Conclusions. In this work, using experimental data corresponding to about $`2\times 10^7`$ produced $`\varphi `$ mesons, 36 events of the $`\varphi \to \eta \pi ^0\gamma `$ decay were found. The measured branching ratio of this decay, $`B(\varphi \to \eta \pi ^0\gamma )=(0.88\pm 0.14\pm 0.09)\times 10^{-4}`$, is in agreement with our previous result $`B(\varphi \to \eta \pi ^0\gamma )=(0.83\pm 0.23)\times 10^{-4}`$, based on an analysis of part of the experimental statistics, as well as with the CMD-2 measurement $`B(\varphi \to \eta \pi ^0\gamma )=(0.90\pm 0.24\pm 0.10)\times 10^{-4}`$. The observed enhancement of the $`\eta \pi ^0`$-pair invariant mass spectrum at large masses indicates a large contribution of the $`a_0\gamma `$ intermediate state. Assuming the dominance of this mechanism, we obtain $`B(\varphi \to a_0\gamma )=(0.88\pm 0.17)\times 10^{-4}`$.
Acknowledgement. The authors express their gratitude to N.N.Achasov for fruitful discussions.
This work is supported in part by “Russian Fund for Basic Researches” (Grant No. 99-02-16813), “Russia Universities” Fund (Grant No. 3H-339-00) and STP “Integration” Fund (Grant No 274).
Figure captions
* Figure 1: Distribution of $`M_\eta `$, the reconstructed mass of the $`\eta `$ meson, versus the normalized energy of the most energetic photon in the event, $`E_{\gamma max}/E_{beam}`$.
* Figure 2: Distribution of $`M_\eta `$, the reconstructed mass of the $`\eta `$ meson, versus $`M_\pi `$, the reconstructed mass of the $`\pi ^0`$, for events with $`E_{\gamma max}/E_{beam}<0.68`$.
* Figure 3: The $`P_{\eta \pi \gamma }`$ distribution. Points with error bars: experimental data. Histogram: simulated signal from the $`\varphi \to \eta \pi ^0\gamma `$ decay corresponding to a branching ratio of $`0.9\times 10^{-4}`$; shaded histogram: estimated background from the $`e^+e^{-}\to \omega \pi ^0`$ and $`\varphi \to \eta \gamma ,f_0(980)\gamma `$ processes.
* Figure 4: The $`\mathrm{cos}\alpha `$ distribution, where $`\alpha `$ is the angle between the recoil photon and the $`\eta `$ meson in the $`\eta \pi ^0`$ rest frame, for selected $`\eta \pi ^0\gamma `$ events. Points with error bars: experimental data; histogram: simulation of the process (1) with $`BR=0.9\times 10^{-4}`$.
* Figure 5: Recoil photon polar angle distribution for selected $`\eta \pi ^0\gamma `$ events. Points with error bars: experimental data; histogram: simulation of the process (1) with $`BR=0.9\times 10^{-4}`$.
* Figure 6: The $`M_{\eta \pi }`$ invariant mass spectrum. The fitted curve corresponds to $`M_{a_0}=995_{-10}^{+52}`$ MeV, $`g_{a_0K^+K^{-}}^2/4\pi =(1.4_{-0.9}^{+9.4})\text{GeV}^2`$, $`g_{a_0\eta \pi }^2/4\pi =(0.77_{-0.20}^{+1.29})\text{GeV}^2`$.
hep-th/0003219 TIFR/TH/00-12
A Stable Non-BPS Configuration From Intersecting Branes and Antibranes
Sunil Mukhi and Nemani V. Suryanarayana (e-mail: mukhi@tifr.res.in, nemani@tifr.res.in)
Tata Institute of Fundamental Research,
Homi Bhabha Rd, Mumbai 400 005, India
ABSTRACT
We describe a tachyon-free stable non-BPS brane configuration in type IIA string theory. The configuration is an elliptic model involving rotated NS5 branes, D4 branes and anti-D4 branes, and is dual to a fractional brane-antibrane pair placed at a conifold singularity. This configuration exhibits an interesting behaviour as we vary the radius of the compact direction. Below a critical radius the D4 and anti-D4 branes are aligned, but as the radius increases above the critical value the potential between them develops a minimum away from zero. This signals a phase transition to a configuration with finitely separated branes.
March 2000
Introduction and Review
Much has been learned in recent times about the physics of brane-antibrane pairs and non-BPS branes in superstring theory\[1--32\]. Parallel, infinitely extended pairs attract each other, and can annihilate into the vacuum by a process of tachyon condensation into a constant minimum. An analogous decay process takes place for single or multiple non-BPS branes. Condensation of the tachyon as a kink, vortex or more general soliton is associated to brane-antibrane annihilation into branes of lower dimension.
The above considerations have been extended to backgrounds with lower supersymmetry (orientifolds, orbifolds and smooth Calabi-Yau manifolds), where one finds new phenomena including the existence of stable, non-BPS branes. As parameters of the background are varied, there can also be phase transitions between qualitatively different configurations. The reader may consult the reviews in Refs.\[33--36\].
A different direction, explored in Ref.\[37\], is to consider non-BPS configurations of intersecting branes and antibranes in the fully supersymmetric type II spacetime background. Here one encounters novel phenomena including both attractive and repulsive interactions among branes and antibranes. Such configurations could be useful to study non-supersymmetric field theories, and also to understand better the basic underlying structure of superstring theory.
In the present note we examine a variant of a configuration of “adjacent brane-antibrane pairs” that was discussed in Ref.\[37\]. Let us describe the original configuration. In type IIA theory we start with a pair of parallel NS5-branes extended along $`x^1,x^2,x^3,x^4,x^5`$ and separated along $`x^6`$. The $`x^6`$ direction is compact, with circumference $`2L`$. Now stretch a $`D4`$-brane along $`x^6`$ from the first NS5-brane to the second, and a $`\overline{\mathrm{D}}`$4-brane along $`x^6`$ from the second NS5-brane to the first (Fig.1).
Fig.1: Adjacent D4 and $`\overline{\mathrm{D}}`$4 between parallel branes.
In Ref.\[37\] it was argued that the D4-brane and $`\overline{\mathrm{D}}`$4-brane exert a net repulsive force on each other, with the result that the configuration is unstable. From the point of view of the field theory on the NS5 world volume, this repulsion is essentially due to the fact that the D4 and $`\overline{\mathrm{D}}`$4 end on the NS5 brane from opposite sides, and their ends are charged 3-branes in the NS5 world volume. If we dimensionally reduce everything over these three directions, then the ends become vortices living on the reduced NS5 world volume. These vortices carry the same charge under the gauge field, hence they repel, and since the configuration is non-supersymmetric there is no reason to expect that this repulsion is cancelled by exchange of other massless fields. Because the NS5-branes are parallel, the repelling D4- and $`\overline{\mathrm{D}}`$4-branes can run away from each other to infinity.
This instability can also be understood in the T-dual picture, where the D4- and $`\overline{\mathrm{D}}`$4-branes are actually two types of fractional branes (denoted $`1_f`$ and $`\overline{1}_f^{}`$ respectively in \[37\]) at a $`Z_2`$ ALE singularity. These two fractional branes repel, as they each carry a full unit of twisted RR charge. There is also an attraction due to the fractional untwisted RR charge, but an explicit computation of the amplitude using orbifold techniques\[37\] reveals that the repulsive force dominates.
The variant of this configuration that we will describe in the next section involves rotating the NS5-branes. This converts the T-dual ALE space into a conifold\[38,,39\], hence this brane construction is now T-dual to fractional branes at a conifold, for which we cannot use orbifold techniques to compute the force. We will analyse the model using some observations in Refs.\[38,,39,,40,,15,,37\] and argue that this time a stable non-BPS configuration is obtained.
A Stable Configuration
Consider the following brane configuration in Type IIA: an NS5-brane filling $`x^1,x^2,x^3,x^4,x^5`$ and located at $`(x^8,x^9)=(0,0)`$, and another NS5-brane (denoted by NS5’) filling $`x^1,x^2,x^3,x^8,x^9`$ and located at $`(x^4,x^5)=(0,0)`$. The two branes are placed at the same point in the $`x^7`$ direction and are separated along the compact $`x^6`$ direction of circumference $`R_6`$.
Suspend a D4-brane between NS5-NS5’ in one of the two intervals and a $`\overline{\mathrm{D}}`$4 in the other, so that the 4-branes extend along $`x^1,x^2,x^3`$ and $`x^6`$ (Fig.2). This configuration breaks all the supersymmetries of Type IIA, though each of the D4 and $`\overline{\mathrm{D}}`$4, together with the NS5-branes, separately preserves some supersymmetry.
Fig.2: Adjacent D4 and $`\overline{\mathrm{D}}`$4 between rotated branes.
Unlike the case discussed in the previous section, here the configuration of NS5-branes is no longer dual to an ALE space but rather to a conifold\[38,39\]. Nevertheless, one can still argue that the stretched D4- and $`\overline{\mathrm{D}}`$4-branes repel. In the case of parallel NS5-branes, the repulsion was identified as coming from like charges carried by the ends of the 4-branes. In this picture the repulsive effect is localised on each NS5-brane separately, hence introducing a relative rotation should not matter. Moreover, from the string theory point of view, the repulsion between the D4 and $`\overline{\mathrm{D}}`$4 is obtained by calculating a closed-string tree amplitude (cylinder amplitude). Translated into the open-string channel, this is a one-loop open-string amplitude. This amplitude depends only on the spectrum obtained by quantizing the open strings connecting D4 and $`\overline{\mathrm{D}}`$4 across any one NS5 brane. This again suggests that the force is localized near one NS5 brane at a time (this feature of open strings across NS5-branes was exploited in Refs.\[38,39\] to obtain the spectrum of the gauge theory living on related (BPS) brane configurations). Using these arguments, we conclude that the D4- and $`\overline{\mathrm{D}}`$4-branes in Fig.2 repel each other, and that the repulsion is the same as that between adjacent D4- and $`\overline{\mathrm{D}}`$4-branes when the NS5-branes are not rotated with respect to each other.
With rotated NS5-branes, the important difference is that the D4-branes no longer have moduli to move away from each other. As they move with their ends on the NS5-branes, the D4-branes get stretched. In the process their effective 3-brane tension increases, providing a restoring force for the repelling ends of the adjacent $`\mathrm{D4}\overline{\mathrm{D}}4`$. Thus one can expect a configuration in which the repulsive force and the restoring force due to the increased tension of the adjacent $`\mathrm{D4}\overline{\mathrm{D}}4`$ pair exactly cancel, giving rise to a configuration that is stable at least under small perturbations.
In fact, as we now show explicitly, such a stable configuration exists for some range of values of the circumference $`R_6`$. For simplicity, let us assume that the NS5 and NS5’-branes are located at diametrically opposite points on the compact $`x^6`$ direction, with the separation between them being $`L=\frac{1}{2}R_6`$. With this, and the fact that the branes are rotated at 90 degrees to each other, there is a high degree of symmetry in the problem. If we let $`r`$ be the displacement of the end of the D4 brane from the origin in the $`x^4`$ (or $`x^5`$) direction, then the $`\overline{\mathrm{D}}`$4-brane will also be displaced by an equal amount $`r`$, and the displacement of the other ends of the 4-branes along $`x^8`$ (or $`x^9`$) will also be $`r`$ (Fig.3).
Fig.3: Equilibrium configuration after displacement of 4-branes.
With the above data, the net tension of the stretched D4($`\overline{\mathrm{D}}`$4) is $`𝒱T_4\sqrt{L^2+2r^2}`$ where $`𝒱`$ is the (infinite) 3-volume of the $`(x^1,x^2,x^3)`$ directions and $`T_4=\frac{1}{g_s(2\pi )^4}`$ is the tension of a BPS D4-brane. The contribution from the repulsion between the ends of D4- and $`\overline{\mathrm{D}}`$4-branes on an NS5-brane to the energy of the system is given by\[37\]:
$$-\frac{𝒱}{16(2\pi )^4}\int _0^{\mathrm{\infty }}\frac{dt}{t^3}e^{-2X^2t/\pi }\,\mathcal{F}(q)$$
Here $`X=2r`$ is the separation of the D4 and $`\overline{\mathrm{D}}`$4 along the NS5-brane, and $`q=\mathrm{exp}(-\pi t)`$. The function $`\mathcal{F}(q)`$ is given by
$$\mathcal{F}(q)=\frac{f_4(q)^8}{f_1(q)^8}\left[1-4\frac{f_1(q)^4f_3(q)^4}{f_2(q)^4f_4(q)^4}\right]$$
where the $`f_i(q)`$ are defined as:
$$\begin{array}{cc}& f_1(q)=q^{\frac{1}{12}}\prod _{n=1}^{\mathrm{\infty }}(1-q^{2n}),\qquad f_2(q)=\sqrt{2}q^{\frac{1}{12}}\prod _{n=1}^{\mathrm{\infty }}(1+q^{2n}),\hfill \\ & f_3(q)=q^{-\frac{1}{24}}\prod _{n=1}^{\mathrm{\infty }}(1+q^{2n-1}),\qquad f_4(q)=q^{-\frac{1}{24}}\prod _{n=1}^{\mathrm{\infty }}(1-q^{2n-1}).\hfill \end{array}$$
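These products converge rapidly for $`q<1`$, so $`\mathcal{F}(q)`$ is easy to evaluate numerically; the sketch below (truncating the products) reproduces the two limits of $`\mathcal{F}`$ used in what follows:

```python
import numpy as np

# The functions f_i(q) and F(q) defined above, with the infinite products
# truncated at n_max factors (ample once q = exp(-pi t) is not too close
# to 1, i.e. for t >~ 0.05).
n_max = 400

def f1(q): return q**(1/12) * np.prod(1 - q**(2*np.arange(1, n_max)))
def f2(q): return np.sqrt(2) * q**(1/12) * np.prod(1 + q**(2*np.arange(1, n_max)))
def f3(q): return q**(-1/24) * np.prod(1 + q**(2*np.arange(1, n_max) - 1))
def f4(q): return q**(-1/24) * np.prod(1 - q**(2*np.arange(1, n_max) - 1))

def F(t):
    q = np.exp(-np.pi * t)
    return (f4(q) / f1(q))**8 * (1 - 4 * (f1(q) * f3(q))**4 / (f2(q) * f4(q))**4)

# limits quoted in the text: F -> -16 t^2 as t -> 0, and F -> -8 as t -> infinity
for t in (0.06, 0.1, 0.5, 2.0, 6.0):
    print(t, F(t), -16 * t**2)
```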
Dropping the common factor $`𝒱`$, the total potential energy of the system of branes in Fig.3 is
$$\begin{array}{cc}\hfill V(r)& =\frac{1}{g_s(2\pi )^4}\sqrt{L^2+2r^2}-\frac{1}{16(2\pi )^4}\int _0^{\mathrm{\infty }}\frac{dt}{t^3}e^{-8r^2t/\pi }\,\mathcal{F}(q)\hfill \\ & =V^{(1)}(r)+V^{(2)}(r)\hfill \end{array}$$
This expression can be minimized to obtain the equilibrium value of $`r`$.
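A direct numerical minimization (reusing `F(t)` from the previous sketch, with its small-$`t`$ asymptote patched in where the truncated products are unreliable) indeed locates a nonzero equilibrium separation once $`L`$ exceeds the critical value estimated below; the parameter choices here are illustrative:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# Minimize V(r) numerically, in alpha' = 1 units as in the text.  The log
# divergence of V^(2) at t -> 0 sits in an r-independent constant, so a small
# cutoff t_min does not move the location of the minimum.
g_s, L = 0.1, 4.0        # illustrative; the improved estimate below gives L_c ~ 3.3

def F_safe(t):
    # use the t -> 0 asymptote where the truncated products are unreliable
    return -16 * t**2 if t < 0.05 else F(t)

def V(r, t_min=1e-4):
    tension = np.sqrt(L**2 + 2 * r**2) / (g_s * (2 * np.pi)**4)
    integ, _ = quad(lambda t: np.exp(-8 * r**2 * t / np.pi) * F_safe(t) / t**3,
                    t_min, 40.0, limit=500)
    return tension - integ / (16 * (2 * np.pi)**4)

res = minimize_scalar(V, bounds=(1e-3, 5.0), method='bounded')
print(res.x)             # nonzero equilibrium separation, since L > L_c
```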
The second term, $`V^{(2)}(r)`$, is somewhat complicated, so we first analyze it in two different limits and extract some physical information. In the $`r\to \mathrm{\infty }`$ limit, the most significant contribution comes from the $`t\to 0`$ behaviour of $`\mathcal{F}(q)`$:
$$\mathcal{F}(e^{-\pi t})\approx -16t^2\quad \text{as}\quad t\to 0$$
Therefore we have
$$V^{(2)}(r)=-\frac{1}{16(2\pi )^4}\int _0^{\mathrm{\infty }}\frac{dt}{t^3}e^{-8r^2t/\pi }\,(-16t^2),\qquad r\gg 1$$
(we are measuring distances in units of $`\sqrt{\alpha ^{\prime }}`$). This integral diverges logarithmically because of the behaviour of the integrand in the $`t\to 0`$ limit. To extract the behaviour of this quantity as a function of $`r`$, let us regulate it by putting a cut-off $`ϵ`$ on the lower limit of the integration variable $`t`$, and then take $`ϵ\to 0`$. The expression above then becomes
$$V^{(2)}(r)=\frac{1}{(2\pi )^4}\left[-\gamma -\mathrm{log}(\frac{8r^2ϵ}{\pi })-\underset{n=1}{\overset{\mathrm{\infty }}{\sum }}\frac{(-1)^n(\frac{8r^2ϵ}{\pi })^n}{n\cdot n!}\right],\qquad r\gg 1$$
where $`\gamma `$ is the Euler constant. In the limit $`ϵ\to 0`$ the third term vanishes and we are left with a potential of the form:
$$V^{(2)}(r)=A-B\mathrm{log}(r),\qquad r\gg 1$$
where $`A`$ is an infinite constant and $`B=\frac{2}{(2\pi )^4}`$. From this, the contribution of the second term to the force between the two 4-branes, given by $`-\frac{dV^{(2)}}{dr}`$, is
$$F^{(2)}(r)=\frac{B}{r},\qquad r\gg 1$$
Thus in the large-separation limit this contribution to the force between two 4-brane segments is repulsive, as expected.
Now let us look at the behaviour of this contribution for small values of $`r`$. In this limit we can expand the exponential in the integrand in powers of $`r`$ to get
$$V^{(2)}(r)=C-Dr^2,\qquad r\ll 1$$
where
$$\begin{array}{cc}\hfill C& =-\frac{1}{16(2\pi )^4}\int _0^{\mathrm{\infty }}\frac{dt}{t^3}\mathcal{F}(q)\hfill \\ \hfill D& =-\frac{1}{(2\pi )^5}\int _0^{\mathrm{\infty }}\frac{dt}{t^2}\mathcal{F}(q)\hfill \end{array}$$
Notice that $`C`$ is a divergent integral whereas $`D`$ is convergent. $`D`$ is also positive because $`\mathcal{F}(q)`$ is negative throughout the range of integration. The small-$`r`$ behaviour of the force then turns out to be
$$F^{(2)}(r)=2Dr,\qquad r\ll 1$$
which is also repulsive, as expected. The restoring force, obtained from the tension term $`V^{(1)}`$, is:
$$F^{(1)}(r)=-\frac{dV^{(1)}}{dr}=-\frac{1}{g_s(2\pi )^4}\frac{2r}{\sqrt{L^2+2r^2}}$$
which is attractive as explained above. The strength of attraction depends on the value of $`L`$, related to the size of the compact $`x^6`$ direction.
We want to know whether there is a stable minimum of the total potential, and under what conditions this minimum is attained away from $`r=0`$. In order to argue for the presence of a stable minimum at nonzero separation of the brane-antibrane pair, it is sufficient to show that the potential has an unstable turning point at the origin. Combined with the attractive behaviour for large $`r`$, this suffices to show that the potential develops a stable minimum somewhere in between.
From the small-$`r`$ expansions of $`V^{(1)}`$ and $`V^{(2)}`$ above, we have for $`r\ll 1`$,
$$V(r)\simeq \frac{1}{g_s(2\pi )^4}\frac{r^2}{L}-Dr^2$$
up to additive constants. Here, $`D`$ is the positive constant defined above. It follows that $`V`$ has a turning point at the origin that is unstable (tachyonic) when $`L`$ is greater than a critical value $`L_c`$, namely:
$$L>L_c=\frac{1}{g_s(2\pi )^4D}$$
The function $`\mathcal{F}(q)`$ defined above tends to the constant value $`-8`$ as $`t\to \mathrm{\infty }`$. Hence an estimate for $`D`$ can be made by approximating $`\mathcal{F}`$ in the integrand by $`-16t^2`$ for $`0<t<\frac{1}{\sqrt{2}}`$ and by $`-8`$ for $`\frac{1}{\sqrt{2}}<t<\mathrm{\infty }`$. With this, we find
$$(2\pi )^4D\approx \frac{16\sqrt{2}}{2\pi }\approx 3.60$$
so the phase transition takes place at $`L_c\approx 0.28\,g_s^{-1}`$.
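This estimate is elementary to reproduce; the snippet below just evaluates the piecewise integral for $`D`$ and tabulates $`L_c`$ for a few illustrative couplings:

```python
import numpy as np

# Estimate of D and L_c from the piecewise approximation:
# F ~ -16 t^2 for t < 1/sqrt(2) and F ~ -8 for t > 1/sqrt(2).
tm = 1.0 / np.sqrt(2.0)
# (2 pi)^5 D = -Int dt/t^2 F = Int_0^tm 16 dt + Int_tm^inf 8/t^2 dt
I = 16.0 * tm + 8.0 / tm
D4 = I / (2.0 * np.pi)                            # this is (2 pi)^4 D
print(D4, 16.0 * np.sqrt(2.0) / (2.0 * np.pi))    # both ~ 3.60
for g_s in (0.05, 0.1, 0.2):
    print(g_s, 1.0 / (D4 * g_s))                  # L_c ~ 0.28 / g_s
```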
We expect that there will be no loop corrections to the restoring potential $`V^{(1)}`$, as this depends only on the D-brane tension which is unrenormalized. The repulsive potential $`V^{(2)}`$ will, on the other hand, receive loop corrections, but they are independent of $`L`$, and can be expected to be small for sufficiently small $`g_s`$. Hence we do not expect stringy corrections to invalidate the conclusions of this section.
Some further analysis of this potential can be found in the Appendix.
The T-dual Configuration
It has been argued that the elliptic configuration involving rotated NS5-branes is T-dual to the conifold geometry\[38,39\]. Above we have studied an adjacent $`\mathrm{D4}\overline{\mathrm{D}}4`$ pair intersecting with this elliptic configuration. Hence one may ask what is the precise configuration, involving a suitable $`\mathrm{D3}\overline{\mathrm{D}}3`$ pair at a conifold, obtained by T-duality along the compact $`x^6`$ direction. Such a configuration should describe a stable non-BPS system exhibiting a phase transition.
As discussed above in the introduction, there is a simpler situation where the analogous T-duality relation holds: the elliptic model of two parallel NS5-branes\[41\], which is T-dual\[42\] to a $`Z_2`$ ALE geometry. An adjacent $`\mathrm{D4}\overline{\mathrm{D}}4`$ pair in this geometry is dual to a particular pair of fractional branes at an ALE space\[37\]. In this system, the adjacent brane-antibrane pair can separate along the $`(x^4,x^5)`$ directions, which lie within the bounding NS5-branes. In the T-dual picture the fractional branes live in the 5-plane transverse to the ALE space, and due to their mutual repulsion, they separate along the same directions $`(x^4,x^5)`$, that are transverse both to their own worldvolume and to the ALE space. In this discussion, one gets a satisfactory physical picture without having to take into account the back-reaction of the D-branes on the NS5-branes, or on the geometry.
For rotated NS5-branes the situation is somewhat different. On the one hand, the model of a wrapped D4-brane intersecting with rotated NS5-branes is fairly similar to the one with parallel NS5-branes: locally there is always a D4-brane ending on a codimension-2 locus inside an NS5-brane. On the other hand, in the T-dual conifold geometry we know\[43\] that D3-branes completely smooth out the conifold singularity: the near-horizon geometry becomes $`AdS_5\times T_{1,1}`$. Thus in the latter picture the back reaction of the branes on the geometry is qualitatively very important. This can be traced to the fact that the branes completely fill the space transverse to the singularity.
The stable non-BPS configuration discussed in the previous section is an example of this type. While it can be visualised explicitly in the brane-construction picture, it is not so easy to describe in terms of branes at a conifold. For very weak string coupling (and fixed $`L`$) the problem is somewhat easier, since in this case the $`\mathrm{D4}\overline{\mathrm{D}}4`$ pair is aligned. The T-dual configuration will consist of a pair of fractional branes (more precisely, the first fractional part of a BPS brane, along with the second fractional part of a BPS antibrane) at the conifold. As for the case in Ref.\[43\], here too we expect the conifold geometry to be smoothed out near the origin. As the string coupling increases, a phase transition takes place and the $`\mathrm{D4}\overline{\mathrm{D}}4`$ separates, as discussed above. In this case the T-dual configuration is harder to visualise. The fractional pair cannot separate in any direction transverse to the conifold, so it must be thought of as separating within the conifold directions. Asymptotically this should look like a $`\mathrm{D3}\overline{\mathrm{D}}3`$ pair at separated locations away from the conifold singularity. But close to the origin the behaviour could be more complicated, with the conifold geometry being replaced by a more nontrivial one.
Both these situations should be amenable to study as supergravity solutions. With $`N_1`$ branes in the first segment and $`N_2`$ antibranes in the second, and for sufficiently large $`N_1`$ and $`N_2`$, there should be a trustworthy non-supersymmetric supergravity solution dual to the RG flow of a non-supersymmetric $`SU(N_1)\times SU(N_2)`$ gauge theory. This situation is very similar to the one recently considered in Refs.\[44,45,46,47\], except that these authors considered supersymmetric configurations with full branes and fractional branes. If each full brane is replaced by a fractional brane-antibrane pair (in the sense discussed above), then our desired configuration is obtained. Since in this process supersymmetry is completely broken, it remains to be seen whether an explicit solution can be found. This should be a fascinating direction to explore, as one would hope to see our phase transition as an instability of the supergravity solution when some parameter is varied.
Summary and Discussion
We have exhibited a configuration of adjacent D4 and $`\overline{\mathrm{D}}`$4 branes in type IIA string theory, suspended between relatively rotated NS5-branes, which corresponds to a stable non-BPS state. A crucial assumption was that the force between adjacent brane-antibrane pairs can be estimated using a “locality” property, according to which it originates from the repulsion between the ends of these 4-branes in the NS5-brane worldvolumes. This repulsion can in turn be computed using standard orbifold techniques, valid for the model with parallel NS5-branes which is dual to a $`Z_2`$ ALE singularity.
While we do not know at present how to estimate the validity of this assumption, it is encouraging that it gives a definite and physically reasonable answer. As we have indicated in the previous section, a supergravity calculation might be one route to provide an independent check of our conclusions.
The $`3+1`$-dimensional field theory on the common worldvolume in our brane construction will be a non-supersymmetric, tachyon-free theory. Because the model is elliptic, it should flow to a CFT. One can generalise the model to include $`N_1`$ D4-branes in one segment and $`N_2`$ $`\overline{\mathrm{D}}`$4-branes in the other segment. For $`N_1=N_2=N`$ this will again flow to a CFT. Its large-$`N`$ limit should be interesting.
There are various other generalisations of our model which we have not discussed here but should be quite straightforward to analyse. This includes choosing the relative rotation of the NS5-branes to lie somewhere between 0 and $`\pi /2`$, introducing more NS5-branes rotated at various angles\[38\], and varying the spacing between the NS5-branes. One can also study non-elliptic models and incorporate semi-infinite D4-branes and $`\overline{\mathrm{D}}`$4-branes.
Above the critical radius, our model provides a situation where a brane and an antibrane are at a finite separation that is calculable in terms of various parameters including the string coupling and the radius of a compact direction. Such configurations might perhaps be useful to construct novel “brane-world” type models.
Acknowledgements:
We would like to thank Atish Dabholkar and Sandip Trivedi for helpful discussions. We are particularly grateful to Sandip Trivedi for a careful reading of the manuscript.
Appendix
The numerical value of $`L_c`$ quoted above is only approximate, since we made a crude estimate for the integral defining $`D`$. An improvement on this estimate can be made by taking
$$\begin{array}{cc}\hfill \mathcal{F}(q)& \approx -16t^2+16t^4,\qquad t\mathrm{small}\hfill \\ \hfill \mathcal{F}(q)& \approx -8+45q,\qquad t\mathrm{large}\hfill \end{array}$$
and then finding an intermediate value of $`t`$ at which these two functions match. We find that the matching point shifts from $`\frac{1}{\sqrt{2}}`$ to approximately $`0.76`$, and the value of $`(2\pi )^4D`$ decreases from $`3.60`$ to $`3.02`$. As a result, $`L_c`$ moves up to about $`0.33\,g_s^{-1}`$, an increase of $`18\%`$. This suggests that at least the order of magnitude of $`L_c`$ has been correctly estimated.
One may wonder whether the potential has a unique minimum away from 0 for $`L>L_c`$, or whether there are several minima, some of them metastable. For this, it is convenient to use the same kind of piecewise approximation (with matching point $`\frac{1}{\sqrt{2}}`$), but now without restricting to the $`r\gg 1`$ behaviour. We take the term $`V^{(2)}(r)`$ and write it as follows:
$$\begin{array}{cc}\hfill V^{(2)}(r)& =-\frac{1}{16(2\pi )^4}\int _0^{\mathrm{\infty }}\frac{dt}{t^3}e^{-8r^2t/\pi }\,\mathcal{F}(q)\hfill \\ & \simeq -\frac{1}{16(2\pi )^4}\int _0^{\frac{1}{\sqrt{2}}}\frac{dt}{t^3}e^{-8r^2t/\pi }(-16t^2)-\frac{1}{16(2\pi )^4}\int _{\frac{1}{\sqrt{2}}}^{\mathrm{\infty }}\frac{dt}{t^3}e^{-8r^2t/\pi }(-8)\hfill \end{array}$$
$$V^{(2)}\approx \frac{1}{(2\pi )^4}\left(\frac{1}{2}(1-y^2)e^{-y^2}+\left(1-\frac{y^4}{2}\right)\mathrm{Ei}(-y^2)-2\mathrm{log}y-\frac{1}{2}-\gamma \right)$$
Here, $`\mathrm{Ei}`$ is the exponential-integral function and $`\gamma `$ is the Euler constant. We have dropped an infinite constant associated with the logarithmic term in the potential, and subtracted the finite constant $`\frac{1}{2}+\gamma `$ to make the potential vanish at the origin.
Now we add the first term in Eqn.(1), which we write:
$$V^{(1)}=\frac{1}{g_s(2\pi )^4}\left(\sqrt{L^2+2r^2}-L\right)=\frac{\sqrt{\pi }}{2^{\frac{3}{4}}g_s(2\pi )^4}\left(\sqrt{\stackrel{~}{L}^2+y^2}-\stackrel{~}{L}\right)$$
where $`\stackrel{~}{L}=\frac{2^{\frac{3}{4}}}{\sqrt{\pi }}L`$, and again a constant has been subtracted to make the function vanish at the origin. It is now straightforward to plot $`V^{(1)}(y)+V^{(2)}(y)`$ for different values of $`\stackrel{~}{L}`$ (Figs. 4, 5; a plotting sketch follows the captions below). In these plots we have set $`g_s=0.1`$. For this value of $`g_s`$, the phase transition is at $`\stackrel{~}{L}_c\approx 2.65`$. We see that, at least for the $`\stackrel{~}{L}`$ values in the plots, there seems to be a unique and nonzero minimum when $`\stackrel{~}{L}>\stackrel{~}{L}_c`$.
Fig.4: Potential for $`\stackrel{~}{L}=2.6,2.7`$.
Fig.5: Potential for $`\stackrel{~}{L}=10,10000`$.
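Figs. 4 and 5 can be reproduced with a few lines of Python. This is a sketch based on the formulas above (the function names are ours), using `scipy.special.expi` for the exponential-integral function:

```python
import numpy as np
from scipy.special import expi                  # expi(x) = Ei(x)
import matplotlib.pyplot as plt

PREF = 1.0 / (2 * np.pi) ** 4                   # common 1/(2 pi)^4 prefactor
GS = 0.1                                        # string coupling used in the plots

def V2(y):
    """Interaction term V^(2)(y); the subtractions make it vanish as y -> 0."""
    return PREF * (0.5 * (1 - y**2) * np.exp(-y**2)
                   + (1 - 0.5 * y**4) * expi(-y**2)
                   - 2 * np.log(y) - 0.5 - np.euler_gamma)

def V1(y, Ltilde):
    """Tension term V^(1)(y) for rescaled NS5-brane spacing Ltilde."""
    return (np.sqrt(np.pi) / (2**0.75 * GS)) * PREF * (np.sqrt(Ltilde**2 + y**2) - Ltilde)

y = np.linspace(1e-4, 3.0, 600)
for Lt in (2.6, 2.7, 10.0):                     # below and above Ltilde_c ~ 2.65
    plt.plot(y, V1(y, Lt) + V2(y), label=f"Ltilde = {Lt}")
plt.axhline(0.0, lw=0.5); plt.xlabel("y"); plt.ylabel("V"); plt.legend(); plt.show()
```

For $`\stackrel{~}{L}=2.6`$ the potential rises from the origin, while for $`\stackrel{~}{L}=2.7`$ a single nonzero minimum develops, in line with the behaviour described above.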
References
\[1\] T. Banks and L. Susskind, “Brane-Antibrane Forces”, hep-th/9511194.
\[2\] E. Gava, K.S. Narain and M.H. Sarmadi, “On the Bound States of p-Branes and (p+2)-Branes”, hep-th/9704006; Nucl. Phys. B504 (1997) 214.
\[3\] A. Sen, “Tachyon Condensation on the Brane Anti-Brane System”, hep-th/9805170; JHEP 08 (1998) 012.
\[4\] M. Srednicki, “IIB or Not IIB”, hep-th/9807138; JHEP 08 (1998) 005.
\[5\] A. Sen, “SO(32) spinors of type I and other solitons on brane - anti-brane pair”, hep-th/9808141; JHEP 09 (1998) 023.
\[6\] E. Witten, “D-branes and K-theory”, hep-th/9810188; JHEP 12 (1998) 019.
\[7\] P. Horava, “Type IIA D-Branes, K-Theory and Matrix Theory”, hep-th/9812135; Adv. Theor. Math. Phys. 2 (1999) 1373.
\[8\] P. Yi, “Membranes from Five-Branes and Fundamental Strings from Dp-Branes”, hep-th/9901159; Nucl. Phys. B550 (1999) 214.
\[9\] A. Sen, “Descent Relations Among Bosonic D-branes”, hep-th/9902105; Int. J. Mod. Phys. A14 (1999) 4061.
\[10\] H. Awata, S. Hirano and Y. Hyakutake, “Tachyon Condensation and Graviton Production in Matrix Theory”, hep-th/9902158.
\[11\] O. Bergman, E. Gimon and P. Horava, “Brane Transfer Operations and T-duality of Non-BPS States”, hep-th/9902160; JHEP 04 (1999) 010.
\[12\] I. Pesando, “On the Effective Potential of the Dp-anti-Dp System in Type II Theories”, hep-th/9902181; Mod. Phys. Lett. A14 (1999) 1545.
\[13\] M. Frau, L. Gallot, A. Lerda and P. Strigazzi, “Stable Non-BPS D-branes in Type I String Theory”, hep-th/9903123; Nucl. Phys. B564 (2000) 60.
\[14\] N. Kim, S.-J. Rey and J.-T. Yee, “Stable Non-BPS Membranes on M(atrix) Orientifold”, hep-th/9903129; JHEP 04 (1999) 003.
\[15\] K. Dasgupta and S. Mukhi, “Brane Constructions, Fractional Branes and Anti-de Sitter Domain Walls”, hep-th/9904131; JHEP 07 (1999) 008.
\[16\] S.P. de Alwis, “Tachyon Condensation in Rotated Brane Configurations”, hep-th/9905080; Phys. Lett. B461 (1999) 329.
\[17\] C. Kennedy and A. Wilkins, “Ramond-Ramond Couplings on Brane-Antibrane Systems”, hep-th/9905195; Phys. Lett. B464 (1999) 206.
\[18\] G. Aldazabal and A.M. Uranga, “Tachyon Free Nonsupersymmetric Type IIB Orientifolds Via Brane-AntiBrane Systems”, hep-th/9908072; JHEP 10 (1999) 024.
\[19\] M. Gaberdiel and B. Stefanski, “Dirichlet Branes on Orbifolds”, hep-th/9910109.
\[20\] J. Majumder and A. Sen, “‘Blowing Up’ D-Branes on Nonsupersymmetric Cycles”, hep-th/9906109; JHEP 09 (1999) 004.
\[21\] C. Angelantonj, “Non-supersymmetric Open String Vacua”, hep-th/9907054.
\[22\] I. Antoniadis, E. Dudas and A. Sagnotti, “Brane Supersymmetry Breaking”, hep-th/9908023; Phys. Lett. B464 (1999) 38.
\[23\] M. Gaberdiel and A. Sen, “Nonsupersymmetric D-Brane Configurations with Bose-Fermi Degenerate Open String Spectrum”, hep-th/9908060; JHEP 11 (1999) 008.
\[24\] D. Youm, “Delocalized Supergravity Solutions for Brane/Anti-brane Systems and Their Bound States”, hep-th/9908182.
\[25\] A. Sen, “Supersymmetric World-Volume Action For Non-BPS D-Branes”, hep-th/9909062; JHEP 10 (1999) 008.
\[26\] M. Mihailescu, K. Oh and R. Tatar, “Non-BPS Branes on a Calabi-Yau Threefold and Bose-Fermi Degeneracy”, hep-th/9910249.
\[27\] L. Houart and Y. Lozano, “Type II Branes from Brane-antibrane in M-theory”, hep-th/9910266.
\[28\] C. Angelantonj, I. Antoniadis, G. D’Appollonio, E. Dudas and A. Sagnotti, “Type I Vacua with Brane Supersymmetry Breaking”, hep-th/9911081.
\[29\] R. Russo and C.A. Scrucca, “On the Effective Action of Stable Non-BPS Branes”, hep-th/9912090.
\[30\] A. Sen, “Universality of the Tachyon Potential”, hep-th/9911116; JHEP 12 (1999) 027.
\[31\] A. Sagnotti, “Open String Models with Broken Supersymmetry”, hep-th/0001077.
\[32\] O. Bergman, K. Hori and P. Yi, “Confinement on the Brane”, hep-th/0002223.
\[33\] A. Sen, “Non-BPS States and Branes in String Theory”, hep-th/9904207.
\[34\] A. Lerda and R. Russo, “Stable Non-BPS States in String Theory: A Pedagogical Review”, hep-th/9905006.
\[35\] O. Bergman and M. Gaberdiel, “Non-BPS Dirichlet Branes”, hep-th/9908126.
\[36\] J.H. Schwarz, “TASI Lectures on Non-BPS D-brane systems”, hep-th/9908144.
\[37\] S. Mukhi, N.V. Suryanarayana and D. Tong, “Brane-Antibrane Constructions”, hep-th/0001066.
\[38\] A. Uranga, “Brane Configurations for Branes at Conifolds”, hep-th/9811004; JHEP 01 (1999) 022.
\[39\] K. Dasgupta and S. Mukhi, “Brane Constructions, Conifolds and M-Theory”, hep-th/9811139; Nucl. Phys. B551 (1999) 204.
\[40\] S. Gubser, N. Nekrasov and S. Shatashvili, “Generalized Conifolds and 4-dimensional N=1 Superconformal Field Theory”, hep-th/9811230; JHEP 05 (1999) 003.
\[41\] E. Witten, “Solutions of Four Dimensional Field Theories Via M-theory”, hep-th/9703166; Nucl. Phys. B500 (1997) 3.
\[42\] H. Ooguri and C. Vafa, “Two-Dimensional Black Hole and Singularities of CY Manifolds”, hep-th/9511164; Nucl. Phys. B463 (1996) 55; B. Andreas, G. Curio and D. Lüst, “The Neveu-Schwarz Five-Brane and its Dual Geometries”, hep-th/9807008; JHEP 10 (1998) 022; A. Karch, D. Lüst and D. Smith, “Equivalence of Geometric Engineering and Hanany-Witten via Fractional Branes”, hep-th/9803232; Nucl. Phys. B533 (1998) 348.
\[43\] A. Kehagias, “New Type IIB Vacua and Their F-theory Interpretation”, hep-th/9805131; Phys. Lett. B435 (1998) 337.
\[44\] I. Klebanov and N. Nekrasov, “Gravity Duals of Fractional Branes and Logarithmic RG Flow”, hep-th/9911096.
\[45\] C.V. Johnson, A.W. Peet and J. Polchinski, “Gauge Theory and the Excision of Repulson Singularities”, hep-th/9911161.
\[46\] I. Klebanov and A.A. Tseytlin, “Gravity Duals of Supersymmetric $`SU(N)\times SU(N+M)`$ Gauge Theories”, hep-th/0002159.
\[47\] K. Oh and R. Tatar, “Renormalization Group Flows on D3 branes at an Orbifolded Conifold”, hep-th/0003183.
# Classical mechanics technique for quantum linear response
## Abstract
It is shown that the lowest excitation energies of a quantum many-fermion system in the random phase approximation (RPA) can be obtained by minimizing an effective classical energy functional. The minimum can be found very efficiently using a generalized Lanczos technique. Application of the new technique to molecular spectra allows excited states to be computed at an expense comparable to that of ground-state calculations. As an example, the first-principles RPA excitation spectrum of the C60 molecule is computed, taking into account all 240 valence electrons in the full valence space of the molecule. The results match the linear absorption experiment to within a few percent.
Random phase approximation is central to the theory of electronic excitations in molecules and in extended systems. More generally, it resides in the core of the theory of linear response of correlated many-particle systems, and is widely used to describe correlation effects in the excitations of nuclei, molecules, semiconductor quantum wells, quantum dots, and bulk materials. Large systems of RPA-type equations are of particular significance in the photochemistry of large molecules. Understanding such processes as photosynthesis and light reception in vision requires a detailed description of the evolution of large biological molecular complexes upon optical excitation, which is governed by the configuration of excited-state adiabatic surfaces. Yet, unlike ground-state molecular calculations, which are now considered a routine job with the powerful program packages at hand, the modeling of excited states is a much more difficult task.
The reason behind this difficulty is that electronic correlations are usually much more pronounced in the excited states. In other words, even when the ground-state electronic wavefunction can be reliably approximated using Hartree-Fock (HF) or density functional theory (DFT), the electronic wavefunction in the excited state cannot be described as a single Slater determinant.
RPA is one of the standard tools to treat electronic correlations in the excited states of quantum many-particle systems. It belongs to a broader family of so-called time-dependent techniques, such as time-dependent HF (TDHF) and time-dependent DFT (TDDFT). These methods target the excitation energies of the system directly, by associating these energies with the frequencies of oscillations of the ansatz parameters when the system is driven out of equilibrium. The static HF or DFT ground state is the best Slater determinant, the one that minimizes a certain energy functional. TDHF and TDDFT describe the time evolution of the respective Slater determinant near the equilibrium.
An assumption is made that the time-dependent wavefunction remains a single Slater determinant at each moment in time. Projection of the time-dependent Schroedinger equation onto the set of all Slater determinants yields a set of essentially classical Hamiltonian equations of motion. In linear response, when the deviation from equilibrium is small, the motion represents small oscillations, whose frequencies are associated naturally with the excitation energies of the system. In the small-oscillation limit TDHF is equivalent to RPA.
RPA excitation energies are obtained as the eigenvalues of a non-Hermitian matrix
$$\left(\begin{array}{cc}A& B\\ -B& -A\end{array}\right)$$
(1)
The $`N\times N`$ symmetric matrices $`A`$ and $`B`$ describe particle-particle interaction. Their matrix elements are simple combinations of the two-particle interaction matrix elements of the Hamiltonian in the basis of HF orbitals.
Configuration interaction singles (CIS), which is another commonly used technique to describe correlation effects, is in fact an approximation to RPA, and can be recovered by setting $`B=0`$.
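As a concrete illustration, the NumPy sketch below builds a stable RPA problem from random symmetric matrices (the dimension and the way $`A`$ and $`B`$ are generated are arbitrary choices for the demonstration), checks that the spectrum of the matrix (1) is real and comes in $`\pm \omega `$ pairs, and compares the lowest RPA excitation energy with its CIS counterpart obtained by setting $`B=0`$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 40
a = rng.standard_normal((N, N)); A = (a + a.T) / 2 + N * np.eye(N)  # shift -> stability
b = rng.standard_normal((N, N)); B = (b + b.T) / 2

M = np.block([[A, B], [-B, -A]])          # the non-Hermitian RPA matrix of Eq. (1)
w = np.linalg.eigvals(M)
print(np.max(np.abs(w.imag)))             # ~0: a stable reference gives a real spectrum
w_rpa = np.sort(w.real[w.real > 0])[0]    # lowest positive RPA eigenvalue
w_cis = np.linalg.eigvalsh(A)[0]          # CIS: neglect B entirely
print(w_rpa, w_cis)                       # the RPA energy lies below its CIS counterpart
```

The last comparison follows from the minimum principle discussed below: evaluating that functional at the lowest CIS eigenvector gives exactly the CIS energy, so the true RPA minimum cannot exceed it.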
In contrast to static HF, where the number of equations scales linearly with the number of particles, the size $`2N`$ of the matrix (1) grows quadratically with the size of the single-particle Hilbert space. This impedes diagonalization of the matrix (1) for relatively large systems. For example, the RPA equations for the singlet excited states of the C60 molecule in the full valence space basis lead to a matrix of size $`2N=28,800`$. On the other hand, a complete solution of the TDHF equations is not always necessary. In many cases, only a few low-energy excitonic states are of interest.
This amounts to computing only a few extremal eigenvectors of the matrix (1) — a task similar to the standard quantum-mechanical problem of computing a few low-energy eigenstates $`\psi `$ of a Hermitian matrix $`H`$. The latter problem can be solved very efficiently using the Hermitian Lanczos algorithm, or any of its various modifications. In essence, the algorithm builds a Krylov subspace $`𝒦_n`$ of the matrix $`H`$, and then finds the best approximation to $`\psi `$ in $`𝒦_n`$ by minimizing the expectation value of the energy $`(\psi H\psi )`$.
There exist many variations of the Lanczos algorithm that allow one to find eigenpairs of non-Hermitian matrices. These methods, however, lose much of the performance of the Hermitian Lanczos algorithm because of the lack of a minimum principle. Indeed, no general minimum principle exists that yields eigenvalues of non-Hermitian matrices.
A lot of effort has been put into developing reliable methods for computing selected eigenvalues of RPA-type matrices (1). The Davidson algorithm has been extended to solve the RPA equations as a general non-Hermitian eigenvalue problem, and it has also been modified to preserve the special paired structure of the matrix (1). Tretiak et al. have developed a density-matrix-spectral-moments algorithm (DSMA) based on generalized sum rules for the response theory. The symplectic Lanczos algorithm suggested by Mei and improved by Benner exploits the analogy between the unitary transformations that preserve Hermiticity and the symplectic transformations that preserve the paired structure of (1). A Newton-Raphson-type iterative procedure has been developed as well. Finally, the oblique Lanczos algorithm for general non-Hermitian matrices has been applied to the TDHF problem.
It has been largely overlooked that, although the RPA-type matrix is non-Hermitian, its block paired structure gives it some properties similar to those of Hermitian matrices. In particular, there does exist a minimum principle that yields the lowest positive eigenvalue of (1). It was suggested by Thouless back in 1961 and reads
$$\omega _{\mathrm{min}}=\underset{\{x,y\}}{\mathrm{min}}\frac{(x,y)\left(\begin{array}{cc}A& B\\ B& A\end{array}\right)\left(\begin{array}{c}x\\ y\end{array}\right)}{|(xx)-(yy)|}$$
(2)
The minimum is to be taken over all $`N`$-vectors $`x`$ and $`y`$. The minimum always exists, since the HF stability condition keeps the numerator positive. Note that the denominator can be arbitrarily small, and therefore the expression has no maximum.
The eigenvalue equation for the matrix (1) can be transformed into the form of Hamiltonian equations of motion for classical oscillations by the substitution $`T=A+B`$ and $`K=A-B`$:
$$Tp=\omega q,Kq=\omega p,$$
(3)
Here vectors $`q`$ and $`p`$ play the role of the conjugate canonical coordinates and momenta, while $`K`$ and $`T`$ are the matrices of stiffness and kinetic coefficients respectively.
The lowest frequency of a harmonic Hamiltonian system can be obtained as a minimum of its total energy over all phase-space configurations $`\{p,q\}`$ normalized by $`(pq)=1`$:
$$\omega _{\mathrm{min}}=\underset{(pq)=1}{\mathrm{min}}\frac{(pTp)}{2}+\frac{(qKq)}{2}.$$
(4)
Indeed, variation of (4) with respect to $`p`$ and $`q`$ yields the Hamiltonian equations of motion (3). The minimum principle (4) is equivalent to the Thouless minimum principle (2), where $`x=p+q`$ and $`y=p-q`$.
The two terms on the right-hand side of (4) are the kinetic and the potential energies of the configuration $`\{p,q\}`$. Both are positive for any $`p`$ and $`q`$ when the equilibrium is stable. The positive definiteness of $`K`$ and $`T`$ leads to the positive definiteness of the matrix in (2), and thus to the HF stability condition, making all eigenfrequencies real.
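The equivalence can be checked numerically; the sketch below (the same kind of random stable matrices as above, an illustration only) verifies that the functional of Eq. (4) equals $`\omega _{\mathrm{min}}`$ at the lowest normal mode and stays above it for randomly chosen feasible configurations:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 30
a = rng.standard_normal((N, N)); A = (a + a.T) / 2 + N * np.eye(N)
b = rng.standard_normal((N, N)); B = (b + b.T) / 2
T, K = A + B, A - B                      # kinetic and stiffness matrices, both SPD here

def energy(p, q):
    """Functional of Eq. (4) after rescaling the pair so that (p.q) = 1."""
    s = p @ q                            # assumes s > 0, i.e. a normalizable pair
    return (p @ T @ p + q @ K @ q) / (2 * s)

w2, P = np.linalg.eig(K @ T)             # K T p = w^2 p follows from Eq. (3)
i = np.argmin(w2.real)
w_min = np.sqrt(w2.real[i])
p = P[:, i].real
q = T @ p / w_min                        # companion coordinate from T p = w q
print(w_min, energy(p, q))               # the bound is attained at the normal mode

samples = []
while len(samples) < 1000:
    pr, qr = rng.standard_normal(N), rng.standard_normal(N)
    if pr @ qr > 0:
        samples.append(energy(pr, qr))
print(min(samples) > w_min)              # random configurations never undershoot
```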
The minimum in (4) can be found easily using the generalized Lanczos recursion
$`q_{i+1}`$ $`=`$ $`\beta _{i+1}^{-1}(Tp_i-\alpha _iq_i-\beta _iq_{i-1})`$ (6)
$`p_{i+1}`$ $`=`$ $`\delta _{i+1}^{-1}(Kq_i-\gamma _ip_i-\delta _ip_{i-1}),`$ (7)
which generates configuration-space vectors $`(q_i,p_i)`$ that span the Krylov subspace of the eigenvalue problem (3). When the four coefficients $`\alpha _i`$, $`\beta _i`$, $`\gamma _i`$, and $`\delta _i`$ are chosen at each step $`i`$ to ensure $`(q_{i+1}p_i)=(q_{i+1}p_{i-1})=(p_{i+1}q_i)=(p_{i+1}q_{i-1})=0`$, the vectors $`p_i`$, $`q_i`$ form a biorthogonal basis, $`(p_iq_j)=\delta _{ij}`$, and the matrices $`\stackrel{~}{K}_{ij}=(q_iKq_j)`$ and $`\stackrel{~}{T}_{ij}=(p_iTp_j)`$ are symmetric tridiagonal, with the only nonzero matrix elements $`\stackrel{~}{T}_{ii}=\alpha _i`$, $`\stackrel{~}{T}_{i,i-1}=\stackrel{~}{T}_{i-1,i}=\beta _i`$, $`\stackrel{~}{K}_{ii}=\gamma _i`$, and $`\stackrel{~}{K}_{i,i-1}=\stackrel{~}{K}_{i-1,i}=\delta _i`$ (note that $`\alpha _i`$ and $`\beta _i`$, which enter the recursion for $`q_{i+1}`$ together with $`T`$, are matrix elements of $`\stackrel{~}{T}`$, while $`\gamma _i`$ and $`\delta _i`$ are matrix elements of $`\stackrel{~}{K}`$). Expanding $`q=\sum _ic_iq_i`$ and $`p=\sum _id_ip_i`$, we arrive at the $`2n\times 2n`$ eigenvalue problem
$$\stackrel{~}{T}d=\stackrel{~}{\omega }c,\stackrel{~}{K}c=\stackrel{~}{\omega }d,$$
(8)
which has the same structure as (3). The lowest positive eigenvalue $`\stackrel{~}{\omega }_{\mathrm{min}}`$ of (8) gives an approximation to the true lowest frequency $`\omega _{\mathrm{min}}`$. The accuracy is found to improve exponentially with increasing $`n`$.
When the lowest-frequency normal mode $`q^{(1)},p^{(1)}`$ is found, the second-lowest normal mode $`q^{(2)},p^{(2)}`$ can be obtained by choosing initial vectors $`q_1`$ and $`p_1`$ orthogonal to $`p^{(1)}`$ and $`q^{(1)}`$ respectively. As follows from Eq. (4) such a choice causes all vectors $`q_i`$ and $`p_i`$ to remain orthogonal to $`p^{(1)}`$ and $`q^{(1)}`$. An oblique projection can be used to correct for the loss of orthogonality with respect to $`p^{(1)}`$ and $`q^{(1)}`$ that may occur at large $`n`$. Namely, the necessary amounts of $`q^{(1)}`$ and $`p^{(1)}`$ should be subtracted from $`q_i`$ and $`p_i`$ respectively to ensure $`(q_ip^{(1)})=(p_iq^{(1)})=0`$. Higher-frequency TDHF solutions can be found one by one in this way.
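A minimal Python sketch of the recursion follows. The function name and the way the joint normalization $`(p_{i+1}q_{i+1})=1`$ is split between $`\beta _{i+1}`$ and $`\delta _{i+1}`$ are our own choices, and a serious implementation would also guard against breakdown ($`s\approx 0`$) and reorthogonalize in finite-precision arithmetic:

```python
import numpy as np

def lowest_rpa_frequency(T, K, n_iter=40, seed=0):
    """Generalized Lanczos sketch for T p = w q, K q = w p (Eqs. 3, 6-8)."""
    rng = np.random.default_rng(seed)
    N = T.shape[0]
    q = rng.standard_normal(N)
    p = rng.standard_normal(N)
    p /= p @ q                                   # enforce (p1 q1) = 1
    q_prev, p_prev = np.zeros(N), np.zeros(N)
    beta = delta = 0.0
    al, be, ga, de = [], [], [], []
    for _ in range(n_iter):
        alpha = p @ T @ p                        # diagonal of tilde-T
        gamma = q @ K @ q                        # diagonal of tilde-K
        u = T @ p - alpha * q - beta * q_prev    # unnormalized q_{i+1}, Eq. (6)
        v = K @ q - gamma * p - delta * p_prev   # unnormalized p_{i+1}, Eq. (7)
        s = u @ v
        beta_n = np.sqrt(abs(s))                 # split of the normalization s
        delta_n = s / beta_n                     # so that (p_{i+1} q_{i+1}) = 1
        al.append(alpha); ga.append(gamma); be.append(beta); de.append(delta)
        q_prev, p_prev, q, p = q, p, u / beta_n, v / delta_n
        beta, delta = beta_n, delta_n
    Tt = np.diag(al) + np.diag(be[1:], 1) + np.diag(be[1:], -1)
    Kt = np.diag(ga) + np.diag(de[1:], 1) + np.diag(de[1:], -1)
    w2 = np.linalg.eigvals(Tt @ Kt)              # projected problem, Eq. (8)
    return np.sqrt(w2.real[w2.real > 0].min())

# quick comparison against dense diagonalization of Eq. (3)
rng = np.random.default_rng(2)
N = 100
a = rng.standard_normal((N, N)); A = (a + a.T) / 2 + N * np.eye(N)
b = rng.standard_normal((N, N)); B = (b + b.T) / 2
T, K = A + B, A - B
print(lowest_rpa_frequency(T, K))
print(np.sqrt(np.linalg.eigvals(T @ K).real.min()))
```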
In order to demonstrate the power of the new technique, it has been applied to compute the excitation spectrum of the fullerene C60 molecule. Despite the great attention this molecule has received in the past several years, no adequate correlated calculation of the excited states of C60 has been reported so far.
As noted above, CIS in the entire particle-hole configuration space can be seen as an approximation to RPA in which the matrix $`B`$ in (1) is neglected. Yet, the large size of the molecule has prevented full diagonalization of the CIS matrix in the entire space. Calculations for C60 have been reported using CIS with the CNDO/S Hamiltonian in a truncated space of up to 1295 out of 14400 particle-hole configurations, and using TDDFT with the B-P86 functional. A surprisingly high energy of 5.13 eV for the lowest optically allowed transition has also been reported from an ab initio Rettrup-type RPA calculation.
The technique outlined above has made it possible to solve the RPA equations in the entire valence particle-hole configuration space of the molecule (matrix size $`2N=2\times 14400`$). The INDO/S semiempirical parameterization of the Hamiltonian was used, which is substantially better than CNDO/S and was shown to give an especially good description of the excitation spectra of $`\pi `$-conjugated molecules at the CIS/RPA level of theory.
Experimental values of 1.46 and 1.455 Å were chosen for the single and double bond lengths respectively, which completely determines the geometry of the molecule (see Fig. 1). INDO/S Hamiltonian matrix elements were generated using the ZINDO program. Only the singlet states of C60 were studied. The calculation was performed on a DEC Alpha 500au workstation. Solution of the static HF equations took about 2 min of CPU time, compared to about 6 min per excited state.
The present results allow, for the first time, a direct comparison with experiment. As shown in Table I, the energies of the optically allowed transitions obtained are within a few percent of the features observed in the linear absorption of C60 in solution. In earlier work the modes were found to have a systematic red shift of 0.35 eV; no systematic shift was observed in the present study. The almost perfect match of all transition energies to the features seen in linear absorption makes it possible to resolve the controversy over the assignment of the lowest optically allowed transition in favor of the value of 2.87 eV, which is opposite to the conclusion of that earlier work.
Fig. 2 shows the complete excitation spectrum obtained. A total of 500 singlet excited states have been computed. The excitation energies were found to be degenerate 1, 3, 4, or 5 times, in accordance with the multiplicities of the irreducible representations of the I<sub>h</sub> symmetry group. No symmetry-induced simplification of the problem has been used. A symmetry analysis was performed for each mode after it was computed, and the irreducible representation of the symmetry group was assigned.
Table I. C60 experimental and theoretical electronic excitation energies and experimental oscillator strengths. Experimental values are from linear absorption in n-hexane. Percent values are the deviations with respect to the experiment.
| Absorption experiment $`\hbar \omega `$, eV | Absorption experiment $`f_{osc}`$ | RPA INDO/S (full space) $`\hbar \omega `$, eV |
| --- | --- | --- |
| 3.04 | 0.015 | 2.874 (5%) |
| 3.30 | | 3.505 (6%) |
| 3.78 | 0.37 | 3.782 (0%) |
| 4.06 | 0.10 | 3.924 (3%) |
| 4.35 | | 4.287 (1%) |
| 4.84 | | 5.031 (4%) |
| 5.46 | 2.27 | 5.150 (6%) |
| 5.88 | | 5.816 (1%) |
| 6.008 | | 6.078 |
| 6.36 | | 6.202 (2%) |
High symmetry of the molecule causes the majority of states to be optically dark. Only the states of $`T_{1u}`$ symmetry may have nonzero oscillator strengths and show up in linear absorption. It seems that the abundance of singlet optically dark states below the first optically allowed transition is not fully appreciated. The present result may therefore shed some light on the controversial issue of the apparently anomalously fast singlet-to-triplet relaxation.
The problem could have been simplified by taking symmetry considerations into account before the RPA equations are solved. That would, however, run counter to the purpose of this Letter, which is first of all to demonstrate the performance of the method on a complex problem. In particular, specific difficulties could have been expected from the high level of degeneracy in the spectrum. No problems of that kind have been noticed.
In conclusion, a new method is proposed for solving RPA-type equations with a computational effort comparable to that required to solve the static self-consistent-field equations for the ground state. The method makes it possible to compute low-energy excitonic states at a level of theory that may be hard or impossible to achieve using conventional techniques.
As suggested previously, calculation of the electronic excitation energy at various nuclear configurations effectively yields the excited-state adiabatic surface of the molecule, provided that the ground-state adiabatic surface is known. Thus, the ability to compute the excitation energy at a computational expense comparable to that of the ground-state calculation can provide a long-sought opportunity to perform realistic molecular-dynamics simulations of photochemical reactions of large biological molecules.
Acknowledgements.
I am grateful to I.L. Aleiner for discussions and for the opportunity to pursue this research.
# REMARKS ON ANOMALOUS U(1) SYMMETRIES IN STRING THEORY

Talk given at COSMO-99, International Workshop on Particle Physics and the Early Universe, 27 September - 2 October 1999, ICTP, Trieste, Italy.
## 1 Introduction
The appearance of anomalous U(1) gauge symmetries in the framework of string theory has received considerable attention. While primarily the motivation to study such symmetries was of theoretical origin, it was soon realized that there could be interesting applications to model building. This included the possible role of induced Fayet-Iliopoulos terms for gauge and supersymmetry breakdown and the appearance of global symmetries relevant for the strong CP-problem and questions of baryon and lepton number conservation. Cosmological applications can be found in a discussion of D-term inflation and the creation of the cosmological baryon asymmetry.
In string theory, anomalous U(1) gauge symmetries can serve as tools to study detailed properties of duality symmetries and the question of supersymmetry breakdown. Most recently this became apparent in attempts to relate orbifold compactifications of the perturbative heterotic string to orientifolds of Type II string theory. In the present talk I shall report on results obtained in collaboration with Z. Lalak and S. Lavignac. Lack of space and time allows just a summary of basic results. For details and a more complete list of references we refer the reader to the original publications.
## 2 Anomalous $`U(1)`$’s in heterotic string theory
In field theoretic models we were taught to discard anomalous gauge symmetries in order to avoid inconsistencies. This requirement was even extended to the condition that the trace of the charges of a $`U(1)`$ gauge symmetry vanish, $`\sum _iQ_i=0`$, because of mixed gauge and gravitational anomalies. Moreover, a nonvanishing trace of the $`U(1)`$ charges would reintroduce quadratic divergencies in supersymmetric theories through a one-loop Fayet-Iliopoulos term. In string theory we then learned that one can tolerate anomalous $`U(1)`$ gauge symmetries due to the appearance of the Green-Schwarz mechanism that provides a mass for the anomalous gauge boson. In fact, anomalous $`U(1)`$ gauge symmetries are common in string theories and could be useful for various reasons.
In the case of the heterotic string one obtains models with at most one anomalous $`U(1)`$, and the Green-Schwarz mechanism involves the so-called model independent axion (the pseudoscalar of the dilaton superfield $`S`$). The number of potentially anomalous gauge bosons is in general limited by the number of antisymmetric tensor fields in the ten-dimensional ($`d=10`$) string theory. This explains the appearance of only one such gauge boson in the perturbative heterotic string theory and leads to specific correlations between the various (mixed) anomalies. This universal anomaly structure is tied to the coupling of the dilaton multiplet to the various gauge bosons.
The appearance of a nonvanishing trace of the $`U(1)`$ charges leads to the generation of a Fayet-Iliopoulos term $`\xi ^2`$ at one loop. In the low energy effective field theory this would be quadratically divergent, but in string theory this divergence is cut off through the inherent regularization due to modular invariance. One obtains
$$\xi ^2\sim \frac{1}{(S+S^{*})}M_{\mathrm{Planck}}^2\sim M_{\mathrm{String}}^2$$
(1)
where $`(S+S^{*})\sim 1/g^2`$ with the string coupling constant $`g`$. The Fayet-Iliopoulos term of the order of the string scale $`M_{\mathrm{String}}`$ is thus generated in perturbation theory. This could in principle lead to a breakdown of supersymmetry, but in all known cases there exists a supersymmetric minimum in which charged scalar fields receive nonvanishing vacuum expectation values (vevs) that break $`U(1)_A`$ (and even other gauge groups) spontaneously. This then leads to a mixing of the goldstone boson (as a member of a matter supermultiplet) of this spontaneous breakdown and the model-independent axion (as a member of the dilaton multiplet) of the Green-Schwarz mechanism. One of the linear combinations will provide a mass to the anomalous gauge boson. The other combination will obtain a mass via nonperturbative effects that might even be related to an axion solution of the strong CP-problem. As we can see from (1), both the mass of the $`U(1)_A`$ gauge boson and the value of the Fayet-Iliopoulos term $`\xi `$ are of the order of the string scale. Nonetheless, models with an anomalous $`U(1)`$ have been considered under various circumstances and lead to a number of desirable consequences. Among those are the breakdown of some additional nonanomalous gauge groups, a mechanism to parametrize the fermion mass spectrum in an economical way, the possibility to induce a breakdown of supersymmetry, a satisfactory incorporation of D-term inflation, and the possibility of an axion solution of the strong CP-problem.
The nice property of the perturbative heterotic string theory in the presence of an anomalous $`U(1)`$ is the fact that both $`\xi `$ and the mass of the anomalous gauge boson are induced dynamically and not just put in by hand. Both of them, though, are of order of the string scale $`M_{\mathrm{String}}`$, which might be too high for some of the applications. We will now compare this for the case of type I and type II orientifolds.
## 3 Anomalous $`U(1)`$’s in type I and type II theories
We consider $`d=4`$ string models of both open and closed strings that are derived from either type I or type II string theories in $`d=10`$ by appropriate orbifold or orientifold projections. It was noticed that in these cases more than a single anomalous $`U(1)`$ symmetry could be obtained. This led to the belief that here we can deal with a new playground of various sizes of $`\xi `$'s and gauge boson masses in the phenomenological applications.
The appearance of several anomalous $`U(1)`$’s is a consequence of the fact that these models contain various antisymmetric tensor fields in the higher dimensional theory and the presence of a generalized Green-Schwarz mechanism involving axion fields in new supermultiplets $`M`$. In the type II orientifolds under consideration these new axion fields correspond to twisted fields in the Ramond-Ramond sector of the theory.
From experience with the heterotic case it was then assumed that for each anomalous $`U(1)`$ a Fayet-Iliopoulos term was induced dynamically. With a mixing of the superfields $`M`$ and the dilaton superfield $`S`$ one hoped for $`U(1)_A`$ gauge boson masses of various sizes in connection with various sizes of the $`\xi `$’s.
The picture of duality between heterotic orbifolds and type II orientifolds as postulated previously seemed to work even in the presence of several anomalous $`U(1)`$ gauge bosons, assuming the presence of Fayet-Iliopoulos terms in perturbation theory and the presence of the generalized Green-Schwarz mechanism. So superficially everything seemed to be understood. But apparently the situation turned out to be more interesting than anticipated.
## 4 Some Surprises
There appeared two decisive results that initiated renewed interest in these questions and forced us to reanalyse the situation. The first one concerns the inspection of the anomaly cancellation mechanism in various type II orientifolds. As was observed by Ibáñez, Rabadan and Uranga, in this class of models there is no mixing between the dilaton multiplet and the $`M`$-fields. It is solely the latter that contribute to the anomaly cancellation. Thus the dilaton, which is at the origin of the Green-Schwarz mechanism in the heterotic theory, does not participate in that mechanism in the dual orientifold picture. The second new result concerns the appearance of the Fayet-Iliopoulos terms. As was shown by Poppitz in a specific model, no $`\xi `$'s were generated in one-loop perturbation theory. The one-loop contribution vanishes because of tadpole cancellation in the given theory. This result seems to be of more general validity and could have been anticipated from general arguments, since in type I theory a (one-loop) contribution to a Fayet-Iliopoulos term either vanishes or is quadratically divergent, and the latter divergence is avoided by the requirement of tadpole cancellation. Of course, there is a possibility to have tree-level contributions to the $`\xi `$'s, but they are undetermined, in contrast to the heterotic case where $`\xi `$ is necessarily nonzero because of the one-loop contribution. In type II theory such a contribution would have to be of nonperturbative origin. In the heterotic theory the mass of the anomalous gauge boson was proportional to the value of $`\xi `$. If a similar result held in the orientifold picture, this would mean that some of the $`U(1)`$ gauge bosons could become arbitrarily light or even massless, a situation somewhat unexpected from our experience with consistent quantum field theories. In any case, a careful reevaluation of several questions is necessary in the light of this new situation. Among those are: the size of the $`\xi `$'s, the size of the masses of anomalous $`U(1)`$ gauge bosons, the relation between $`\xi `$ and the gauge boson mass, as well as the fate of heterotic - type IIB orientifold duality, which we will discuss in the remainder of this talk.
The questions concerning the anomalous gauge boson masses have since been answered. Generically the masses are large, of the order of the string scale, even if the corresponding Fayet-Iliopoulos terms vanish. This is in agreement with the field theoretic expectation that the masses of anomalous gauge bosons cannot be small or even zero. There is one possible exception, however. In the limit where the gauge coupling constant tends to zero, one could have vanishing masses. In this case, one would deal with a global U(1), which can be tolerated in field theory even if it is anomalous.
## 5 Heterotic-Type I Duality
Models containing anomalous $`U(1)`$ factors offer an arena to study details of Type I/II - Heterotic duality in four dimensions. This duality, is of the weak coupling - strong coupling type in ten dimensions. In four dimensions the relation between the heterotic and type I dilatons is
$$\varphi _H=\frac{1}{2}\varphi _I-\frac{1}{8}\mathrm{log}(G_I)$$
(2)
where $`G_I`$ is the determinant of the metric of the compact 6d space, which depends on moduli fields. For certain relations between the dilaton and these moduli fields we thus have a duality in four dimensions which maps a weakly coupled theory to another weakly coupled theory.
For the remainder of the discussion we have to be very careful with the definition of heterotic - type I duality. Such a duality has first been discussed in ten dimensions. It was explicitly understood as a duality between the original $`SO(32)`$ type I theory and the heterotic theory with the same gauge group, that is a duality between two theories that both have one antisymmetric tensor field in ten dimensions. This is a very well established duality symmetry which will not be the focus of our discussion here. We would like to concentrate on a four-dimensional duality symmetry between more general type II orientifolds and the heterotic $`SO(32)`$ theory. We call this heterotic - type II orientifold duality. It would relate theories that have a different number of antisymmetric tensor fields in their ten-dimensional origin.
The pairs of models which we study are type IIB orientifold models in 4d and their candidate heterotic duals, which can be found in the existing literature. As an example consider the $`Z_3`$ orientifold/orbifold. The type IIB orientifold model has the gauge group $`G=SU(12)\times SO(8)\times U(1)_A`$ where the $`U(1)_A`$ factor is anomalous. The anomalies are non-universal and get cancelled by means of the generalized Green-Schwarz mechanism. This mechanism involves twenty-seven twisted singlets $`M_{\alpha \beta \gamma }`$, a particular combination of which combines with the anomalous vector superfield to form a massive multiplet. After the decoupling of this heavy vector multiplet we obtain the nonanomalous model with the gauge group $`G^{}=SU(12)\times SO(8)`$.
On the heterotic side, with the heterotic $`SO(32)`$ superstring compactified on the orbifold $`T^6/Z_3`$, the gauge group is $`G=SU(12)\times SO(8)\times U(1)_A`$ and the $`U(1)_A`$ is again anomalous. Its anomalies, however, are universal in this case, and a universal, only dilaton-dependent, Fayet-Iliopoulos term is generated. In this case there are also fields which are charged only under the anomalous $`U(1)`$ that can compensate for the Fayet-Iliopoulos term by assuming a nontrivial vacuum expectation value, without breaking the gauge group any further; a combination of these fields and of the dilaton supermultiplet is absorbed by the anomalous vector multiplet. These nonabelian singlets are the counterparts of the $`M_{\alpha \beta \gamma }`$ moduli of the orientifold model. However, on the heterotic side we have additional states charged under $`U(1)_A`$ (and also under $`SO(8)`$), the counterparts of which are not present in the orientifold model. These unwanted states become heavy in a supersymmetric manner through the superpotential couplings. Below the scale of the heavy gauge boson mass we have a pair of models whose spectra fulfil the duality criteria.
There are, however, arguments that this duality symmetry might not be universally valid. The first doubts came from a study of the $`Z_7`$ examples. There it was shown that the spectra of the two candidate duals did not match for certain values of the moduli fields. These doubts were confirmed by a calculation of gauge coupling constants. Finally it was shown that certain global symmetries that were found to hold on the heterotic side did not have counterparts in the orientifold picture.
## 6 Outlook
The presence of anomalous $`U(1)`$ symmetries can have interesting phenomenological applications both in the heterotic and the type I case. In heterotic string compactifications, the presence of an anomalous $`U(1)`$ shows up primarily in the existence of a nonvanishing Fayet-Iliopoulos term $`\xi `$. If such a term is somewhat smaller than the Planck scale, this could explain the origin and hierarchies of the small dimensionless parameters in the low-energy lagrangian, such as the Yukawa couplings, in terms of the ratio $`\xi /M_{Pl}`$. In explicit string models, $`\xi `$ is found to be of the order of magnitude necessary to account for the value of the Cabibbo angle. Furthermore, the universality of the mixed gauge anomalies implies a successful relation between the value of the weak mixing angle at unification and the observed fermion mass hierarchies. The anomalous $`U(1)`$ could also play an important role in supersymmetry breaking: not only does it take part in its mediation from the hidden sector to the observable sector (as implied by the universal Green-Schwarz relation among mixed gauge anomalies), but also it can trigger the breaking of supersymmetry itself, due to an interplay between the anomalous $`D`$-term and gaugino condensation. It would be interesting to look at these questions in the framework of the heterotic $`E_8\times E_8`$ M-theory in the presence of anomalous $`U(1)`$ symmetries, generalizing previous results on supersymmetry breakdown. Cosmologically, the presence of an anomalous $`U(1)`$ might have important applications in the discussion of inflationary models: in particular its Fayet-Iliopoulos term can dominate the vacuum energy of the early Universe, leading to so-called D-term inflation. Finally, the heterotic anomalous $`U(1)`$ might be at the origin of a solution of the strong CP problem, while providing an acceptable dark matter candidate. Since there is no exact heterotic - type II orientifold duality, one may now ask whether the anomalous $`U(1)`$'s present in type IIB orientifolds are likely to have similar consequences - or even have the potential to solve some of the problems encountered in the heterotic case. Certainly, the implications will differ somewhat. In the heterotic case, the phenomenological implications of the $`U(1)_X`$ rely on the appearance of a Fayet-Iliopoulos term whose value, a few orders of magnitude below the Planck mass, is fixed by the anomaly. The situation is different in the orientifold case, where the Fayet-Iliopoulos terms are moduli-dependent. The freedom that is gained by the possible adjustment of the Fayet-Iliopoulos term allows, for example, to cure the problems of $`D`$-term inflation in heterotic models, where $`\xi `$ turned out to be too large. This possible choice of $`\xi `$ is paid for by a loss of predictivity. In that respect, one may conclude that the orientifold anomalous $`U(1)`$'s are not that different from anomaly-free $`U(1)`$'s, whose Fayet-Iliopoulos terms are unconstrained and can be chosen at will. This might also question the possible use of these $`U(1)`$'s for an axion solution of the strong CP-problem. Still, these anomalous U(1) symmetries might play an important role in phenomenological applications.
## Acknowledgements
I would like to thank Z. Lalak and S. Lavignac for interesting discussions and collaboration. This work was partially supported by the European Commission programs ERBFMRX-CT96-0045 and CT96-0090.
# On the Mooij Rule
## I Introduction
Although weak localization has greatly deepened our understanding of the normal state of disordered metals,<sup>1,2,3</sup> its effect on superconductivity and the electron-phonon interaction has not been well understood.<sup>2</sup> Recently, it has been shown that weak localization leads to the same correction to the conductivity and the phonon-mediated interaction.<sup>4,5</sup> It is then anticipated that the electron-phonon interaction will also be influenced strongly by weak localization. For instance, the phonon-limited electrical resistance, the attenuation of a sound wave, the thermal resistance, and the shift in phonon frequencies may change due to weak localization.<sup>6</sup>
In fact, the Mooij rule<sup>7</sup> in strongly disordered metallic systems seems to be a manifestation of the effect of weak localization on the electron-phonon interaction and the conductivity. In the early seventies, Mooij found a correlation between the residual resistivity and the temperature coefficient of resistivity (TCR). In particular, TCR decreases with increasing residual resistivity, and it becomes negative above $`150\mu \mathrm{\Omega }cm`$. There are already several theoretical works on this problem. Jonson and Girvin<sup>8</sup> performed numerical calculations for an Anderson model on a Cayley tree and found that the adiabatic phonon approximation breaks down in the high-resistivity regime, producing the negative TCR. Imry<sup>9</sup> pointed out the importance of incipient Anderson localization (weak localization) in the resistivities of highly disordered metals. He argued that when the inelastic mean free path, $`\ell _{ph}`$, is smaller than the coherence length, $`\xi `$, the conductivity increases with temperature like $`\ell _{ph}^{-1}`$ and thereby leads to the negative TCR. On the other hand, Kaveh and Mott<sup>10</sup> generalized the Mooij rule. Their results are as follows: the temperature dependence of the conductivity of a disordered metal changes slope due to weak localization effects, and if interaction effects are included, the conductivity changes its slope three times. Götze, Belitz, and Schirmacher<sup>11,12</sup> introduced a theory with phonon-induced tunneling. There is also the extended Ziman theory.<sup>13</sup>
In this paper, we propose an explanation of the Mooij rule based on the effect of weak localization on the electron-phonon interaction. If we assume that weak localization decreases the electron-phonon interaction, we can understand the decrease of TCR with increasing residual resistivity. The negative TCR is then due to the weak localization correction to the Boltzmann conductivity, since when TCR approaches zero there is no temperature-dependent resistivity left. (This latter point is similar to Kaveh and Mott's interpretation.<sup>10</sup>) Matthiessen's rule seems to remain intact to a large extent even in highly disordered systems. In Sec. II, we briefly describe the Mooij rule. In Sec. III, the weak localization correction to the electron-phonon coupling constants $`\lambda `$ and $`\lambda _{tr}`$ is calculated. A possible explanation of the Mooij rule is given in Sec. IV, and its implication is briefly discussed in Sec. V. In particular, this study may provide a means to probe the phonon mechanism in exotic superconductors.
## II The Mooij Rule
According to Matthiessen’s rule, resistivity $`\rho (T)`$ caused by static and thermal disorder is additive, i.e.,
$$\rho (T)=\rho _o+\rho _{ph}(T),$$
(1)
where $`\rho _{ph}`$ is mostly due to electron-phonon scattering. Mooij found (at high temperatures) that the size and sign of the temperature coefficient of resistivity (TCR) in many disordered systems correlate with its residual resistivity $`\rho _o`$ as follows:
$`d\rho /dT`$ $`>`$ $`0\quad \mathrm{if}\quad \rho _\mathrm{o}<\rho _\mathrm{M},`$ (2)
$`d\rho /dT`$ $`<`$ $`0\quad \mathrm{if}\quad \rho _\mathrm{o}>\rho _\mathrm{M}.`$ (3)
Thus, TCR changes sign when $`\rho _o`$ reaches the Mooij resistivity $`\rho _M150\mu \mathrm{\Omega }cm`$. Figure 1 shows the temperature coefficient of resistance $`\alpha `$ versus resistivity for transition-metal alloys obtained by Mooij. It is clear $`\alpha `$ (and TCR) is correlated with the residual resistivity. Note that above $`150\mu \mathrm{\Omega }cm`$ most $`\alpha `$’s are negative. Figure 2 shows the resistivity as a function of temperature for pure Ti and TiAl alloys containing 3, 6, 11, and 33% Al. TCR is decreasing as the residual resistivity is increasing. For TiAl alloy with 33% Al shows the negative TCR. Since this behavior is generally found in strongly disordered metals and alloys, amorphous metals, and metallic glasses, it is called the Mooij rule. However, the physical origin of this rule has remained unexplained until now.
## III Weak Localization Correction to Electron-Phonon Interaction
Since the electron-phonon interaction in metals gives rise to both the (high temperature) resistivity and superconductivity, these properties are closely related, which was noticed by many workers.<sup>14-17</sup> In this Section, we show that weak localization leads to the same correction to the conductivity and the electron-phonon coupling constant $`\lambda `$ and $`\lambda _{tr}`$.
### A High Temperature resistivity
At high temperatures, the phonon limited electrical resistivity is given by<sup>17</sup>
$`\rho _{ph}(T)`$ $`=`$ $`{\displaystyle \frac{4\pi mk_BT}{ne^2\hbar }}{\displaystyle \int \frac{\alpha _{tr}^2F(\omega )}{\omega }d\omega },`$ (4)
$`=`$ $`{\displaystyle \frac{2\pi mk_BT}{ne^2\hbar }}\lambda _{tr},`$ (5)
where $`\alpha _{tr}`$ includes an average of a geometrical factor $`1-\mathrm{cos}\theta _{\vec{k}\vec{k}^{\prime }}`$ and $`F(\omega )`$ is the phonon density of states. On the other hand, in the strong-coupling theory of superconductivity,<sup>18,19</sup> the electron-phonon coupling constant is defined by<sup>19</sup>
$`\lambda =2{\displaystyle \int \frac{\alpha ^2(\omega )F(\omega )}{\omega }d\omega }.`$ (6)
Assuming $`\alpha _{tr}^2\approx \alpha ^2`$, we obtain
$`\rho _{ph}(T)`$ $`=`$ $`{\displaystyle \frac{2\pi mk_BT}{ne^2\hbar }}\lambda _{tr}`$ (7)
$`\approx `$ $`{\displaystyle \frac{2\pi mk_BT}{ne^2\hbar }}\lambda .`$ (8)
Consequently the electron-phonon coupling constant $`\lambda `$ determines also the size and sign of TCR. Table I shows the comparison of $`\lambda _{tr}`$ and $`\lambda `$ for various materials.<sup>20,21</sup> The overall agreement between $`\lambda _{tr}`$ and $`\lambda `$ is impressive.
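For a rough numerical feel for Eq. (5), the short sketch below evaluates the high-temperature slope $`d\rho _{ph}/dT`$; the free-electron density used here corresponds to the $`k_F`$ adopted in Sec. IV and is our illustrative choice:

```python
import numpy as np

hbar, m_e, e, k_B = 1.0546e-34, 9.109e-31, 1.602e-19, 1.381e-23

def drho_ph_dT(lam, n):
    """High-T slope d(rho_ph)/dT = 2 pi m k_B lambda / (n e^2 hbar), from Eq. (5)."""
    return 2 * np.pi * m_e * k_B * lam / (n * e**2 * hbar)

k_F = 0.8e10                        # 0.8 inverse Angstrom, in 1/m
n = k_F**3 / (3 * np.pi**2)         # free-electron density
print(drho_ph_dT(0.5, n) * 1e8, "micro-ohm cm / K")   # ~0.08 for lambda = 0.5
```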
### B Weak localization correction to $`\lambda `$ and $`\lambda _{tr}`$
Now we need to calculate the electron-phonon coupling constant $`\lambda `$ for highly disordered systems. We follow the approach by Park and Kim.<sup>5</sup> (For simplicity we consider an Einstein model with frequency $`\omega _D`$). Note that $`\lambda `$ can be written as<sup>19</sup>
$`\lambda `$ $`=`$ $`2{\displaystyle \int \frac{\alpha ^2(\omega )F(\omega )}{\omega }d\omega }`$ (9)
$`=`$ $`N_o{\displaystyle \frac{<I^2>}{M<\omega ^2>}},`$ (10)
where $`M`$ is the ionic mass and $`N_o`$ is the electron density of states at the Fermi level. $`<I^2>`$ is the average over the Fermi surface of the square of the electronic matrix element, and $`<\omega ^2>=\omega _D^2`$. In the presence of impurities, weak localization leads to a correction to $`\alpha ^2`$ or $`<I^2>`$ (disregarding the changes of $`F(\omega )`$ and $`N_o`$).
The equivalent electron-electron potential in the electron-phonon problem is given by,<sup>22,23</sup>
$$V(x-x^{\prime })\approx \frac{I_o^2}{M\omega _D^2}D(x-x^{\prime }),$$
(11)
where $`x=(𝐫,t)`$ and $`I_o`$ is the electronic matrix element for the plane wave states. The Fröhlich interaction at finite temperatures is then obtained by
$`V_{nn^{\prime }}(\omega ,\omega ^{\prime })`$ $`=`$ $`{\displaystyle \frac{I_o^2}{M\omega _D^2}}{\displaystyle \int d𝐫\,d𝐫^{\prime }\,\psi _{n^{\prime }}^{*}(𝐫)\psi _{\overline{n}^{\prime }}^{*}(𝐫^{\prime })D(𝐫-𝐫^{\prime },\omega -\omega ^{\prime })\psi _{\overline{n}}(𝐫^{\prime })\psi _n(𝐫)}`$ (12)
$`=`$ $`{\displaystyle \frac{I_o^2}{M\omega _D^2}}{\displaystyle \int |\psi _{n^{\prime }}(𝐫)|^2|\psi _n(𝐫)|^2d𝐫\,\frac{\omega _D^2}{\omega _D^2+(\omega -\omega ^{\prime })^2}}`$ (13)
$`=`$ $`V_{nn^{\prime }}{\displaystyle \frac{\omega _D^2}{\omega _D^2+(\omega -\omega ^{\prime })^2}},`$ (14)
where<sup>23</sup>
$`D(𝐫-𝐫^{\prime },\omega -\omega ^{\prime })`$ $`=`$ $`{\displaystyle \underset{\vec{q}}{\sum }}{\displaystyle \frac{\omega _D^2}{(\omega -\omega ^{\prime })^2+\omega _D^2}}e^{i\vec{q}\cdot (𝐫-𝐫^{\prime })}`$ (15)
$`=`$ $`{\displaystyle \frac{\omega _D^2}{(\omega -\omega ^{\prime })^2+\omega _D^2}}\delta (𝐫-𝐫^{\prime }).`$ (16)
Here $`\omega `$ means the Matsubara frequency and $`\psi _n`$ denotes the scattered state. Subsequently, the strong-coupling gap equation can be easily obtained.<sup>5</sup> Note that the spatial part of the phonon Green’s function $`D(𝐫𝐫^{},\omega \omega ^{})`$ becomes the Dirac delta function, since the phonon frequency does not depend on the momentum. Accordingly, the electron-phonon interaction coupling constant $`\lambda `$ is given by
$$\lambda =N_o<V_{nn^{\prime }}(0,0)>=N_o\frac{I_o^2}{M\omega _D^2}<\int |\psi _n(𝐫)|^2|\psi _{n^{\prime }}(𝐫)|^2d𝐫>.$$
(17)
This result agrees with the BCS theory with a point interaction $`-V\delta (𝐫_1-𝐫_2)`$, i.e.,
$$\lambda _{eff}=N_oV<\int |\psi _n(𝐫)|^2|\psi _{n^{\prime }}(𝐫)|^2d𝐫>,$$
(18)
where $`V=I_o^2/M\omega _D^2`$.
Note that in the presence of impurities, the correlation function has a free-particle form for $`t<\tau `$ (the scattering time) and a diffusive form for $`t>\tau `$.<sup>24</sup> As a result, for $`t>\tau `$ (or $`r>\ell `$), one finds<sup>25</sup>
$`R`$ $`=`$ $`{\displaystyle \int _{t>\tau }}|\psi _n(𝐫)|^2|\psi _{n^{\prime }}(𝐫)|^2d𝐫`$ (19)
$`=`$ $`{\displaystyle \underset{\vec{q}}{\sum }}|<\psi _n|e^{i\vec{q}𝐫}|\psi _{n^{\prime }}>|_{AV}^2`$ (20)
$`=`$ $`{\displaystyle \underset{\pi /L<\vec{q}<\pi /\ell }{\sum }}{\displaystyle \frac{1}{2\pi \hbar N_oD\vec{q}^2}}`$ (21)
$`=`$ $`{\displaystyle \frac{3}{2(k_F\ell )^2}}\left(1-{\displaystyle \frac{\ell }{L}}\right).`$ (22)
Here $`\ell `$ is the mean free path and $`L`$ is the inelastic diffusion length. The contribution from the free-particle-like density correlation for $`t<\tau `$, on the other hand, is<sup>5,25</sup>
$`V_{nn^{\prime }}`$ $`=`$ $`V{\displaystyle \int _{t<\tau }}|\psi _n(𝐫)|^2|\psi _{n^{\prime }}(𝐫)|^2d𝐫`$ (23)
$`\approx `$ $`V[1-{\displaystyle \frac{3}{(k_F\ell )^2}}(1-{\displaystyle \frac{\ell }{L}})].`$ (24)
Since the phonon-mediated interaction is retarded for $`t_{ret}\sim 1/\omega _D`$, only the free-particle-like density correlation contributes to the pairing matrix element. Thus, we obtain
$`\lambda `$ $`=`$ $`N_oV[1-{\displaystyle \frac{3}{(k_F\ell )^2}}(1-{\displaystyle \frac{\ell }{L}})]`$ (25)
$`=`$ $`\lambda _o[1-{\displaystyle \frac{3}{(k_F\ell )^2}}(1-{\displaystyle \frac{\ell }{L}})].`$ (26)
Here $`\lambda _o`$ is the BCS $`\lambda `$ for the pure system. Subsequently, one finds
$`\lambda _{tr}`$ $`=`$ $`2{\displaystyle \int \frac{\alpha _{tr}^2(\omega )F(\omega )}{\omega }d\omega }`$ (27)
$`\approx `$ $`\lambda _o[1-{\displaystyle \frac{3}{(k_F\ell )^2}}(1-{\displaystyle \frac{\ell }{L}})]`$ (28)
$`=`$ $`\lambda _o[1-{\displaystyle \frac{3}{(k_F\ell )^2}}].`$ (29)
We have used the fact that $`L`$ is effectively infinite at $`T=0`$. Note that the weak localization correction term is the same as that of the conductivity.
## IV Explanation of the Mooij Rule
The high temperature resistivity is then
$`\rho _{ph}(T)`$ $`\approx `$ $`{\displaystyle \frac{2\pi mk_BT}{ne^2\hbar }}\lambda `$ (30)
$`\approx `$ $`{\displaystyle \frac{2\pi mk_BT}{ne^2\hbar }}\lambda _o[1-{\displaystyle \frac{3}{(k_F\ell )^2}}].`$ (31)
On the other hand, the conductivity and the residual resistivity are given by
$`\sigma `$ $`=`$ $`\sigma _B[1-{\displaystyle \frac{3}{(k_F\ell )^2}}(1-{\displaystyle \frac{\ell }{L}})],`$ (32)
and
$$\rho _o=\frac{1}{\sigma _B[1-\frac{3}{(k_F\ell )^2}(1-\frac{\ell }{L})]},$$
(33)
where $`\sigma _B=ne^2\tau /m`$. According to Matthiessen’s rule, we may add both resistivities,
$`\rho `$ $`\approx `$ $`\rho _o+\rho _{ph}(T)`$ (34)
$`=`$ $`{\displaystyle \frac{1}{\sigma _B[1-\frac{3}{(k_F\ell )^2}(1-\frac{\ell }{L})]}}+{\displaystyle \frac{2\pi mk_BT}{ne^2\hbar }}\lambda _o[1-{\displaystyle \frac{3}{(k_F\ell )^2}}].`$ (35)
As the disorder parameter $`1/k_F\ell `$ increases, the system becomes more disordered and the residual resistivity grows. It is remarkable that the slope of the high-temperature resistivity decreases concomitantly, in good agreement with experiment. Note that the slope varies as $`1/(k_F\ell )^2`$. This point has not been noticed before. When $`1/k_F\ell `$ becomes comparable to 1, the magnitude and the slope of $`\rho _{ph}(T)`$ become very small. In that case, only the residual resistivity plays an important role. Therefore, the observed negative TCR may be understood from the residual part. With decreasing $`T`$, since the inelastic diffusion length $`L`$ increases, the residual resistivity will also increase, leading to the negative TCR.
Now we calculate Eq. (21) numerically to see the detailed temperature dependence of the resistivity of disordered systems. Figure 3 shows the resistivity as a function of temperature. We used $`k_F=0.8\AA ^{-1}`$, $`n=k_F^3/3\pi ^2`$, and $`\lambda =0.5`$. Since it is difficult to evaluate $`k_F\ell `$ to better than a factor of 2,<sup>26</sup> we assume that $`\rho =100\mu \mathrm{\Omega }cm`$ corresponds to $`k_F\ell =3.2`$. We also used $`L=\sqrt{D\tau _i}=\sqrt{\ell }\times 350/T\ (\AA )`$. Here $`D`$ is the diffusion constant and $`\tau _i`$ denotes the inelastic scattering time. At low temperatures $`\tau _i`$ is determined by electron-electron scattering, while at high temperatures it is determined by electron-phonon scattering. Since we are interested in rather high temperatures, we assumed $`\tau _i\propto T^{-1}`$, corresponding to electron-phonon scattering. Considering the crudeness of our calculation, the overall behavior is in good agreement with experiment.
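A sketch of such a calculation in Python is given below. The physical constants are standard; the normalization of the Boltzmann resistivity to $`100\mu \mathrm{\Omega }cm`$ at $`k_F\ell =3.2`$ implements the calibration stated above, while clipping the weak-localization term when $`L<\ell `$ is our own simplification:

```python
import numpy as np

hbar, m_e, e, k_B = 1.0546e-34, 9.109e-31, 1.602e-19, 1.381e-23
k_F = 0.8e10                          # 0.8 inverse Angstrom, in 1/m
n = k_F**3 / (3 * np.pi**2)
lam0 = 0.5

def rho_total(T, kFl):
    """Total resistivity (micro-ohm cm): weak-localization residual + phonon parts."""
    ell_A = kFl / 0.8                             # mean free path in Angstrom
    L_A = np.sqrt(ell_A) * 350.0 / T              # inelastic diffusion length (Angstrom)
    wl = 3.0 / kFl**2 * max(0.0, 1.0 - ell_A / L_A)
    rho_B = 100.0 * 3.2 / kFl                     # calibration: 100 at k_F l = 3.2
    rho_res = rho_B / (1.0 - wl)
    rho_ph = 2*np.pi*m_e*k_B*T / (n*e**2*hbar) * lam0 * (1.0 - 3.0/kFl**2) * 1e8
    return rho_res + rho_ph

for kFl in (10.0, 5.0, 3.2, 2.0):
    print(f"k_F l = {kFl:5}:",
          [round(rho_total(T, kFl), 1) for T in (100.0, 200.0, 300.0)])
```

With these inputs the cleaner samples show a positive slope, while for $`k_F\ell `$ close to 2 the total resistivity decreases with temperature, reproducing the qualitative trend of Fig. 3.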
## V Discussion
It is clear that the effect of weak localization on the electron-phonon interaction needs more theoretical and experimental study. In particular, weak localization effects on the attenuation of a sound wave, the shear modulus, the thermal resistance, and the shift in phonon frequencies will be very interesting. Since superconductivity is also caused by the electron-phonon interaction, a comparative study of the normal and superconducting properties of metallic samples will be beneficial. There is already compelling evidence that this is the case. For instance, Testardi and his coworkers<sup>27-30</sup> found a universal correlation between $`T_c`$ and the resistance ratio. They also found that decreasing $`T_c`$ is accompanied by a decrease of the thermal electrical resistivity.<sup>27</sup>
Note that this study may provide a means of probing the phonon mechanism in exotic superconductors, such as heavy-fermion superconductors, organic superconductors, fullerene superconductors, and the high-$`T_c`$ cuprates. For superconductors caused by the electron-phonon interaction we expect the following behavior. As the electrons are weakly localized by impurities or radiation damage, the electron-phonon interaction is weakened. As a result, both $`T_c`$ and TCR decrease at the same rate. When $`\lambda `$ approaches zero, both $`T_c`$ and TCR drop to zero almost simultaneously. When this happens we may say that the electron-phonon interaction is the origin of the pairing in the superconductor. This behavior was already confirmed in A15 superconductors<sup>27-30</sup> and ternary superconductors.<sup>31</sup> More details will be published elsewhere.
## VI Conclusion
It is shown that weak localization decreases both the conductivity and the electron-phonon interaction at the same rate and thereby leads to the Mooij rule. As the residual resistivity increases due to weak localization, the thermal electrical resistivity decreases, producing the decrease of TCR. When the electron-phonon interaction is near zero, only the residual resistivity is left, and therefore a negative TCR is obtained. Matthiessen's rule seems to remain intact to a large extent even in highly disordered systems. This study may provide a means of probing the phonon mechanism in exotic superconductors, such as heavy-fermion superconductors, organic superconductors, fullerene superconductors, and high-$`T_c`$ superconductors.
ACKNOWLEDGMENTS
YJK is grateful to Prof. Bilal Tanatar for discussions and encouragement. M. Park thanks the FOPI at the University of Puerto Rico-Humacao for release time.
Table I. Comparison of $`\lambda _{tr}`$ and $`\lambda `$ as given in Ref. 20.
| Metal | $`\lambda _{tr}`$ | $`\lambda `$ | Metal | $`\lambda _{tr}`$ | $`\lambda `$ |
| --- | --- | --- | --- | --- | --- |
| Li | .40 | .41$`\pm `$.15 | Na | .16 | .16$`\pm `$.04 |
| K | .14 | .13$`\pm `$.03 | Rb | .19 | .16$`\pm `$.04 |
| Cs | .26 | .16$`\pm `$.06 | Mg | .32 | .35$`\pm `$.04 |
| Zn | .67 | .42$`\pm `$.05 | Cd | .51 | .40$`\pm `$.05 |
| Al | .41 | .43$`\pm `$.05 | Pb | 1.79 | 1.55 |
| In | .85 | .805 | Hg | 2.3 | 1.6 |
| Cu | .13 | .14$`\pm `$.03 | Ag | .13 | .10$`\pm `$.04 |
| Au | .08 | .14$`\pm `$.05 | Nb | 1.11 | .9$`\pm `$.2 |
# A Problem-Specific Fault-Tolerance Mechanism for Asynchronous, Distributed Systems
## 1 Introduction
For solving new, more difficult search problems, scientists need better search heuristics and/or more powerful resources. The need for hundreds or even thousands of processors is justified in the case of branch-and-bound search algorithms by problems that could not be solved after months of execution on tens of processors.
Rarely, however, are thousands of processors assembled in a single location and available for a single problem. Thus, techniques are needed that would allow us to aggregate processors at many different Internet-connected locations. These processors are likely often to be required for other purposes; hence their availability will be episodic, and any algorithm designed to take advantage of these resources must be opportunistic. Furthermore, the Internet environment is likely to be unreliable and heterogeneous.
Various groups have demonstrated the feasibility of using Internet-connected computers for solving embarrassingly parallel problems. In our work, we investigate the feasibility of applying Internet-connected resources to more tightly coupled problems, in which a centralized scheme is not computationally efficient. Our approach is to develop specialized algorithms that incorporate scalability and reliability mechanisms.
For providing reliable services over unreliable architectures, researchers usually choose one of the following approaches: (1) embed fault-tolerance mechanisms within the middleware software layer, as in ISIS or the CORBA Transaction Service, or as in systems like Condor or Legion; or (2) embed fault-tolerance mechanisms within algorithms. The former approach is more general. Successful results in this domain guarantee communication and hardware reliability to a large number of applications. But its generality introduces problems that sometimes turn out to be unsolvable or very expensive to solve. The latter alternative is applicable to specific problem classes and is therefore less general. But exploiting the characteristics of a class of problems may ease the design of fault-tolerance mechanisms, yielding simpler and more efficient algorithms. Note that middleware can still be of assistance in this case, by providing appropriate fault-detection services.
In our work, we focus on a problem-specific fault-tolerance mechanism. Specifically, we propose a fault-tolerant, fully distributed branch-and-bound algorithm designed for unreliable architectures with a dynamically variable number of resources. The description of the branch-and-bound problem (Section 2) and the target architecture (Section 4) provide the motivation for our work. We describe our branch-and-bound algorithm in Section 5, focusing particularly on the fault-tolerance mechanism. Related work (Section 3) covers other fault-tolerance techniques embedded in tree-based, distributed, asynchronous algorithms. For testing our solution, we have developed a simulation framework, which is presented in Section 6, along with the results obtained. We conclude with a discussion of what we learned from trying to solve this problem and how we intend to continue this work.
## 2 Branch and Bound
The search for optimal solutions is one of the most important searching problems. Since exhaustive search is often impracticable in NP-hard problems, heuristics are employed to improve search performance. Branch-and-bound (which we will hereafter refer to as B&B) is an intelligent search method often used for optimization problems. It uses a successive decomposition of the original problem into smaller disjoint subproblems, while reducing (pruning) the search space by recognizing unpromising problems before starting to solve them.
A sequential B&B algorithm consists of a sequence of iterations in which four basic operators are applied over a list of problems, called a pool of active problems:
* Decompose. Splits a problem into a set of new subproblems. A problem that cannot be split (either because it has no solution or because a solution is found) is fathomed. A problem decomposed into new subproblems is branched.
* Bound. Computes a bound value $`l(v)`$ on the optimal solution of subproblem $`v`$. This bound value will be used by Select and Eliminate operations.
* Select. Selects which problem to branch from next, as a function of some heuristic priority function. Selection may depend on bound values, such as in the best-first selection rule, or not, as in the case of depth-first or breadth-first rules.
* Eliminate. Eliminates problems that cannot lead to an optimal solution of the original problem (i.e., problems for which $`l(v)\geq U`$, where $`U`$ is the best-known solution).
Successive decomposition operations create a tree of problems rooted in the original problem. The value of the best solution found thus far is used to recognize the unpromising problems and prune the tree. If the bound value of the current problem is not better than the best-known solution, then the problem is eliminated. Otherwise, it is stored into the pool of active problems. The best-known solution is updated when a better feasible solution is found. The leaves of the tree are infeasible problems, or pruned problems, or problems that lead to locally optimal solutions. The size and shape of the tree strongly depends on the quality of the heuristic function for the selection rule.
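For concreteness, the sketch below shows how the four operators fit together in a sequential best-first B&B loop for a minimization problem. The callbacks `bound`, `decompose`, and `is_feasible` are hypothetical problem-specific hooks, not part of the algorithm described later in this paper.

```python
import heapq, itertools

def branch_and_bound(root, bound, decompose, is_feasible):
    """Sequential best-first B&B sketch (minimization); the callbacks
    are assumed, problem-specific hooks."""
    best = float("inf")                  # U, the best-known solution value
    tie = itertools.count()              # tie-breaker for equal bounds
    pool = [(bound(root), next(tie), root)]
    while pool:
        l_v, _, v = heapq.heappop(pool)  # Select: best-first rule
        if l_v >= best:
            continue                     # Eliminate: l(v) >= U
        for child in decompose(v):       # Decompose: branch or fathom
            if is_feasible(child):
                best = min(best, bound(child))    # update incumbent
            elif bound(child) < best:             # Bound, prune on insert
                heapq.heappush(pool, (bound(child), next(tie), child))
    return best
```

The parallel variants discussed next differ mainly in how the pool of active problems and the incumbent $`U`$ are shared among processes.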
In B&B algorithms, parallelism can be achieved in different ways. We consider the most general approach, in which the B&B tree is built in parallel by performing operations on different subproblems simultaneously.
Three design choices most influence the performance of parallel B&B algorithms: the choice of a synchronous or an asynchronous algorithm, the work sharing mechanism, and the information sharing mechanism. Synchronous vs. asynchronous design defines what processes do upon completion of a work unit—they wait for each other (in the case of synchronous algorithms) or not (in asynchronous algorithms). Work sharing is the method used to assign work to processes in order to fully and efficiently exploit available parallelism. Information sharing refers to the methods used to publish and update the best-known solution. Using an up-to-date best-known solution improves the efficiency of the selection and elimination rule and hence has an important effect on the size of the search space.
## 3 Related Work
Many investigations of parallel B&B for distributed-memory systems have adopted a centralized approach in which a single manager maintains the tree and hands out tasks to workers. While clearly not scalable, this approach simplifies the management of information and of multiple processes. Scalability can be improved through a hierarchical organization of processes or by varying the size of work units, but the central manager remains an obstacle to both scalability and fault tolerance. Reliability can be achieved through checkpointing, but this approach assumes that there exists at least one reliable process/machine able to manage the failure recovery process.
Because of the highly variable number of resources in the architecture we consider, we need more flexibility than that offered by the centralized design. Hence we chose a fully decentralized design.
The only fully decentralized, fault-tolerant B&B algorithm for distributed-memory architectures is DIB (Distributed Implementation of Backtracking). DIB was designed for a wide range of tree-based applications, such as recursive backtracking, branch-and-bound, and alpha-beta pruning. It is a distributed, asynchronous algorithm that uses a dynamic load-balancing technique. Its failure recovery mechanism is based on keeping track of which machine is responsible for each unsolved problem. Each machine records the problems for which it is responsible, as well as the machines to which it sent problems or from which it received problems. The completion of a problem is reported to the machine the problem came from. Hence, each machine can determine whether the work for which it is responsible is still unsolved, and can redo that work in the case of failure.
## 4 Target Architecture
The target architecture for our algorithm is a collection of Internet-connected computers. The distinctive characteristics of this environment, when compared with a conventional parallel computer, are as follows:
* Scale. The number of resources available can potentially be much larger than on a conventional parallel computer.
* Dynamic availability. The quantity of resources available may vary over time, as may the amount of computation delivered by a single resource.
* Unreliability. Resources may become unreachable without notice because of system or network failures.
* Communication characteristics. Latencies may be high, variable, and unpredictable; bandwidth may be low, variable, and unpredictable. Connectivity (as measured, for example, by bisection bandwidth) may be particularly low.
* Heterogeneity. Resources may have varying physical characteristics (for example, amount of memory, speed).
* Lack of centralized control. There is no central authority for quality control or operational management.
The failure model we consider is Crash, in which a processor fails by halting. Once it halts, the processor remains in that state. The fact that a processor has failed may not be detectable by other processors. We make minimal assumptions about the system:
– There is no bound on message delivery time.
– Messages may be lost altogether.
– A network link does not duplicate, corrupt, or spontaneously create messages.
– The clock rate on each host is close to accurate (we do not assume that the clocks are synchronized). This condition is assumed in many works in the fault-tolerance domain and does not represent a practical restriction.
## 5 The Algorithm
We propose a fully decentralized, asynchronous, fault-tolerant parallel B&B algorithm suited to the environment described above. Asynchrony is required by the heterogeneity of the architecture and allowed by the B&B problem. Each process maintains its local pool of problems to be solved. When the local pool is empty, the process sends work requests to other processes. A process that receives a work request and has enough problems in its pool removes some of those problems and sends them to the requester. This on-demand dynamic load-balancing scheme was chosen to reduce unnecessary communication. The fully decentralized scheme was preferred for better scalability and greater reliability. The information-sharing issue is solved by circulating the best-known solution among processes, embedded in the most frequently sent messages. Processes update the local value of the best-known solution every time they receive it, and use it when the next decision is to be made.
For adapting this rather conventional B&B algorithm to the environment described above, we extend it with (1) a group membership protocol to allow dynamic variation in the number of resources and (2) a fault-tolerance mechanism. The novelty of this paper is the decentralized fault-tolerance mechanism that uses a tree-based encoding of the B&B subproblems. This strategy for problem encoding also offers a simple mechanism for termination detection, described in Section 5.4. A brief description of the epidemic communication mechanism (Section 5.1) will help in understanding how the group membership protocol (Section 5.2) and fault-tolerance mechanism (Section 5.3) function. A comparison with DIB, the decentralized B&B algorithm mentioned in Section 3, concludes this section.
### 5.1 Epidemic Communication for Group Membership and Fault Tolerance
Epidemic communication allows temporary inconsistencies in shared data in exchange for low-overhead implementation. More specifically, information changes are spread gradually throughout the processes, without the overhead and communication costs typically used to achieve a high degree of consistency.
Both our group membership and fault-tolerance mechanisms use epidemic communication. Since these mechanisms do not require data consistency, epidemic communication is a convenient algorithm for spreading information. Moreover, epidemic communication guarantees that consistency is eventually achieved; that is, all processes will eventually see the same data once no more new information is brought into the system, independent of system failures. This observation is exploited for termination detection.
The epidemic algorithms used are variants of the rumor-mongering algorithm (analyzed in ): when a site receives a new update (rumor), it becomes “infectious” and is willing to share—it repeatedly chooses another member, to which it sends the rumor. Upon receipt of a rumor, a member updates its local information and sends its own version after some time interval. In the membership protocol, the rumor received is sent farther, without being processed. In the fault-tolerance mechanism, the rumor is stored for local processing, may be processed locally, and is spread infrequently.
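A toy push-style variant of rumor mongering is sketched below; the fan-out and round structure are illustrative choices, not the exact protocol parameters of the paper.

```python
import random

def rumor_mongering(n_members, rounds, fanout=1, seed=0):
    """Push-style rumor mongering: each infected member forwards the
    rumor to `fanout` randomly chosen members in every round."""
    rng = random.Random(seed)
    infected = {0}                           # member 0 starts the rumor
    for _ in range(rounds):
        newly = set()
        for _ in infected:
            for _ in range(fanout):
                newly.add(rng.randrange(n_members))
        infected |= newly
    return len(infected)

print(rumor_mongering(100, rounds=10))       # most members hear the rumor
```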
### 5.2 Group Membership Protocol
The group membership protocol is used for collecting and updating information about which resources participate in the computation at any given time. The impossibility of guaranteeing consistent views of group membership in asynchronous, unreliable systems was proven in . Even in reliable systems, membership protocols are expensive, requiring several phases for consistency.
A group is defined as a set of members. It is initialized when the first member enters the group and ceases to exist when the last member leaves. A process joins a group by finding one or more members of the group and leaves it either by leaving or by failing. We assume the existence of a fault-tolerant method by which processes can find other processes, such as broadcasting (when applicable), known addresses of gossip servers (described below), or a location service. For the moment, we assume that gossip servers exist.
Our membership protocol is inspired by the failure-detection mechanism based on epidemic communication presented in . Other membership protocols based on epidemic communication are more elaborate and introduce constraints or costs that are not justified in our case.
The membership protocol works as follows: when a new computer joins the group of resources, it sends its address to some known gossip servers. The gossip servers act as any other member of the group, except that at least one of them is guaranteed to be active at any given moment during the computation. This is a loose fault-tolerance constraint, easily achievable without extra cost by increasing the number of gossip servers in the system. The main task of these servers is to propagate information about newly arrived members.
Each member process maintains a view of group membership. The view defines a set of processes that the member believes are part of the group at any given time. In addition, it contains specific information designed to log the members’ activity by keeping track of when it last heard of each (known) member, directly from it or through the gossip system. The parameters involved in this mechanism (for example, the frequency of gossiping and the timeout period used to deduce failure of a passive member) are chosen to keep communication and the probability of false membership information under some threshold values.
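The view bookkeeping can be pictured as follows; the timeout value, data layout, and method names are illustrative assumptions.

```python
import time

class MembershipView:
    """Per-member view: last-heard timestamps plus a timeout rule
    for suspecting that a passive member has failed."""
    def __init__(self, timeout=30.0):
        self.last_heard = {}                 # member address -> timestamp
        self.timeout = timeout

    def heard_from(self, member, when=None):
        """Record direct or gossiped news about `member`."""
        self.last_heard[member] = time.time() if when is None else when

    def suspected(self, now=None):
        """Members not heard from within the timeout period."""
        now = time.time() if now is None else now
        return {m for m, t in self.last_heard.items()
                if now - t > self.timeout}
```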
Among the advantages of using this membership protocol are (1) scalability in network load with the size of the group, (2) tolerance to a small percentage of message loss or failed members, and (3) scalability in accuracy with the number of members.
### 5.3 Fault-Tolerance Mechanism
For B&B algorithms, the loss of a subproblem is unacceptable when the accuracy of the solution is important.
Our proposed fault-tolerance mechanism does not attempt to detect failures of computers and restore their data, but rather focuses on detecting missing results. Given that the B&B tree of problems is dynamic, how is it possible to know the set of existing problems, so that, knowing which problems have been completed, one can infer the set of uncompleted problems?
Our solution exploits the fact that the subproblems dynamically generated by the B&B algorithm are nodes of a tree. Each node can be uniquely represented by its position in the tree. If we encode the position of the nodes in the tree, we obtain a unique code for each subproblem. Furthermore, given a set of nodes of the tree, we can easily find its complement, that is, the list of nodes of the tree that are not in the given set.
#### 5.3.1 Problem Representation
Without loss of generality, we assume that the branching factor for the search tree is 2 and that each branch is a decision on a condition variable. Therefore, a subproblem is entirely described by a sequence of pairs $`\langle x_i,value\rangle `$, where $`x_i`$ is a condition variable and $`value`$ is $`0`$ or $`1`$, indicating the left or the right branch, respectively. We need to include condition variables in the subproblem encoding because the order in which condition variables are considered may vary over the tree. For example, the left subtree of a node that branches upon $`x_k`$ may consider $`x_i`$ first and therefore will generate the subproblems $`(\langle x_k,0\rangle ,\langle x_i,0\rangle )`$ and $`(\langle x_k,0\rangle ,\langle x_i,1\rangle )`$, whereas the right subtree may branch upon $`x_j`$ first, producing the subproblems $`(\langle x_k,1\rangle ,\langle x_j,0\rangle )`$ and $`(\langle x_k,1\rangle ,\langle x_j,1\rangle )`$.
Each pair $`\langle x_i,value\rangle `$ introduces a new condition variable and assigns it a value. That is what makes the codes (subproblems) self-contained: the code (along with the initial data, which is provided by a gossip server when a process joins the computation) is enough to initiate a problem on any processor.
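The encoding and its key operation, complementing the last decision to obtain a node’s sibling, can be sketched as follows; integer variable indices stand in for the $`x_i`$, and the helper names are ours.

```python
# A subproblem code is a tuple of (variable, value) pairs on the path
# from the root to the node, e.g. ((k, 1), (i, 0)).
def children(code, next_var):
    """Branch on `next_var`: return the two child codes."""
    return code + ((next_var, 0),), code + ((next_var, 1),)

def sibling(code):
    """Complement the last decision to get the sibling subproblem."""
    var, value = code[-1]
    return code[:-1] + ((var, 1 - value),)

left, right = children(((0, 1),), next_var=1)
assert sibling(left) == right
```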
#### 5.3.2 Mechanism Description
Our failure-recovery mechanism allows each process to detect missing problems independently, based on local information about completed problems.
We consider a subproblem solved after the branching operation has been performed on it. Solved subproblems are not necessarily completed: we consider a subproblem to be completed if it is solved and either it is a leaf or both its children are completed (see Figure 2).
Every process maintains a list of new locally completed subproblems and a table of the completed problems it knows about. When a problem is completed, it is included in the local list. When $`c`$ problems (codes) are in the list or the list has not been updated for a long time, it is sent to $`m`$ of the other members as a work report message. When a member receives a work report, it stores the report in its table. Occasionally, in order to inform new members of the current state of the execution and to increase the degree of consistency, a member sends its table of completed problems to a randomly chosen member.
The size and the number of the problem codes vary with the shape and number of nodes of the B&B tree. The deeper the node in the tree, the larger the size of its code; the more nodes in the tree, the larger the number of codes. Since the completion of a parent node implies the completion of its children, communication costs can be reduced by compressing work report messages, via the recursive replacement of pairs of sibling codes with the code of their parent, and the deletion of codes whose ancestors are also in the list. Simulations performed on real B&B trees confirmed that the compression rate is better when processors are sufficiently loaded: the taller the subtree completed locally, the larger the number of codes that do not need to be sent.
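A sketch of this contraction, assuming the tuple encoding above: sibling pairs collapse into their parent code, and codes dominated by an ancestor already in the set are dropped.

```python
def sibling(code):
    var, value = code[-1]
    return code[:-1] + ((var, 1 - value),)

def contract(codes):
    """Compress a set of completed codes: replace sibling pairs by
    their parent and drop codes that have an ancestor in the set."""
    codes = set(codes)
    changed = True
    while changed:
        changed = False
        for code in list(codes):
            if code in codes and code and sibling(code) in codes:
                codes -= {code, sibling(code)}
                codes.add(code[:-1])     # parent subsumes both siblings
                changed = True
        for code in list(codes):
            if any(code[:k] in codes for k in range(len(code))):
                codes.discard(code)      # an ancestor already covers it
                changed = True
    return codes

# completing both children of the root contracts to the root code ()
assert contract({((0, 0),), ((0, 1),)}) == {()}
```

When successive contractions reduce a table to the single root code, every expanded subproblem is accounted for; this is exactly the termination criterion described in Section 5.4.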
Failure recovery is achieved as follows. When a member runs out of work and an attempt to get work through the load-balancing mechanism fails, it chooses an uncompleted problem (by complementing the code of a solved problem whose sibling is not solved) and solves it. The mechanism “repairs” system failures due to, for example, a computer that failed before sending work reports or work reports that were lost before reaching any machine. Note that this mechanism also works in the case of temporary network partitions.
This simple, fully distributed mechanism can lead to redundant work in two situations: (a) the lag in updating information can lead to false presumptions of failure; and (b) the lack of coordination among processors permits multiple members to work on the same problem. The former case can be fixed easily by interrupting the redundant work when the information is updated. The costs of the latter situation can be reduced by employing more sophisticated methods for choosing work, such as using the location of the last problem completed locally. Notice, however, that some redundant computation may be inevitable.
If information about completed problems is spread uniformly, then the loss of a percentage of members may not lead to information loss: if information about the problems reported as completed still exists in the system, those problems will not have to be redone.
### 5.4 Almost Implicit Termination Detection
The problem encoding used for implementing the fault-tolerance mechanism also has the advantage of implicitly solving the termination detection problem. When successive code compressions of the local lists and tables lead to the code of the root problem, termination is detected. Since none of the communication mechanisms used guarantees data consistency, it is possible that some members do not have enough information to detect termination. That is why, before terminating, each member that has detected termination sends one more work report, namely the code of the root problem, to all members in its local membership list.
### 5.5 Comparison with DIB
Both DIB and the algorithm we propose are decentralized and fault-tolerant algorithms that work on a dynamic, tree-like search space. Both algorithms implement low-cost, simple fault-tolerance protocols for the price of potentially redundant work. However, the two algorithms have different failure-recovery mechanisms and react differently in the case of failure.
DIB uses a hierarchical structure for failure detection and recovery that imposes the need for a reliable or duplicated node for the root of this hierarchy. Moreover, the failure of a node affects not only the problems solved locally and not reported as solved yet, but also the problems given to other nodes, whose completion cannot be reported (and therefore considered) anymore.
In our algorithm, all processes are equally responsible for the behavior of the system in case of failure. Our simulation studies confirm that the failure of all processes but one still allows the problem to be correctly solved. The mechanism is also reliable in the case of faulty network links or temporary network partitions.
However, the homogeneity involved in our algorithm has a communication cost: information about the completion of a problem is eventually spread to all processes, directly (by reporting the code of the problem) or indirectly (by reporting the completion of one of its ancestors).
Performance comparisons of DIB and our algorithm are of limited interest for two reasons. Because DIB was designed for a wide range of applications, such as recursive backtracking, alpha-beta search, and branch-and-bound, its speedup is “excellent for exhaustive traversal and quite good for branch-and-bound”. Furthermore, its speedup results are given for at most 16 processors, while we are interested in many more resources.
## 6 Experimental Studies
We use simulation rather than a real implementation to evaluate our algorithm, since it provides great flexibility in testing a wide range of B&B strategies in a variety of Internet-like environments.
### 6.1 Experimental Goals
The goals of our experimental work are as follows: (1) to verify reliability and evaluate the overall performance of the algorithm, focusing on the costs introduced by the fault-tolerance mechanism; and (2) to evaluate scalability for different problem classes and environments. Our work to date has focused primarily on the first of these two issues.
We studied algorithm reliability by testing various failure scenarios. The costs introduced by our fault-tolerance mechanism are communication costs, storage space, tree contraction time, and redundant work. Because we avoid centralized control by spreading information throughout the system, communication costs may be significant. Redundant work may increase when communication conditions are poor (messages are delayed or lost) or when work load is low. Storage space may become a serious concern for large problems because the algorithm permits (and benefits from) the replication of data. However, the results we obtained encourage us to continue our research in this direction.
### 6.2 Simulation Framework
We used Parsec to develop our simulation system. Parsec is a C-based simulation language for sequential and parallel execution of discrete-event simulation models. Processes are modeled by objects; interactions among objects are modeled by time stamped message exchanges.
Our simulation system incorporates a detailed representation of the load balancing, failure recovery, and termination detection mechanisms. We do not yet include the membership protocol: hence, the pool of resources is predetermined and varies only with failures. Each process, after it has solved a B&B subproblem, checks to see whether any messages are pending. If it received a work request, it satisfies the request if there are enough problems in its active pool. If it received a work report, it merges that report with its local information on completed problems and contracts the result.
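The per-process step can be pictured with the following stand-in; the message format, pool-splitting rule, and class layout are our own simplifications, not the Parsec model itself.

```python
from dataclasses import dataclass, field

@dataclass
class Worker:
    pool: list = field(default_factory=list)     # active subproblem codes
    completed: set = field(default_factory=set)  # completed codes seen
    inbox: list = field(default_factory=list)
    threshold: int = 2

    def step(self):
        if self.pool:
            self.pool.pop()                      # "solve" one subproblem
        pending, self.inbox = self.inbox, []
        for kind, sender, payload in pending:
            if kind == "request" and len(self.pool) > self.threshold:
                half = len(self.pool) // 2       # donate part of the pool
                sender.inbox.append(("work", self, self.pool[:half]))
                del self.pool[:half]
            elif kind == "work":
                self.pool.extend(payload)
            elif kind == "report":
                self.completed |= set(payload)   # then contract the table

a, b = Worker(pool=[1, 2, 3, 4, 5, 6]), Worker()
a.inbox.append(("request", b, None))
a.step()                                         # a donates work to b
```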
The simulation was configured so that it could be driven either by real (precomputed) B&B trees or by random trees. For real problems, we tested our algorithm on a set of basic trees that we obtained from an instrumented B&B code. Basic trees are trees generated by executing a branch-and-bound algorithm without eliminating the unpromising nodes.
For each node in the tree, we have the following information: (1) the node identifier, (2) its bound value, (3) the time needed for computing the bound value and expanding the node or determining infeasibility, and (4) a value specifying whether the bound value is a feasible solution. The bound values are used for pruning the test tree and obtaining the B&B tree, and for computing the optimal solution. The time value is used for simulating the execution time needed for the bounding operation. Notice that the time values determine the granularity of the subproblems. During our experiments, we tuned this granularity by multiplying all time values by a constant factor, and we studied how granularity affects the overall performance of the B&B algorithm.
Running simulations on basic trees leaves enough room for generating different B&B trees, depending on communication characteristics (for example, up-to-date information about the best-known solution influences pruning decisions) and on the number of processors (because the number of nodes expanded may vary with the number of processors). Note that the basic branch-and-bound operation decompose is recorded within the basic tree structure.
Because the amount of communication and storage space depends on the shape and the size of the tree, testing trees that resulted from solving real problems provides better accuracy. However, the creation of basic trees is computationally infeasible for anything but small problems. For testing reliability, and later scalability, the number of nodes is the only important feature of the test tree. Therefore, we enriched our set of test trees with randomly created trees of various sizes and tested them without eliminating the unpromising nodes.
### 6.3 Results
Our simulator measured execution time, communication costs, and storage space. We tested the algorithm on relatively small problems (up to tens of thousands of nodes expanded), with no optimization efforts: work reports are sent to randomly chosen resources, without eliminating redundant messages. When out of work, resources ask randomly chosen resources for work, without using previous experience to increase performance.
#### 6.3.1 Algorithm Performance
Figure 3 shows results obtained for a small problem (approximately 3500 nodes expanded) with an average granularity of 0.01 seconds per node. For this problem, the overhead introduced by the algorithm reaches 36% for 8 processors. This is determined by three factors: (1) the relatively high communication costs considered ($`1.5+0.005\times L`$ milliseconds for messages of size $`L`$ bytes); (2) the cost of the dynamic load balancing mechanism for a network of workstations; and (3) the small granularity of the subproblems. We will see that for a larger problem (Table 1) the overhead is much lower (15.58% of the total execution for 100 processors, of which 13.67% are load balancing costs, 0.78% communication time, and 1.13% list contraction time). Furthermore, this overhead can be controlled by tuning various execution parameters. For example, less frequent termination verification leads to lower list contraction costs but may increase idle time. Sending work reports less frequently may decrease communication time and list contraction costs but may increase termination detection time, because of lack of information. If the failure recovery mechanism is activated (decides that a problem was lost and recreates it) less often, the overhead introduced (list contraction and redundant work costs) is lower, but recovery in case of failure is also slower.
The tests we performed on larger problems (total uniprocessor execution time of around 75 hours) show that communication and storage space costs remain negligible (Table 1). We find that good performance is achieved on up to 100 processors. These preliminary results encourage us to continue evaluating our algorithm on larger problems, with larger numbers of resources.
Communication per processor increases with the number of processors because the number of work reports sent per processor increases: since the workload per processor is lower, processes are idle for longer periods of time, suspect termination, and send more work reports. Storage space is measured for the entire system. The results obtained—43 MB of storage space for 100 machines—are promising.
A normal trend would be for the amount of time spent on list contraction to increase with the number of processors, since the number of messages circulated within the system increases and the receipt of a work report message requires a list contraction procedure. But because this depends on how the subproblems are assigned to processors, a lucky configuration may lead to unexpectedly good results (as for 100 processors, Table 1).
The amount of redundant work performed is another interesting measure of our algorithm that remains to be evaluated. However, this amount can be reduced by tuning parameters (for example, how soon failure is suspected after a machine unsuccessfully tries to get work) or by designing more sophisticated methods for picking up unsolved problems.
When varying problem granularity (by multiplying the time needed to solve a problem by constant factors), we observed the following (not unexpected) behavior: the number of nodes expanded may vary, because information on the best-known solution is computed at different moments; load balance is better when granularity is coarser; and communication increases unnecessarily because work reports are sent at fixed time intervals. This last observation taught us that for scalability, we need to design an adaptive mechanism for deciding how often work reports should be sent, based on information collected at runtime: for example, information about execution time per subproblem and the frequency of messages received.
#### 6.3.2 Fault Tolerance
Because our termination detection mechanism operates by detecting that all expanded problems have been completed, it is straightforward to verify that our fault-tolerance algorithm is working correctly—we simply verify that termination is detected. For visualizing the behavior of the algorithm, we used Jumpshot, a graphical visualization tool for the clog log file format. We used the MPE library developed by the MPICH team at Argonne National Laboratory for logging the execution profile.
Figures 5 and 6 are snapshots of the execution of the algorithm on a very small problem. Figure 5 shows the behavior of the algorithm in the absence of failures. The same problem is presented in Figure 6, where two of the three processors fail at about 85% of the execution time. The only processor available after this moment is able to solve the problem and terminate.
## 7 Conclusions and Future Work
We presented a failure-recovery mechanism suited for a tree-like problem space. This mechanism and a low-cost group membership protocol are the ingredients that transform a rather conventional parallel branch-and-bound algorithm into a scalable, reliable, more powerful algorithm, able to exploit the computational power of hundreds of Internet-connected resources. Scalability is achieved through a fully distributed design. The algorithm is fault tolerant under our assumptions and can execute and terminate correctly even if only a single resource remains available.
We solved the difficult problems of fault tolerance and termination detection in distributed environments by exploiting problem-specific features, specifically the tree structure of the problem space. While the mechanism we propose is not applicable to all distributed computations, we believe that a large class of problems can benefit from it.
We have used simulation studies to explore the behavior of our algorithm. Initial results on relatively small problems and up to 100 processors are promising: performance is good despite the lack of optimization, communication costs are reasonable, and storage space costs are negligible. However, we need results on a much larger number of processors. We plan to introduce the group membership protocol into our simulations and to test the algorithm under various network conditions. An interesting issue to study is how network characteristics influence the performance of the algorithm in general, and the costs introduced by the failure-recovery mechanism in particular. Also, in order to accurately analyze scalability issues, we plan to design a flexible scheme for adapting parameters to runtime information, such as total execution time and execution time per problem.
# Isoscalar dipole mode in relativistic random phase approximation
## Abstract
The isoscalar giant dipole resonance structure in <sup>208</sup>Pb is calculated in the framework of a fully consistent relativistic random phase approximation, based on effective mean-field Lagrangians with nonlinear meson self-interaction terms. The results are compared with recent experimental data and with calculations performed in the Hartree-Fock plus RPA framework. Two basic isoscalar dipole modes are identified from the analysis of the velocity distributions. The discrepancy between the calculated strength distributions and current experimental data is discussed, as well as the implications for the determination of the nuclear matter incompressibility.
The study of the isoscalar giant dipole resonance (IS GDR) might provide important information on the nuclear matter compression modulus $`K_{\mathrm{nm}}`$. This somewhat elusive quantity defines basic properties of nuclei, supernova explosions, neutron stars, and heavy-ion collisions. The range of values of $`K_{\mathrm{nm}}`$ has been deduced from the measured energies of the isoscalar giant monopole resonance (GMR) in spherical nuclei. The complete experimental data set on the isoscalar GMR, however, does not constrain $`K_{\mathrm{nm}}`$ to better than 200-300 MeV. Also, microscopic calculations of GMR excitation energies have not really restricted the range of allowed values for the nuclear matter compression modulus. On the one hand, modern non-relativistic Hartree-Fock plus random phase approximation (RPA) calculations, using both Skyrme and Gogny effective interactions, indicate that the value of $`K_{\mathrm{nm}}`$ should be in the range 210-220 MeV. In relativistic mean-field models, on the other hand, results of both time-dependent and constrained calculations suggest that empirical GMR energies are best reproduced by an effective force with $`K_{\mathrm{nm}}`$ in the range 250-270 MeV.
In principle, complementary information about the nuclear incompressibility, and therefore by extension about the nuclear matter compression modulus, could be obtained from the other compression mode: giant isoscalar dipole oscillations. To first order, the isoscalar dipole mode corresponds to spurious center-of-mass motion. The IS GDR is a second-order effect, built on $`3\hbar \omega `$ or higher configurations. It can be visualized as a compression wave traveling back and forth through the nucleus along a definite direction: the “squeezing mode”. There are very few data on the IS GDR in nuclei (the current experimental status has been reviewed in Ref. ). In particular, recent results on the IS GDR obtained by using inelastic scattering of $`\alpha `$ particles have been reported for <sup>208</sup>Pb, and for <sup>90</sup>Zr, <sup>116</sup>Sn, <sup>144</sup>Sm, and <sup>208</sup>Pb. As in the case of giant monopole resonances, data on heavy spherical nuclei are particularly significant for the determination of the nuclear matter compression modulus: for example, <sup>208</sup>Pb. However, recent experimental data on IS GDR excitation energies in this nucleus disagree: the centroid energy of the isoscalar dipole strength distribution is at $`22.4\pm 0.5`$ MeV in Ref. , while the value $`19.3\pm 0.3`$ MeV has been reported in Ref. . In the analysis of Ref. , the “difference of spectra” technique was employed to separate the IS GDR from the high-energy octupole resonance (HEOR) in the $`0^\mathrm{o}`$-$`2^\mathrm{o}`$ $`\alpha `$-scattering spectrum for <sup>208</sup>Pb. On the other hand, in the experiment on <sup>90</sup>Zr, <sup>116</sup>Sn, <sup>144</sup>Sm, and <sup>208</sup>Pb of Ref. , the mixture of isoscalar $`L=1`$ (IS GDR) and $`L=3`$ (HEOR) multipole strength could not be separated by a peak-fitting technique. Instead, the data were analyzed by a multipole analysis of 1 MeV slices of the data over the giant resonance structure, obtained by removing the underlying continuum.
In Ref. it has also been pointed out that the experimental IS GDR centroid energies, and therefore the corresponding values of the nuclear incompressibility $`K_A`$, are not consistent with those derived from the measured energies of the isoscalar GMR in <sup>208</sup>Pb. In the sum-rule approach to the compression modes, two different models have been considered for the description of the collective motion: the hydrodynamical model and the generalized scaling model. The assumption of the scaling model leads to a difference of more than $`40\%`$ between the values of the finite-nucleus incompressibility $`K_A`$ when extracted from the experimental energies of the IS GDR and of the GMR in <sup>208</sup>Pb. A consistent value for $`K_A`$ can be derived from the experimental excitation energies only if the two compression modes are described in the hydrodynamical model. The resulting value of $`K_A\approx 220`$ MeV, however, is much too high; in fact, it corresponds to the nuclear matter compression modulus $`K_{\mathrm{nm}}`$ derived from non-relativistic Hartree-Fock plus RPA calculations. This is not difficult to understand, since the expressions for $`K_A`$ in both models were derived in the limit of large systems and consequently do not account for surface effects. Both models, however, are approximations to a full quantum description: the time-dependent Hartree-Fock or, equivalently, the RPA. Therefore, fully microscopic calculations might be necessary in order to resolve the apparent discrepancy between the values of $`K_A`$ extracted from the IS GDR and the GMR in <sup>208</sup>Pb.
Non-relativistic self-consistent Hartree-Fock plus RPA calculations of the dipole compression modes in nuclei were reported in the work of Van Giai and Sagawa, and more recently in Refs. and . A number of different Skyrme parameterizations were used in these calculations, and the result is that all of them systematically overestimate the experimental values of the IS GDR centroid energies, not only for <sup>208</sup>Pb but also for lighter nuclei. In particular, those interactions that reproduce the experimental excitation energies of the GMR (SGII and SkM) predict centroid energies of the IS GDR in <sup>208</sup>Pb that are 4-5 MeV higher than those extracted from small-angle $`\alpha `$-scattering spectra. In Ref. , effects that go beyond the mean-field approximation have been considered: the inclusion of the continuum and of $`2p`$-$`2h`$ coupling. It has been shown that the coupling of RPA states to $`2p`$-$`2h`$ configurations, although it reproduces the total width, results in a downward shift of the resonance energy of less than 1 MeV with respect to the RPA value. It appears, therefore, that the presently available data on excitation energies of the compression modes in nuclei, the GMR and the IS GDR, cannot be consistently reproduced by theoretical models.
In Ref. we have performed time-dependent and constrained relativistic mean-field calculations for the monopole giant resonances in a number of spherical closed-shell nuclei, from <sup>16</sup>O to <sup>208</sup>Pb. It has been shown that, in the framework of relativistic mean-field theory, a nuclear matter compression modulus $`K_{\mathrm{nm}}`$ in the range 250-270 MeV is in reasonable agreement with the available data on spherical nuclei. This value is approximately 20% larger than the values deduced from non-relativistic density-dependent Hartree-Fock calculations with Skyrme or Gogny forces. In particular, among the presently available effective Lagrangian parameterizations, the NL3 effective force, with $`K_{\mathrm{nm}}=271.8`$ MeV, provides the best description of the mass dependence of the GMR excitation energies. Preliminary calculations with the time-dependent relativistic mean-field model indicate that the NL3 effective interaction, which reproduces exactly the excitation energy of the GMR in <sup>208</sup>Pb (14.1 MeV), overestimates the reported centroid energy of the IS GDR by at least 4 MeV. However, due to complications arising from the spurious center-of-mass motion, the time-dependent relativistic mean-field computer code develops a numerical instability which prevents the precise determination of the IS GDR excitation energy. In the present analysis, therefore, we apply the relativistic random phase approximation (RRPA) to the description of the isoscalar dipole oscillations in <sup>208</sup>Pb.
The RRPA represents the small-amplitude limit of the time-dependent relativistic mean-field theory. Self-consistency therefore ensures that the same correlations which define the ground-state properties also determine the behavior of small deviations from equilibrium. The same effective Lagrangian generates the Dirac-Hartree single-particle spectrum and the residual particle-hole interaction. Some of the earliest applications of the RRPA to finite nuclei include the description of low-lying negative-parity excitations in <sup>16</sup>O, and studies of isoscalar giant resonances in light and medium-mass nuclei. These RRPA calculations, however, were based on the simplest linear $`\sigma \omega `$ relativistic mean-field model. It is well known that for a quantitative description of ground and excited states in finite nuclei, density-dependent interactions have to be included in the effective Lagrangian through the nonlinear meson self-interaction terms. The RRPA response functions with nonlinear meson terms have been derived in Refs. , and applied in studies of isoscalar and isovector giant resonances. However, the calculated excitation energies did not reproduce the values obtained with the time-dependent relativistic mean-field model. The reason was that the RRPA configuration spaces used in Refs. did not include the negative-energy Dirac states. In Ref. it has been shown that an RRPA calculation, consistent with the mean-field model in the no-sea approximation, necessitates configuration spaces that include both particle-hole pairs and pairs formed from occupied states and negative-energy states. The contributions from configurations built from occupied positive-energy states and negative-energy states are essential for current conservation and the decoupling of the spurious state. In addition, configurations which include negative-energy states give an important contribution to the collectivity of excited states. In a recent study we have shown that, in order to reproduce the results of time-dependent relativistic mean-field calculations for giant resonances, the RRPA configuration space must contain negative-energy Dirac states, and the two-body matrix elements must include contributions from the spatial components of the vector meson fields. The effects of the Dirac sea on the excitation energy of the giant monopole states have also recently been studied in an analytic way within the $`\sigma \omega `$ model.
In Fig. 1 we display the IS GDR strength distributions in <sup>208</sup>Pb:
$$B^{T=0}(E1,1_i\rightarrow 0_f)=\frac{1}{3}|\langle 0_f||\widehat{Q}_1^{T=0}||1_i\rangle |^2,$$
(1)
where the isoscalar dipole operator is
$$\widehat{Q}_{1\mu }^{T=0}=e\underset{i=1}{\overset{A}{\sum }}\gamma _0(r_i^3-\eta r_i)Y_{1\mu }(\theta _i,\phi _i),$$
(2)
and
$$\eta =\frac{5}{3}\langle r^2\rangle _0.$$
(3)
The calculations have been performed within the framework of the self-consistent Dirac-Hartree plus relativistic RPA. The effective mean-field Lagrangian contains nonlinear meson self-interaction terms, and the configuration space includes both particle-hole pairs, and pairs formed from hole states and negative-energy states. The choice of the dipole operator (2), with the parameter $`\eta `$ determined by the condition of translational invariance, ensures that the IS GDR strength distribution does not contain spurious components that correspond to the center-of-mass motion. The strength distributions in Fig. 1 have been calculated with the NL1 ($`K_{\mathrm{nm}}=211.7`$ MeV), NL3 ($`K_{\mathrm{nm}}=271.8`$ MeV), and NL-SH ($`K_{\mathrm{nm}}=355.0`$ MeV) effective interactions. These three forces, in order of increasing values of the nuclear matter compressibility modulus, have been extensively used in the description of a variety of properties of finite nuclei, not only those along the valley of $`\beta `$-stability, but also of exotic nuclei close to the particle drip lines. In particular, in Ref. it has been shown that the NL3 ($`K_{\mathrm{nm}}=271.8`$ MeV) effective interaction provides the best description of experimental data on isoscalar giant monopole resonances.
The calculated strength distributions are similar to those obtained within the non-relativistic Hartree-Fock plus RPA framework using Skyrme effective forces. In disagreement with reported experimental results, all theoretical models predict a substantial amount of isoscalar dipole strength in the 8-14 MeV region. The centroid energies of the distributions in the high-energy region between 20 and 30 MeV are 4-5 MeV higher than those extracted from the experimental spectra. It also appears that the centroid energies of the low-energy distribution do not depend on the nuclear matter incompressibility of the effective interactions. On the other hand, the IS GDR strength distributions in the low-energy region display the expected mass dependence. We have also performed calculations for a number of lighter spherical nuclei, and verified that with increasing mass the centroid is indeed shifted to lower energy. When comparing with experimental data, it should be pointed out that the usable excitation energy bite in the experiment reported in Ref. was 14-29 MeV, and therefore low-energy isoscalar dipole strength could not be observed. In this respect, somewhat more useful are the data from the experiment reported in Ref. , where spectra in the energy range $`4<E_x<60`$ MeV have been observed. The results of a DWBA analysis of the experimental spectra, however, attribute the isoscalar strength in the 10-15 MeV region exclusively to the giant monopole (GMR) and giant quadrupole (GQR) resonances. It should be emphasized that a possible excitation of isoscalar dipole strength in this energy region, and its interference with the GQR, cannot be excluded.
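For reference, the centroid energies quoted here follow the standard moment-ratio definition $`E_c=m_1/m_0`$, with $`m_k=\sum _iE_i^kB_i`$ for a discrete strength distribution; the snippet below evaluates it for a made-up two-peak distribution.

```python
import numpy as np

def centroid(energies, strengths):
    """Centroid E_c = m1/m0 of a discrete strength distribution."""
    E, B = np.asarray(energies), np.asarray(strengths)
    return np.sum(E * B) / np.sum(B)

# toy two-peak distribution mimicking the low/high-energy IS GDR regions
print(centroid([10.35, 26.01], [1.0, 2.0]))      # weighted toward 26 MeV
```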
In the high-energy region the calculated dipole strength exhibits the expected dependence on the nuclear matter compressibility modulus of the effective interactions (NL1, NL3, NL-SH). The centroid of the strength distribution is shifted to higher energy with increasing values of $`K_{\mathrm{nm}}`$. These energies, however, are considerably higher than the corresponding experimental IS GDR centroids. Moreover, in order to determine the IS GDR excitation energy precisely from the experimental spectrum, the dipole strength has to be separated from the high-energy octupole resonance (HEOR), and this is not always possible. Using the NL3 effective interaction, we have calculated the octupole strength distribution. The centroid of the HEOR is found at approximately 22 MeV, well below the IS GDR main peak, but more than 2 MeV above the experimental value for the HEOR centroid. Incidentally, our calculated HEOR peak approximately coincides with the experimental value of the IS GDR centroid.
The IS GDR transition densities for <sup>208</sup>Pb are shown in Fig. 2. The transition densities correspond to the NL3 strength distribution in Fig. 1. Since it appears that none of the effective interactions reproduces the experimental position of the IS GDR, the remainder of the present analysis will be only qualitative, and we choose to display only results obtained with the NL3 set of Lagrangian parameters. On the qualitative level, the other two effective interactions produce similar results. In Fig. 2 we plot the proton (dot-dashed), neutron (dashed), and total (solid) transition densities for two representative peaks from Fig. 1: 10.35 MeV (a) is the central peak in the low-energy region, and 26.01 MeV (b) is the energy of the main peak in the region above 20 MeV. The transition densities for both peaks exhibit a radial dependence characteristic of the isoscalar dipole mode, and they can be compared with the corresponding transition densities in the scaling model, or with those which result from constrained calculations. While for the high-energy peak the proton and neutron transition densities display an almost identical radial dependence, the pattern is more complicated for the peak at 10.35 MeV.
RPA calculations, therefore, predict the fragmentation of the isoscalar dipole strength distribution into two broad structures: one in the energy window between 8 and 14 MeV, and the other in the high-energy region around 25 MeV. The position of the low-energy structure does not depend on the compressibility modulus, i.e. it does not correspond to a compression mode. Additional information on the underlying collective dynamics can be obtained through a study of the transition currents. In Fig. 3 we plot the velocity fields for the two peaks at 10.35 MeV (a) and 26.01 MeV (b). The velocity distributions are derived from the corresponding transition densities, following the procedure described in Ref. . The “squeezing” compression mode is identified from the flow pattern which corresponds to the high-energy peak at 26.01 MeV. The flow lines concentrate in the two “poles” on the symmetry axis at $`z\approx \pm 2.5`$ fm. The velocity field corresponds to a density distribution which is being compressed in the lower half-plane and expands in the upper half-plane. The centers of compression and expansion are located on the symmetry axis, at approximately half the distance between the center and the surface of the nucleus. It is obvious that the excitation energy of this mode strongly depends on the compressibility modulus. The flow pattern for the lower peak at 10.35 MeV is very different. The flow lines describe a kind of toroidal motion, which is caused by the surface effect of the finite nucleus. The density wave travels through the nucleus along the symmetry axis. The reflection of the wave on the surface, however, induces radial components in the velocity field. Although it corresponds to dipole oscillations, this is not a compression mode. We have verified that other dipole states in this energy region also display similar velocity fields.
In conclusion, the isoscalar giant dipole resonance in <sup>208</sup>Pb has been calculated in the framework of the relativistic RPA, based on effective mean-field Lagrangians with meson self-interaction terms. The results have been compared with recent experimental data and with calculations performed in the Hartree-Fock plus RPA framework. While the results of the present RRPA study are consistent with previous theoretical analyses, they strongly disagree with reported experimental data on the position of the IS GDR centroid energy in <sup>208</sup>Pb. This is a serious problem, not only because the disagreement between theory and experiment is an order of magnitude larger than for other giant resonances, but also because the present data on the IS GDR are not consistent with the value of the nuclear incompressibility $`K_A`$ derived from the measured excitation energy of the isoscalar GMR. This inconsistency could perhaps be explained by a possible excitation of isoscalar dipole strength in the low-energy window between 8 MeV and 14 MeV. Although predicted by all theoretical models, the low-lying IS GDR strength is not observed in the experimental spectra. From the analysis of the velocity fields, we have identified two basic isoscalar dipole modes. The “squeezing” compression mode is found in the high-energy region at approximately 26 MeV. The low-energy dipole mode does not correspond to a compression mode, and its dynamics is determined by surface effects.
Acknowledgments
We thank P.F. Bortignon, G.Colò, U. Garg, Z.Y. Ma, and N. Van Giai for useful comments. This work has been supported in part by the Bundesministerium für Bildung und Forschung under contract 06 TM 875.
Figure Captions
* Fig.1 IS GDR strength distributions in <sup>208</sup>Pb calculated with the NL1 (dashed), NL3 (solid), and NL-SH (dot-dashed) effective interactions.
* Fig. 2 IS GDR transition densities for <sup>208</sup>Pb calculated with the NL3 parameter set. Proton (dot-dashed), neutron (dashed), and total (solid) transition densities are displayed for the peaks at 10.35 MeV (a) and 26.01 MeV (b).
* Fig. 3 Velocity distributions for the two isoscalar dipole modes in <sup>208</sup>Pb calculated with the NL3 effective interaction. The velocity fields correspond to the two peaks at 10.35 MeV (a) and 26.01 MeV (b).
# Competitive Dynamics of Web Sites
## 1 Introduction
The emergence of an information era mediated by the Internet brings about a number of novel and interesting economic problems. Chief among them is the realization that ever-decreasing costs in communication and computation are making the marginal cost of transmitting and disseminating information essentially zero. As a result, the standard formulation of competitive equilibrium theory is inapplicable to the Internet economy. This is because the theory of competitive equilibrium focuses on the dynamics of price adjustments in situations where both the aggregate supply and demand are a function of the current prices of the commodities. Since on the Internet the price of a web page is essentially zero, supply will always match demand, and the only variable quantity that one needs to consider is the aggregate demand, i.e. the number of customers willing to visit a site or download information or software. As we will show, this aggregate demand can evolve in ways that are quite different from those of price adjustments.
A particular instance of this different formulation of competitive dynamics is provided by the proliferation of web sites that compete for the attention and resources of millions of consumers, often at immense marketing and development costs. As a result, the number of visitors alone has become a proxy for the success of a web site, the more so in the case of advertising-based business models, where a well-defined price is placed on every single page view. In this case, most customers do not pay a real price for visiting a web site. The only cost a visitor incurs is the time spent viewing an ad-banner, but this cost is very low and practically constant. Equally interesting, visits to a web site are such that there is non-rival consumption in the sense that one’s access to a site does not depend on other users viewing the same site. This can be easily understood in terms of Internet economics: once the fixed development cost of setting up a web site has been paid, it is relatively inexpensive to increase the site’s capacity to meet increased demand. Thus, the supply of served web pages will always track the demand for web pages (neglecting network congestion issues) and will be offered at essentially zero cost.
The economics of information goods such as the electronic delivery of web pages has recently been reviewed by Smith et al. . They show that when the marginal reproduction cost approaches zero, new strategies and behaviors appear, in particular with respect to bundling , price dispersion , value pricing versus cost pricing , versioning , and complicated price schedules .
Since supply matches demand when the price become negligibly small, the only variable quantity that we will consider in our model is the aggregate demand, i.e. the number of customers willing to visit a site. This is the quantity for which we study the dynamics as a function of the growth and capacity of web sites, as well as the competition between them. In particular, we explore the effects that competitive pressures among web sites have on their ability to attract a sizeable fraction of visitors who can in principle visit a number of equivalent sites. This is of interest in light of results obtained by Adamic and Huberman , who showed that the economics of the Internet are such that the distribution of visitors per site follows a power-law characteristic of winner-take-all markets. They also proposed a growth model of the Internet to account for this behavior which invokes either the continuous appearance of new web sites or different growth rates for sites.
While such a theory accounts for the dynamics of visits to sites, it does not take into account actions that sites might take to make potential visitors to several similar sites favor one over the other. As we show, when such mechanisms are allowed, the phenomenon of winner-take-all markets emerges in a rather surprising way, and persists even in situations where no new sites are continuously created.
Our work also explains results obtained from computer simulations of competition between web sites by Oğuş et al. . Their experiments show that brand loyalty and network effects together result in a form of winner-take-all market, in which only a few sites survive. This is consistent with the predictions of our theory.
In section 2 we present the model and illustrate its main predictions by solving the equations in their simplest instance in section 3. In section 4 we show that the transition from fair market share to winner-take-all persists in the general case of competition between two sites, and in section 5 we extend our results to very many sites. We also show the appearance of complicated cycles and chaotic outcomes when the values of the competitive parameters are close to the transition point. A concluding section summarizes our results and discusses their implications for electronic commerce.
## 2 The Model
Consider $`n`$ web sites offering similar services and competing for the same population of users, which we’ll take to be much larger than the number of sites. Each site engages in policies, from advertising to price reductions, that try to increase their share of the customer base $`f_i`$. Note that while $`f_i`$ is the fraction of the population that is a customer of web site $`i`$, it can be more generally taken to be the fraction of the population aware of the site’s existence. This could be measured by considering the number of people who bookmark a particular site.
The time evolution of the customer fraction $`f_i`$ at a given site $`i`$ is determined by two main factors. If there is no competition with any other sites, the user base initially grows exponentially fast, at a rate $`\alpha _i`$, and then saturates at a value $`\beta _i`$. These values are determined by the site’s capacity to handle a given number of visitors per unit time. If, on the other hand, other sites offer competing services, the strength of the competition determines whether the user will be likely to visit several competing sites (low competition levels) or whether having visited a given site reduces the probability of visiting another (high competition level).
Specifically, the competition term can be understood as follows: if fractions $`f_i`$ and $`f_j`$ of the people use sites $`i`$ and $`j`$, respectively, then assuming that the probability of using one site is independent of using another, a fraction $`f_if_j`$ will be using both sites. However, if both sites provide similar services, then some of these users will stop using one or the other site. The rate at which they will stop using site $`i`$ is given by $`\gamma _{ij}f_if_j`$, and the rate at which they abandon site $`j`$ is given by $`\gamma _{ji}f_if_j`$ (note that $`\gamma _{ij}`$ is not necessarily equal to $`\gamma _{ji}`$).
Mathematically the dynamics can thus be expressed as
$`{\displaystyle \frac{\mathrm{d}f_i}{\mathrm{d}t}}`$ $`=`$ $`\alpha _if_i(\beta _i-f_i)-{\displaystyle \sum _{j\ne i}}\gamma _{ij}f_if_j,`$ (1)
where $`\alpha _i`$ is the growth rate of individual sites, $`\beta _i`$ denotes their capacity to service a fraction of the customer base and $`\gamma _{ij}`$ is the strength of the competition. The parameter values are such that $`\alpha _i0`$, $`0\beta _i1`$ and $`\gamma _{ij}0`$.
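As a concrete illustration of these dynamics, the following minimal sketch (our addition; the parameter values, initial conditions and step size are illustrative assumptions, not those of the simulations reported below) integrates Eq. (1) with a fixed-step fourth-order Runge-Kutta scheme:

```python
import numpy as np

def rhs(f, alpha, beta, gamma):
    # df_i/dt = alpha_i f_i (beta_i - f_i) - sum_{j != i} gamma_ij f_i f_j
    # (gamma is assumed to have a zero diagonal, so gamma @ f is the j != i sum)
    return alpha * f * (beta - f) - f * (gamma @ f)

def integrate(f0, alpha, beta, gamma, dt=0.01, steps=5000):
    f = np.array(f0, dtype=float)
    for _ in range(steps):
        k1 = rhs(f, alpha, beta, gamma)
        k2 = rhs(f + 0.5 * dt * k1, alpha, beta, gamma)
        k3 = rhs(f + 0.5 * dt * k2, alpha, beta, gamma)
        k4 = rhs(f + dt * k3, alpha, beta, gamma)
        f = f + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return f

# Two identical sites, weak versus strong competition:
alpha, beta = np.ones(2), np.ones(2)
for g in (0.5, 1.5):
    gamma = np.full((2, 2), g)
    np.fill_diagonal(gamma, 0.0)
    print(g, integrate([0.10, 0.11], alpha, beta, gamma))
```

For weak competition both sites approach the fair-share value $`1/(1+\gamma )`$, while for strong competition the slightly larger site takes the whole market, anticipating the analysis below.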
The system of equations (1), which determines the nonlinear dynamics of user visits to web sites, possesses a number of attractors whose stability properties we will explore in detail The equations are functionally similar to those describing the competition between modes in a laser , and to those describing prey-predator equations in ecology .. In particular, we will show that as a function of the competition level, the solutions can undergo bifurcations which render a particular equilibrium unstable and lead to the appearance of new equilibria. The most striking result among them is the sudden appearance of a winner-take-all site which captures most of the visitors, a phenomenon that has been empirically observed in a study of markets in the web .
Since the complexity of the equations may obscure some the salient features of the solutions, we will first concentrate on the simplest case exhibiting a sharp transition from fair market share to a winner-take-all site, and then consider more complicated examples.
## 3 Fair Market Share to Winner-Take-All
Let us first consider one of the simplest instances of the problem described above, in which two web sites have the same growth rates $`\alpha _1=\alpha _2=1`$, the same capacities $`\beta _1=\beta _2=1`$ and symmetric competition $`\gamma _{12}=\gamma _{21}=\gamma `$. In this case the equations take the form
$`{\displaystyle \frac{\mathrm{d}f_1}{\mathrm{d}t}}`$ $`=`$ $`f_1(1-f_1-\gamma f_2)`$
$`{\displaystyle \frac{\mathrm{d}f_2}{\mathrm{d}t}}`$ $`=`$ $`f_2(1-f_2-\gamma f_1)`$
The four fixed points of this equation, which determine the possible equilibria, are given by
$$(f_1,f_2)\in \{(0,0),(1,0),(0,1),(\frac{1}{1+\gamma },\frac{1}{1+\gamma })\}$$
Since not all of these equilibria are stable under small perturbations, we need to determine their time evolution when subjected to a sudden small change in the fraction of visitors to any site. To do this, we need to compute the eigenvalues of the Jacobian evaluated at each of the four fixed points. The Jacobian is
$$𝐉=\left(\begin{array}{cc}1-2f_1^0-\gamma f_2^0& -\gamma f_1^0\\ -\gamma f_2^0& 1-2f_2^0-\gamma f_1^0\end{array}\right)$$
and the eigenvalues at each of the fixed points are given in the following table:
| equilibrium | eigenvalues |
| --- | --- |
| $`(0,0)`$ | 1 (twice) |
| $`(\frac{1}{1+\gamma },\frac{1}{1+\gamma })`$ | $`-1`$ and $`\frac{\gamma -1}{1+\gamma }`$ |
| $`(1,0)`$ or $`(0,1)`$ | $`\frac{1}{2}(-\gamma \pm \sqrt{(2-\gamma )^2})`$ |
From this it follows that the fixed point $`(0,0)`$ is never stable. On the other hand, the equilibrium at $`(\frac{1}{1+\gamma },\frac{1}{1+\gamma })`$ is stable provided that $`\gamma <1`$. And the fixed points $`(0,1)`$ and $`(1,0)`$ are both stable if $`\gamma >1`$. From these results, we can plot the equilibrium size of the customer population as a function of the competition $`\gamma `$ between the two competitors. As Figure 1 shows, there is a sudden, discontinuous transition at $`\gamma =1`$. For low competition, the only stable configuration has both competitors sharing the market equally. For high competition, the market transitions into a “Winner-Take-All Market” , in which one competitor grabs all the market share, whereas the other gets nothing.
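The entries of this stability analysis are easy to confirm numerically. The sketch below (an illustration we add here, with the assumed values $`\gamma =0.5`$ and $`\gamma =1.5`$) evaluates the Jacobian at each fixed point and tests whether all eigenvalues have negative real part:

```python
import numpy as np

def jacobian(f1, f2, g):
    # Jacobian of the symmetric two-site system at the point (f1, f2)
    return np.array([[1 - 2*f1 - g*f2, -g*f1],
                     [-g*f2, 1 - 2*f2 - g*f1]])

for g in (0.5, 1.5):
    fair = 1.0 / (1.0 + g)
    for fp in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (fair, fair)]:
        eig = np.linalg.eigvals(jacobian(fp[0], fp[1], g))
        stable = bool(np.all(eig.real < 0))
        print(f"gamma={g}, fp={fp}: eigenvalues={np.round(eig, 3)}, stable={stable}")
```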
As we will show below, this sudden transition persists under extremely general conditions, for two competitors as well as for $`n`$ competitors. The significant feature is that a very small change in the parameters can radically affect the qualitative nature of the equilibrium.
Another feature is that near the transition, the largest eigenvalue of the stable state is very close to zero (but negative). This means that the time of convergence to equilibrium diverges. In more complicated systems, this may make it extremely difficult to predict which equilibrium the system will converge to in the long term.
## 4 Competition between two sites
We now analyse the two site model in its most general form, without restricting the values of the parameters to be the same for the two sites. The system of equations is
$`{\displaystyle \frac{\mathrm{d}f_1}{\mathrm{d}t}}`$ $`=`$ $`f_1(\alpha _1(\beta _1-f_1)-\gamma _{12}f_2)`$
$`{\displaystyle \frac{\mathrm{d}f_2}{\mathrm{d}t}}`$ $`=`$ $`f_2(\alpha _2(\beta _2-f_2)-\gamma _{21}f_1)`$
As in the previous section these equations possess four fixed points at:
$`(f_1^0,f_2^0)`$ $`=`$ $`(0,0)`$
$`(f_1^0,f_2^0)`$ $`=`$ $`(\beta _1,0)`$
$`(f_1^0,f_2^0)`$ $`=`$ $`(0,\beta _2)`$
$`(f_1^0,f_2^0)`$ $`=`$ $`({\displaystyle \frac{\alpha _2(\alpha _1\beta _1-\beta _2\gamma _{12})}{\alpha _1\alpha _2-\gamma _{12}\gamma _{21}}},{\displaystyle \frac{\alpha _1(\alpha _2\beta _2-\beta _1\gamma _{21})}{\alpha _1\alpha _2-\gamma _{12}\gamma _{21}}})`$
Let’s analyze the stability of each fixed point. This is done by evaluating the Jacobian
$$𝐉=\left(\begin{array}{cc}\alpha _1\beta _1-2\alpha _1f_1^0-\gamma _{12}f_2^0& -\gamma _{12}f_1^0\\ -\gamma _{21}f_2^0& \alpha _2\beta _2-2\alpha _2f_2^0-\gamma _{21}f_1^0\end{array}\right)$$
and computing its two eigenvalues. Each fixed point will be stable only if the real parts of both eigenvalues are negative. The first trivial fixed point is always unstable, since the eigenvalues $`\alpha _1\beta _1`$ and $`\alpha _2\beta _2`$ are both positive quantities. The second equilibrium, with $`(f_1,f_2)=(\beta _1,0)`$, is stable provided $`\frac{\gamma _{21}}{\alpha _2}>\frac{\beta _2}{\beta _1}`$. The third equilibrium $`(f_1,f_2)=(0,\beta _2)`$ is similarly stable only if $`\frac{\gamma _{12}}{\alpha _1}>\frac{\beta _1}{\beta _2}`$. The final case is the most complicated one, and is stable in three distinct regimes. However, two of the stable solutions have a negative $`f_1`$ or $`f_2`$ and can never be reached from an initial condition with both populations positive. The only remaining equilibrium is stable whenever the other two aren’t. In order to summarize these results in the table below, it is convenient to define the following parameters:
$$\alpha =\frac{\alpha _1}{\alpha _2},\qquad \beta =\frac{\beta _1}{\beta _2},\qquad \gamma _1=\frac{\gamma _{12}}{\alpha _1},\qquad \gamma _2=\frac{\gamma _{21}}{\alpha _2}.$$
| equilibrium | stable if |
| --- | --- |
| (0, 0) | never |
| $`(\beta _1,0)`$ | $`\beta >\frac{1}{\gamma _2}`$ |
| $`(0,\beta _2)`$ | $`\beta <\gamma _1`$ |
| $`(\frac{\beta _1-\beta _2\gamma _1}{1-\gamma _1\gamma _2},\frac{\beta _2-\beta _1\gamma _2}{1-\gamma _1\gamma _2})`$ | $`\beta >\gamma _1`$ and $`\beta <\frac{1}{\gamma _2}`$ |
Note that in the last row, $`\beta >\gamma _1`$ and $`\beta <\frac{1}{\gamma _2}`$ imply that $`\gamma _1\gamma _2<1`$. As a result, for fixed $`\gamma _1`$ and $`\gamma _2`$, there are two different regimes. Either $`\gamma _1\gamma _2<1`$, in which case intermediate values of $`\beta `$ ($`\gamma _1<\beta <\frac{1}{\gamma _2}`$) lead to a “fair” equilibrium in which both sites get a non zero $`f_i`$, or $`\gamma _1\gamma _2>1`$, in which case intermediate values of $`\beta `$ ($`\frac{1}{\gamma _2}<\beta <\gamma _1`$) lead to a situation in which either of the two “winner-take-all” equilibria is stable. In this latter case, hysteresis occurs if $`\beta `$ slowly changes with time. This is illustrated in Figure 2. Starting from a low value of $`\beta `$, the only stable equilibrium is $`(0,\beta _2)`$. If we slowly increase the ratio $`\beta `$, this equilibrium remains stable, until $`\beta >\gamma _1`$. At that point, the equilibrium $`(0,\beta _2)`$ becomes unstable and the system relaxes to the only new equilibrium $`(\beta _1,0)`$. If we now reverse the process, and decrease the value of $`\beta `$, the new solution remains stable as long as $`\beta >\frac{1}{\gamma _2}`$.
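The hysteresis loop just described can be reproduced with a few lines of code. In the sketch below (our illustration, with the assumed values $`\gamma _1=\gamma _2=1.5`$ so that $`\gamma _1\gamma _2>1`$) the ratio $`\beta `$ is swept up and then down, relaxing the dynamics at each step; reseeding extinct populations with a tiny value stands in for the small perturbations always present in practice:

```python
import numpy as np

a1 = a2 = 1.0; b2 = 1.0; g12 = g21 = 1.5   # gamma_1 = gamma_2 = 1.5

def relax(f, b1, steps=20000, dt=0.01):
    # crude forward-Euler relaxation of the two-site dynamics
    for _ in range(steps):
        df1 = f[0] * (a1 * (b1 - f[0]) - g12 * f[1])
        df2 = f[1] * (a2 * (b2 - f[1]) - g21 * f[0])
        f = [f[0] + dt * df1, f[1] + dt * df2]
    return f

f = [0.01, 0.5]
betas = list(np.linspace(0.4, 2.0, 9)) + list(np.linspace(2.0, 0.4, 9))
for b1 in betas:
    f = relax([max(f[0], 1e-6), max(f[1], 1e-6)], b1)
    print(f"beta={b1/b2:.2f}: f=({f[0]:.3f}, {f[1]:.3f})")
```

On the way up the system stays at $`(0,\beta _2)`$ until $`\beta `$ exceeds $`\gamma _1`$; on the way down it stays at $`(\beta _1,0)`$ until $`\beta `$ drops below $`1/\gamma _2`$.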
Thus, there always is at least one stable equilibrium, but there never are more than two. If there are two stable equilibria, then the initial conditions determine into which of the two the system will fall. Which of the equilibria are stable depends on a total of three parameters (down from the five parameters that are required to fully describe the system once the time variable is rescaled).
It is interesting to ask which of the two sites will win as a function of a set of fixed parameters and as a function of the starting point. To do this, we plot the motion of $`(f_1,f_2)`$ as a vector field in Figure 3, for a particular set of parameters. As can be seen, the space of initial conditions is divided into two distinct regions, each of which leads to a different equilibrium.
## 5 Many sites
### 5.1 Analytic treatment
We now show that the sharp transition to a winner-take-all market that we found in the two site case is also present when many sites are in competition. In order to do so, we first examine the case where the parameters are the same for all sites, $`i`$, so that Equation (1) can be rewritten as
$`{\displaystyle \frac{\mathrm{d}f_i}{\mathrm{d}t}}`$ $`=`$ $`f_i(\alpha \beta -\alpha f_i-\gamma {\displaystyle \sum _{j\ne i}}f_j).`$
where $`i=1,\dots ,n`$ and $`n`$ is the number of sites. For $`n`$ equations, there are $`2^n`$ different vectors $`(f_1,\dots ,f_n)`$ for which all the time derivatives are zero, since for each equation, either $`f_i=0`$ or $`\alpha \beta -\alpha f_i-\gamma \sum _{j\ne i}f_j=0`$ at equilibrium. Without loss of generality, we can relabel the $`f_i`$ such that the first $`k`$ of them are non-zero, while the remaining $`n-k`$ are zero.
At equilibrium, the value of the $`f_i`$ with $`1\le i\le k`$ will be given by the solution of
$`\left(\begin{array}{cccc}\alpha & \gamma & \cdots & \gamma \\ \gamma & \alpha & \cdots & \gamma \\ \vdots & \vdots & \ddots & \vdots \\ \gamma & \gamma & \cdots & \alpha \end{array}\right)\left(\begin{array}{c}f_1\\ f_2\\ \vdots \\ f_k\end{array}\right)=\left(\begin{array}{c}\alpha \beta \\ \alpha \beta \\ \vdots \\ \alpha \beta \end{array}\right)`$ (14)
Except for the degenerate cases, the matrix on the left hand side is invertible, so that
$`f_i=\{\begin{array}{cc}\frac{\alpha \beta }{\alpha +(k-1)\gamma }& \text{if }1\le i\le k\hfill \\ 0& \text{if }k+1\le i\le n\hfill \end{array}`$ (17)
We are now ready to compute the Jacobian about this equilibrium. It takes the form
$`𝐉=\left(\begin{array}{ccccccc}X& Y& \cdots & Y& 0& \cdots & 0\\ Y& X& \cdots & Y& 0& \cdots & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\ Y& Y& \cdots & X& 0& \cdots & 0\\ 0& 0& \cdots & 0& Z& \cdots & 0\\ \vdots & \vdots & & \vdots & \vdots & \ddots & \vdots \\ 0& 0& \cdots & 0& 0& \cdots & Z\end{array}\right)`$ (25)
where
$`X`$ $`=`$ $`\alpha \beta -(2\alpha +(k-1)\gamma ){\displaystyle \frac{\alpha \beta }{\alpha +(k-1)\gamma }}`$
$`Y`$ $`=`$ $`-\gamma {\displaystyle \frac{\alpha \beta }{\alpha +(k-1)\gamma }}`$
$`Z`$ $`=`$ $`\alpha \beta -k\gamma {\displaystyle \frac{\alpha \beta }{\alpha +(k-1)\gamma }}`$
The eigenvalues of the Jacobian are $`Z`$ (with multiplicity $`n-k`$), $`X-Y`$ (with multiplicity $`k-1`$) and $`X+(k-1)Y`$ (with multiplicity $`1`$). Note that the last two eigenvalues are absent if $`k=0`$. Thus there are four distinct cases to check: $`k=0`$, $`k=1`$, $`1<k<n`$ and $`k=n`$. In the first case, the only eigenvalues are $`Z=\alpha \beta >0`$, so this solution is always unstable.
With $`k=1`$, the eigenvalues are
$`X`$ $`=`$ $`-\alpha \beta `$
$`Z`$ $`=`$ $`(\alpha -\gamma )\beta `$
That is, an equilibrium with one out of $`n`$ winners is stable provided $`\alpha <\gamma `$. Note that this is the same condition we obtained for two competitors.
With $`1<k<n`$ we have the eigenvalues
$`Z`$ $`=`$ $`\alpha \beta {\displaystyle \frac{\alpha -\gamma }{\alpha +(k-1)\gamma }}`$
$`X-Y`$ $`=`$ $`\alpha \beta {\displaystyle \frac{\gamma -\alpha }{\alpha +(k-1)\gamma }}`$
$`X+(k-1)Y`$ $`=`$ $`-\alpha \beta `$
The first and second eigenvalues above can not be negative simultaneously, thus there are no stable solutions with $`1<k<n`$.
Finally, for $`k=n`$ we have the same eigenvalues as for $`1<k<n`$, except that the eigenvalue $`Z`$ is now absent. As a result, the solution with $`k=n`$ is stable provided that $`\gamma <\alpha `$.
To summarize, the only stable solutions are
$`f_i=\{\begin{array}{cc}\frac{\alpha \beta }{\alpha +(n-1)\gamma }& \text{if }\gamma <\alpha \hfill \\ \beta \delta _{ik}& \text{if }\gamma >\alpha \hfill \end{array}`$ (28)
That is, the winner-take-all dynamics observed for two sites persists independently of the number of competitors involved, at least in an idealized symmetric configuration. In the next section, we consider the dynamics for large systems in which the parameter values are drawn from a random distribution.
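As a consistency check on the eigenvalue formulas, the following sketch (an added illustration with the assumed values $`n=6`$, $`k=3`$, $`\gamma =0.7`$) compares the closed-form eigenvalues $`X-Y`$, $`X+(k-1)Y`$ and $`Z`$ with those of a numerically differentiated Jacobian:

```python
import numpy as np

n, k, alpha, beta, gamma = 6, 3, 1.0, 1.0, 0.7
fstar = alpha * beta / (alpha + (k - 1) * gamma)
f0 = np.array([fstar] * k + [0.0] * (n - k))

def F(f):
    # vector field f_i * (alpha*beta - alpha*f_i - gamma * sum_{j != i} f_j)
    return f * (alpha * beta - alpha * f - gamma * (f.sum() - f))

eps = 1e-6
J = np.array([(F(f0 + eps * np.eye(n)[i]) - F(f0 - eps * np.eye(n)[i])) / (2 * eps)
              for i in range(n)]).T          # column i holds dF/df_i

X = alpha * beta - (2 * alpha + (k - 1) * gamma) * fstar
Y = -gamma * fstar
Z = alpha * beta - k * gamma * fstar
print(sorted(np.linalg.eigvals(J).real))
print(sorted([Z] * (n - k) + [X - Y] * (k - 1) + [X + (k - 1) * Y]))
```

The two lists agree; since here $`\gamma <\alpha `$, one finds $`Z>0`$, confirming the instability of the mixed equilibria with $`1<k<n`$.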
### 5.2 Critical dynamics
In the most general case, the dynamical change in the fraction of visitors to web sites can be determined by numerically solving the general equations of our model. In addition, provided the number of sites $`n`$ remains small one can check each of the $`2^n`$ candidate equilibria for stability and verify whether the numerical simulation converged to the only equilibrium or missed an existing but hard to reach equilibrium.
In Figure 4, we show the time evolution of $`f_i`$ for sixteen web sites, obtained by numerically integrating the equations using a Runge-Kutta scheme. The parameters defining the competitive strength between sites, $`\gamma _{ij}`$, were randomly chosen from a Gaussian distribution with a standard deviation of $`0.1`$, and a fixed mean $`\overline{\gamma }`$. On the left panel we exhibit a solution for $`\overline{\gamma }=0.5`$, far below the transition point. On the right panel, $`\overline{\gamma }=1.5`$ places us well above the transition, and we observe the evolution towards a winner-take-all market. Whereas below the transition the equilibrium has all sixteen competitors sharing the market, above it one web site takes all visitors.
Given the fixed set of parameters for the model, it is possible to diagonalize the Jacobian, evaluated at each of the $`2^n=2^{16}`$ fixed points. As in the symmetric case, or for the general two-site case, only one equilibrium is stable when $`\overline{\gamma }\ll 1`$. In addition, the values of the $`f_i`$ at the single stable equilibrium found in this manner match the values that the numerical simulation converges to.
When $`\overline{\gamma }\gg 1`$, numerically diagonalizing the $`2^n`$ Jacobians shows that the $`n`$ equilibria of the form $`f_i=\delta _{ik}`$, and no others, are stable. Thus the transition to a winner-take-all market subsists even when the parameters come from a randomized distribution.
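For small $`n`$ the exhaustive check is easy to reproduce. The sketch below (our illustration, with assumed values $`n=6`$, $`\overline{\gamma }=1.5`$ and a fixed random seed) solves for the candidate equilibrium on each of the $`2^n`$ supports and keeps those whose Jacobian eigenvalues all have negative real part:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, alpha, beta, gbar = 6, 1.0, 1.0, 1.5
gamma = rng.normal(gbar, 0.1, (n, n))
np.fill_diagonal(gamma, 0.0)

stable = []
for support in itertools.product([0, 1], repeat=n):
    S = [i for i in range(n) if support[i]]
    f = np.zeros(n)
    if S:  # on the support: alpha*f_i + sum_{j in S, j != i} gamma_ij f_j = alpha*beta
        A = alpha * np.eye(len(S)) + gamma[np.ix_(S, S)]
        f[S] = np.linalg.solve(A, np.full(len(S), alpha * beta))
    if np.any(f < -1e-9):
        continue                             # unreachable: negative market share
    growth = alpha * beta - alpha * f - gamma @ f
    J = np.diag(growth - alpha * f) - f[:, None] * gamma   # Jacobian at f
    if np.all(np.linalg.eigvals(J).real < -1e-9):
        stable.append(np.round(f, 3))
print(len(stable))
for f in stable:
    print(f)
```

With $`\overline{\gamma }`$ well above 1 this prints the $`n`$ winner-take-all equilibria and nothing else; with $`\overline{\gamma }`$ well below 1 only the single shared-market equilibrium survives.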
A more interesting situation is posed by the dynamics of competition when the competitive strength approaches the critical value, $`\overline{\gamma }\approx 1`$. Since near the transition point the largest eigenvalue has an absolute value very close to zero, the transients to equilibrium are very long. Moreover, the nature of the transients is such that many sites alternate in their market dominance for long periods of time.
Numerical diagonalization of all the possible Jacobians near criticality shows that frequently there are several stable equilibria, in which some sites have non-zero $`f_i`$, and some sites don’t. However, these solutions typically are not reached in a finite amount of time (if at all) when numerically integrating the equations. Furthermore, numerical integration, as shown in Figure 5, suggests that for this range of parameters the dynamics are chaotic (small differences in the initial conditions lead to diverging trajectories). For some initial conditions the system may converge to limit cycles, rather than to static equilibria. Thus, when the parameter values of $`\gamma _{ij}`$ are drawn from a distribution, the transition is not sudden in $`\overline{\gamma }`$. There is a range of values of $`\overline{\gamma }`$ for which the dynamics are more complicated.
For much larger $`n`$, it is no longer possible to verify every single candidate fixed point for stability. However, it is still possible to numerically integrate the equations. Either way, as the example in Figure 5 shows, the question of existence and stability of the equilibria is irrelevant if the stable equilibria are never reached, or only reached after an unreasonable amount of time.
## 6 Conclusion
In this paper we have shown that under general conditions, as the competition between web sites increases, there is a sudden transition from a regime in which many sites thrive simultaneously, to a “winner-take-all market” in which a few sites grab almost all the users, while most other sites go nearly extinct, in agreement with the observed nature of electronic markets. This transition is the result of a nonlinear interaction among sites which effectively reduces the growth rate of a given site due to competitive pressures from the others. Without the interaction term, web sites would grow exponentially fast to a saturation level that depends on their characteristic properties.
Moreover, we have shown that the transition into a winner-take-all market occurs under very general conditions and for very many sites. In the limiting case of two sites, the phenomenon is reminiscent of the “Principle of Mutual Exclusion” in ecology , in which two predators of the same prey cannot coexist in equilibrium when competitive predation is very strong.
Smith et al. attribute the price dispersion of goods sold online to several features of web sites: differences in branding and trust, in the appearance and in the quality of the search tools, switching costs between sites, and, last but not least, retailer awareness. A winner-take-all economy may thus have strong consequences for price dispersion, since a few sites can charge more by virtue of dominating the mind share of their customers.
It is interesting to speculate about the applicability of this model to different markets. We motivated the model for a massless Internet economy, in which demand can be instantly satisfied by supply at a negligible cost to the supplier, and in which competition does not occur on the basis of cost, but rather on advertising and differentiation in the services provided by the web sites. However, since winner-take-all markets are being observed in a much broader range of markets, it might well be the case that the sudden transition to winner-take-all behavior is a feature of these markets as well.
# A search for Jovian-mass planets around CM Draconis using eclipse minima timing
## 1 Introduction
It has long been known that the presence of a third body orbiting both components of an eclipsing binary system will offset the binary from a common binary/third-mass barycenter thereby causing a periodic shift in the observed times of the binary eclipses. The amplitude of this shift is given by
$$\delta T=M_\mathrm{P}a_{\parallel }/M_\mathrm{B}c,$$
where $`M_\mathrm{P}`$ is the third body’s mass, $`a_{\parallel }=a\mathrm{sin}i`$ the third body’s semi-major axis along the line of sight, $`M_\mathrm{B}`$ the mass of the binary system, and $`c`$ the speed of light. As pointed out by Schneider & Doyle (1995) and Doyle et al. (1998), eclipse timings with a precision of a few seconds could detect the presence of an orbiting Jovian-mass object around a low-mass eclipsing binary system. Also, Doyle et al. (1998) gives a sample of 250 eclipsing binaries for which Jupiter-mass planets may be detectable by such studies. In this paper we report on an analysis of eclipse timings that were obtained as part of a photometric search for extrasolar planetary transits undertaken during the six years 1994 - 1999 around the M4.5/M4.5 binary CM Dra by the TEP project (Deeg et al. 1998a, hereafter TEP1; Doyle et al. 2000, hereafter TEP2).
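To fix the scale of the effect, the following sketch evaluates the amplitude formula; the adopted total mass of CM Dra ($`0.44M_{\odot }`$) is a round-number assumption inserted here for illustration, not a value quoted in this paper:

```python
# delta T = M_P * a_parallel / (M_B * c), evaluated for two illustrative cases
M_sun = 1.989e30   # kg
M_jup = 1.898e27   # kg
AU    = 1.496e11   # m
c     = 2.998e8    # m/s

M_B = 0.44 * M_sun  # assumed total mass of the CM Dra binary
for M_P_jup, a_AU in [(1.0, 1.0), (2.0, 1.3)]:
    dT = (M_P_jup * M_jup) * (a_AU * AU) / (M_B * c)
    print(f"M_P = {M_P_jup} M_Jup at a = {a_AU} AU  ->  delta T = {dT:.2f} s")
```

A Jupiter-mass companion at 1 AU thus shifts the eclipse times by roughly one second, which sets the timing precision required for a detection.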
## 2 Data and Analysis
Eclipse minimum times were obtained from photometric time series data of CM Dra; the photometric reduction pipeline is described in TEP1. Photometric data included in this analysis have a maximum photometric rms error of 0.7% and were corrected for nightly extinction variations. Eclipses of CM Dra were extracted from the data with a cut-off of $`\mathrm{\Delta }m>0.1\mathrm{mag}`$ from the off-eclipse baseline. The eclipse minimum times were then measured with a 7-segment Kwee-van Woerden algorithm (Kwee & van Woerden 1956), and converted to heliocentric Julian dates with the ‘setjd’ routine in IRAF. The entire lightcurve with 1014 hours of coverage, taken by all telescopes of the TEP project between 1994 and 1999, contains 81 eclipses for which O-C times were measurable. For further analysis, however, we selected only data from a subset of telescopes which delivered the most consistent results for timing. These were the Crossley telescope at Lick Observatory, the JKT, INT and IAC80 telescopes at the Instituto de Astrofísica de Canarias, the 0.6m at Kourovka Observatory and the 1.2m of the Observatoire de Haute Provence. Inconsistent minimum times from the rejected telescopes were most likely caused by imprecise recording of the time, which depends in most systems on the computer that archives the data. We also excluded the eclipses observed by Lacy (1977), whose two primary eclipses had discrepancies of 25 seconds between them (based on a re-analysis of Lacy’s data, see TEP1). From the remaining data only those minimum times were kept where the lightcurve covered the entire ingress and egress of each eclipse without significant ‘holes’, and where the formal error given by the Kwee-van Woerden algorithm was less than 10 seconds. The resulting sample, to be further investigated, contains minima timings of 16 primary and 25 secondary eclipses.
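For readers unfamiliar with the timing technique, the following is a deliberately simplified sketch of a Kwee-van Woerden-style minimum determination on synthetic data; the equally spaced sampling, the symmetric window and the simple parabola fit are assumptions of this illustration and do not describe the actual pipeline:

```python
import numpy as np

def kvw_minimum(t, m, n_half_steps=8):
    """Fold the light curve about trial mirror times near the deepest point and
    fit a parabola to the branch-mismatch statistic s(T); the vertex of the
    parabola is the estimated time of minimum."""
    dt = np.median(np.diff(t))
    t0 = t[np.argmax(m)]                     # deepest point (m = magnitude offset)
    trials = t0 + np.arange(-n_half_steps, n_half_steps + 1) * 0.5 * dt
    s = [np.mean((m - np.interp(2 * T - t, t, m)) ** 2) for T in trials]
    a, b, _ = np.polyfit(trials, s, 2)
    return -b / (2 * a)

# synthetic V-shaped eclipse with noise; the true minimum is at t = 0.013
rng = np.random.default_rng(1)
t = np.linspace(-0.05, 0.05, 201)
m = 0.4 * np.maximum(0.0, 1 - np.abs(t - 0.013) / 0.03) + rng.normal(0, 0.005, t.size)
print(kvw_minimum(t, m))
```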
## 3 Results
In Fig. 1, the O-C (observed - computed) minimum times of these 41 eclipses are plotted. The computed minimum times $`T_n`$ are based on the linear ephemeris given by TEP1, where $`T_n=T_\mathrm{o}+nP_{\mathrm{orb}}`$, with a period of $`P_{\mathrm{orb}}=1.268389861\pm 0.000000005`$ days, an epoch of primary eclipses of $`T_\mathrm{o}=\mathrm{HJD}\,2449830.75700\pm 0.00001`$, and an epoch of secondary eclipses of $`T_\mathrm{o}=\mathrm{HJD}\,2449831.39003\pm 0.00001`$. This ephemeris was derived from eclipses measured between 1994 and 1996. The eclipses observed afterwards, in 1997-1999, do not exhibit any trends away from that ephemeris, and therefore no attempts have been made to derive a new one. The standard deviation of all O-C times from 1994-1999 against the elements listed above is 5.87 s for primary, 5.47 s for secondary, and 5.74 s for both eclipses together.
To evaluate the minimum times for the presence of periodicities, we performed a power spectral analysis using the method of sine-wave fitting common in solar oscillation studies (Kjeldsen & Frandsen (1992)). In this method, sine waves with increasing periods are fitted to the O-C values, using amplitude and phase as fitting parameters. (This is identical to fitting a sinusoidal ephemeris, $`T_n=T_\mathrm{o}+nP_{\mathrm{orb}}+A\mathrm{sin}[2\pi (t\tau )/P+\kappa ]`$ with stepwise increasing periods $`P`$, and recording amplitude $`A`$ and phase $`\kappa `$ of the best fit.) This method has the advantage over the Lomb periodogram spectral analysis (Lomb (1976), Press (1992)) that amplitudes are derived with an absolute scale, whereas the Lomb method derives only relative amplitudes.
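A minimal least-squares version of this sine-wave fitting can be sketched as follows; the toy data set (41 points over 1879 days, 2 s of white noise and a 2.8 s signal at 970 d) is chosen to mimic the numbers quoted below but is otherwise an assumption of the illustration:

```python
import numpy as np

def sine_fit_spectrum(t, oc, periods):
    """For each trial period fit oc = A sin(wt) + B cos(wt) + C by least squares
    and return the absolute amplitude sqrt(A^2 + B^2) and the phase."""
    amps, phases = [], []
    for P in periods:
        w = 2 * np.pi / P
        M = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
        (A, B, C), *_ = np.linalg.lstsq(M, oc, rcond=None)
        amps.append(np.hypot(A, B))
        phases.append(np.arctan2(B, A))
    return np.array(amps), np.array(phases)

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1879.0, 41))            # epochs in days
oc = 2.8 * np.sin(2 * np.pi * t / 970.0) + rng.normal(0.0, 2.0, t.size)
periods = np.linspace(50, 2000, 400)
amps, _ = sine_fit_spectrum(t, oc, periods)
print(periods[np.argmax(amps)], amps.max())
```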
The power spectra (Fig. 2) were obtained separately for primary and secondary eclipses, as well as for both eclipses combined. As can be seen, the highest peaks have amplitudes around 4 seconds. The only notable feature is a peak between 750 and 1050 days, occurring in both kinds of eclipses. This is the only feature significantly above 2 seconds amplitude in the power diagram of primary and secondary eclipses combined (Fig. 2c). It has a maximum amplitude of 2.8 seconds at a period of 970 days. Also, the phases of the power spectra (Fig. 3) are close for primary and secondary eclipses in that period range, being identical at a period of 890 days. Above periodicities of 2000 days, there is a smooth decay of spectral power, which is a consequence of the length of coverage of our data - the distance between the first and the last eclipse in the data set is 1879 days.
## 4 Discussion
The power spectra (Fig. 2) indicate that there are no periodic O-C minimum time variations with amplitudes larger than 3-4 seconds present, for all periods less than 2000 days. This absence of amplitude variations allows us to *exclude* the presence of very massive planets around the CM Dra system, as indicated by the hatched region in the search-space diagram (Fig. 4). Excluding the peak around 1000 days period, the power spectrum from all data (lower panel of Fig. 2) is relatively flat, with an amplitude of about 2 seconds. This white-noise like flatness indicates an intrinsic imprecision in our data of about 2 seconds. This is most likely the result of the precision of the eclipse minima times being limited by the photometric noise of the eclipse lightcurves. O-C deviations of about 2 seconds therefore constitute a lower detection limit. Finally, the peak in the power spectra between 750 and 1100 days with an amplitude of $`2.5\pm 0.5`$ seconds and a good match of phases from primary and secondary eclipses may be the consequence of a third body, but is close to the observational noise. If this amplitude variation is caused by a third body, it would correspond to a circumbinary planet of 1.5-3 Jupiter masses at an orbital distance from CM Dra of 1.1 - 1.45 AU. We note that such a body would cause a periodic variation in the radial velocity of CM Dra with an amplitude of $`65\pm 20\mathrm{m}\mathrm{s}^{-1}`$. Though sufficient precision to detect such radial velocity amplitudes has routinely been obtained in planetary detection programs, these programs are always concerned with single stars. For eclipsing binaries, the mutual orbiting of the binary components causes large radial velocity amplitudes on the order of km/s, which obstruct the separation of the much smaller radial velocity amplitudes from a third body. In the case of CM Dra, the velocities of the binary components reported by Metcalfe et al. (1996) are 72 and 78 km/s, and the precision of these data would only allow the separation of third body amplitudes of more than 200 m/s (Latham 2000). Finally, the limited time-baseline of our observations does not allow the detection of periodicities longer than about 2000 days. The absence of very heavy third bodies with periods up to a few times longer is however rather certain due to the good general adherence of the O-C times to a linear ephemeris. Influences from third bodies within the Solar System on the heliocentric eclipse minimum times are of no consequence. The strongest influence, by Jupiter, causes a 12-yearly deviation, but due to the high ecliptic latitude of CM Dra ($`76.3\mathrm{deg}`$) its amplitude is limited to 0.58 seconds.
The *absence* of periodicities above 3-4 seconds amplitude - and the exclusion of corresponding massive planets - may be stated with certainty, even if the data analysis may not have accounted for every factor that may introduce spurious periodicities. The claim by Guinan et al. (1998) of a periodicity in minimum times of 70 days with an amplitude of 18 seconds, corresponding to a third body with a mass of $`0.01M_{\odot }`$, is clearly invalidated (see also Deeg et al. 1998b).
The *presence* of apparent periodicities with $`\sim 3`$ seconds amplitude may however also be a consequence of slowly changing starspots which distort the symmetry of the eclipses. Although we have not been able to find any relevant variations in lightcurves of CM Dra through the different observing seasons 1994-1999, the possibility of starspots cannot be entirely excluded. In any case, further monitoring with high precision minimum timing of the CM Dra system is needed to ascertain the continuing presence of the 700-1050 day periodicity.
Fig. 4 shows the search space of exoplanets around CM Dra covered by the eclipse timing observations described here and by the observations of transits from TEP1 and TEP2. The transit observations covered coplanar planets ($`\mathrm{sin}i\approx 1`$) on short period orbits, between 7 days (the shortest stable orbit around CM Dra) and 60 days (as a limit where observational coverage gets sparse), with a maximum detectable periodicity of 100 days (the limit where even coplanar planets would not cause transits because of the $`89.82\mathrm{°}`$ inclination of the system). We assumed a mass limit of $`m/m_{\mathrm{Earth}}\gtrsim 10`$, corresponding to the lower size limit of about 2.5 Earth Radii for detections with transits. The lower mass limit from the O-C timing method is derived from the absence of amplitudes over 2.5 seconds, except between 700 and 1050 days, where a planet candidate is indicated.
The two methods employed do cover rather complementary regimes: whereas the strength of the transit method is the detection of relatively small planets on close orbits, O-C minimum timing is best for the detection of long period planets with at least Jupiter-like masses. The usefulness of the radial velocity method is limited in binary systems, though it might also lead to the discovery of massive third bodies around them. To verify the persistence of the 700-1050 day periodicity and the possibility of a planet, observations of CM Dra’s eclipse minimum times need to be continued during the next several years.
## 5 Acknowledgments
We thank R. Garrido, A. Gimenez and the anonymous referee for helpful suggestions and Ayvur Akalin for help with the Kwee-van Woerden algorithm.
# Dislocation scattering in a two dimensional electron gas
## Acknowledgments
The authors would like to acknowledge helpful discussions with B. Heying, C. Elsass, I. Smorchkova, P. Chavarkar, J. Singh, and J. Speck.
# Generic Jumps of Fredholm Indices and the Quantum Hall Effect
## 1 Introduction and Motivation
Suppose one interpolates between Fredholm operators with different indices. What can one say about the way the indices change? The answer to this question depends on the choice of the embedding space for the Fredholm operators in question. In the space of bounded operators, little can be said. But, in the space of Toeplitz operators (and then also for Toeplitz operators modulo compacts), as we shall explain, the indices change by abrupt discontinuous jumps that tend to be small. We relate this behavior to certain conjectures and open problems that arise in the context of the Quantum Hall Effect (QHE) \[Sto\].
### 1.1 Physical background
In the theory of the integer quantum Hall effect (of non-interacting electrons) \[BvES, ASS\] one identifies the Hall conductance with the Fredholm index of a rather special operator, namely $`PUP`$, thought of as an operator on the range of $`P`$. Here $`P=P(E)`$ is an (infinite dimensional) projection in the Hilbert space $`L^2(\mathbb{C})`$, namely the projection on the spectrum of the one electron Hamiltonian below the Fermi energy $`E`$. $`U`$ is the multiplication operator $`\frac{z}{|z|}`$ associated with a singular gauge transformation that introduces an Aharonov-Bohm flux tube at the origin of the Euclidean plane. $`PUP`$ is Fredholm provided the integral kernel of the projection, $`p(z,z^{\prime };E)`$, has good decay properties as $`|z-z^{\prime }|`$ gets large \[ASS\].
Recent progress in the rigorous theory of random Schrödinger operators relevant to the QHE \[Aiz\] guarantees good decay properties for $`p(z,z^{\prime };E)`$ provided $`E`$ lies in certain energy intervals. Percolation arguments \[Tru\] and scaling theories of localization \[Khm\] give theoretical evidence that these decay properties persist for all but a discrete set of energies. This implies that the graph of the Hall conductance as a function of $`E`$ should be a step function. Indeed, experimentally, the Hall conductance in the integer Hall effect is close to a monotonic step function with $`\pm 1`$ and $`\pm 2`$ jumps \[Lau\]. (Jumps by 2 occur when the Hall conductance is larger than 6; this is attributed to the smallness of the magnetic moment of the electron in these systems.)
The smallness of the jumps of the Fredholm indices in the QHE might, of course, be a special property of a special system. Here, instead, we want to explore the opposite point of view, namely the possibility that the existence of steps and the smallness of the jumps reflects a generic property of Fredholm indices and has little to do with the specific properties of the system in question.
Some support to this point of view comes from the relation of Chern numbers and Fredholm indices. In non-commutative geometry \[Con\] Chern numbers and Fredholm indices are intimately related. This is also the case in the index theory of elliptic operators \[Ati\]. For Chern numbers that arise from studies of spectral bundles (of Hamiltonians with discrete spectra), a generic deformation of the Hamiltonian leads to a step function with $`\pm 1`$ jumps in the first Chern number \[Sim\]. This follows from the Wigner von Neumann codimension 3 rule for eigenvalue crossing \[vNW\] and the fact that a generic crossing is a conic crossing and is not system specific.
As far as the QHE goes, one might argue that since the Hall conductance can be directly related to a Chern number \[Sto, TKNN\], the genericity of small jumps follows immediately. The difficulty with this argument has to do with the thermodynamic limit. Normally, the QHE is associated with large systems. The genericity result quoted above for Chern numbers is for operators with discrete spectrum. This is the case for finite systems, but is in general not the case for extended systems, and in particular does not apply to models of the quantum Hall effect. The main attractive feature of the Fredholm approach to the Hall effect is that it is phrased directly in the thermodynamic limit.
Another way of phrasing the main theme of this paper is: What, if any, is the analog for Fredholm operators of the genericity of small jumps in Chern numbers?
### 1.2 The mathematical problem
We wish to interpolate between two (or more) Fredholm operators. If the indices of these operators are different this cannot be done within the space of Fredholm operators. At some points in the interpolation the Fredholm property will be lost and the index will be ill defined. For “generic” interpolations, what is the nature of this bad set? Near such a bad point, how big a range of indices can be found?
Working in the space of bounded operators, little can be said. The space is simply too large, and when the Fredholm property is lost we lose all analytic control. However, in the space of sufficiently smooth Toeplitz operators interesting results can be obtained. In systems without symmetry, we find the following behavior: Almost every operator is Fredholm, and sets of codimension $`n`$ appear as boundaries between regions of Fredholm operators whose indices differ by $`n`$. We speak simply of the index “jumping by $`n`$” on a set of codimension $`n`$.
In systems with a $`Z_2`$ symmetry (e.g. time reversal symmetry or complex conjugation symmetry), sets of codimension $`n`$ appear as common boundaries of regions of Fredholm operators whose indices differ by as much as $`2n`$. That is, the index can jump by as much as $`2n`$ on a set of codimension $`n`$.
## 2 Basic Definitions and Properties
We review here the basic definitions and properties of Fredholm operators on separable Hilbert spaces. For a more complete treatment see \[Dou\].
###### Definition 1
A bounded operator $`A`$ on a separable Hilbert space is Fredholm if there exists another bounded operator $`B`$ such that $`1-AB`$ and $`1-BA`$ are compact.
In particular, the kernel and cokernel of $`A`$ are finite dimensional, and we define
###### Definition 2
The index of a Fredholm operator $`F`$ is
$$Index(F)=dimKer(F)-dimKer(F^{*}).$$
(1)
Fredholm operators are stable under compact perturbations and under small bounded perturbations. That is, if $`A`$ is Fredholm, there exists an $`ϵ>0`$ such that, for any bounded operator $`B`$ with operator norm $`\Vert B\Vert <ϵ`$ and for any compact operator $`K`$, the operator $`A+B+K`$ is Fredholm with the same index as $`A`$.
The simplest example of a Fredholm operator with nonzero index is the shift operator. Let $`e_0,e_1,e_2,\dots `$ be an orthonormal basis for a Hilbert space, and let the operator $`a`$ act by
$$a(e_n)=\{\begin{array}{cc}e_{n-1}\hfill & \text{if }n>0\hfill \\ 0\hfill & \text{if }n=0\hfill \end{array}.$$
(2)
The adjoint of $`a`$ acts by
$$a^{*}(e_n)=e_{n+1}$$
(3)
Since $`aa^{*}=a^{*}a+|e_0\rangle \langle e_0|`$ is the identity, $`a`$ is Fredholm. The kernel of $`a`$ is 1-dimensional. The cokernel of $`a`$, which is the same as the kernel of $`a^{*}`$, is 0 dimensional. Thus the index of $`a`$ is 1. Similarly, $`a^{*}`$ is Fredholm with index $`-1`$.
The following theorem is standard:
###### Theorem 1
If $`A_1,\dots ,A_n`$ are Fredholm operators, then the product $`A_1A_2\cdots A_n`$ is also Fredholm, and $`Index(A_1\cdots A_n)=\sum _{i=1}^nIndex(A_i)`$.
Finally we consider connectedness in the space of Fredholm operators. If $`A`$ and $`A^{\prime }`$ are Fredholm operators on the same Hilbert space, then there is a continuous path of Fredholm operators from $`A`$ to $`A^{\prime }`$ if and only if $`Index(A)=Index(A^{\prime })`$. (By continuous, we mean relative to the operator norm). Put another way, the path components of $`Fred(H)`$, the space of Fredholm operators on $`H`$, are indexed (pun intended) by the integers. The $`n`$-th path component is precisely the set of Fredholm operators of index $`n`$ \[Dou\].
## 3 Fredholm Operators in the Space of Bounded Operators
The most natural setting for our problem is consider arbitrary bounded operators, with the topology defined by the operator norm. We ask how many parameters must be varied in order to reach the common boundary of two regions, whose indices differ by $`k`$. Unfortunately, the answer is independent of $`k`$:
###### Theorem 2
Let $`U_n`$ be the set of Fredholm operators of index $`n`$. Every point on the boundary of $`U_n`$ is also on the boundary of $`U_m`$, for every integer $`m`$.
Proof: Let $`A`$ be a (not Fredholm) operator on the boundary of $`U_n`$. Given $`ϵ>0`$, we must find an operator in $`U_m`$ within a distance $`ϵ`$ of $`A`$.
Suppose that the kernel and cokernel of $`A`$ are infinite dimensional, and that there is a gap in the spectrum of $`A^{*}A`$ at zero. (If this is not the case, we may perturb $`A`$ by an arbitrarily small amount to make it so). Now let $`B`$ be a unitary map from the kernel of $`A`$ to the cokernel. Let $`P`$ ($`P^{\prime }`$) be the orthogonal projection onto $`ker(A)`$ ($`coker(A)`$), and let $`a`$ be a shift operator on $`ker(A)`$. For each $`m\ge 0`$, $`A(ϵ)=A+ϵBa^mP`$ has a bounded right inverse
$$A^{*}\frac{1}{P^{\prime }+AA^{*}}P_{\perp }^{\prime }+\frac{1}{ϵ}(a^{*})^mB^{*}P^{\prime }.$$
(4)
It follows that the cokernel of $`A(ϵ)`$ is empty. It is easy to see that the kernel of $`A(ϵ)`$ is $`m`$ dimensional, hence $`Index(A(ϵ))=m`$. Similarly, $`A+ϵB(a^{*})^mP`$ has index $`-m`$.
This theorem tells us that, in the space of all bounded operators there is no specific notion of being at a transition point from index $`n`$ to index $`m`$. As long as an operator stays Fredholm, its index cannot change, and when it fails to be Fredholm it can change into anything.
To achieve useful results, we must work on a smaller space.
## 4 Linear Combinations of Shifts
In this section and the next we show that “generic” behavior is indeed achieved in some finite dimensional spaces, and in some infinite-dimensional spaces with sufficiently fine topologies. We see also how control is lost as the space is enlarged and the topology is coarsened.
### 4.1 Shift by one
We begin by considering linear combinations of the shift operator $`a`$ and the identity operator 1. That is, we consider the operator
$$A=c_1a+c_0$$
where $`c_1`$ and $`c_0`$ are constants.
###### Theorem 3
If $`|c_1|\ne |c_0|`$, then $`A`$ is Fredholm. The index of $`A`$ is 1 if $`|c_1|>|c_0|`$ and zero if $`|c_1|<|c_0|`$. If $`|c_1|=|c_0|`$, then $`A`$ is not Fredholm.
Proof: First suppose $`|c_0|>|c_1|`$. Then $`A`$ is invertible:
$$A^{-1}=c_0^{-1}(1+(c_1/c_0)a)^{-1}=\sum _{n=0}^{\infty }\frac{(-1)^nc_1^n}{c_0^{n+1}}a^n,$$
as the sum converges absolutely. Thus $`A`$ has neither kernel nor cokernel, and has index zero.
If $`|c_1|>|c_0|`$, then the kernel of $`A`$ is 1-dimensional, namely all multiples of $`|\psi \rangle =\sum _{n=0}^{\infty }z_0^ne_n`$, where $`z_0=-c_0/c_1`$. Notice how the norm of $`|\psi \rangle `$ goes to infinity as $`|z_0|\to 1`$. However, $`A^{*}`$ has no kernel, since for any unit vector $`|\varphi \rangle `$, $`\Vert A^{*}|\varphi \rangle \Vert =\Vert \overline{c}_1a^{*}|\varphi \rangle +\overline{c}_0|\varphi \rangle \Vert \ge \Vert \overline{c}_1a^{*}|\varphi \rangle \Vert -\Vert \overline{c}_0|\varphi \rangle \Vert =|c_1|-|c_0|>0`$. Thus the index of $`A`$ is 1.
If $`|c_1|=|c_0|`$, then $`A`$ is at the boundary between index 1 and index 0, and so cannot be Fredholm.
### 4.2 Finite linear combinations of shifts
Next we consider linear combinations of $`1,a,a^2,\dots `$ up to some fixed $`a^n`$. That is, we consider operators of the form
$$A=c_na^n+c_{n-1}a^{n-1}+\cdots +c_0.$$
(5)
This is closely related to the polynomial
$$p(z)=c_nz^n+\cdots +c_0.$$
(6)
###### Theorem 4
If none of the roots of $`p`$ lie on the unit circle, then $`A`$ is Fredholm, and the index of $`A`$ equals the number of roots of $`p`$ inside the unit circle, counted with multiplicity. If any of the roots of $`p`$ lie on the unit circle, then $`A`$ is not Fredholm.
Proof: The polynomial $`p(z)`$ factorizes as $`p(z)=c_k\prod _{i=1}^k(z-\zeta _i)`$, where $`k`$ is the degree of $`p`$ (typically $`k=n`$, but it may happen that $`c_n=0`$). But then $`A=c_k\prod _{i=1}^k(a-\zeta _i)`$. If none of the roots $`\zeta _i`$ lie on the unit circle, then each term in the product is Fredholm, so the product is Fredholm, and the index of the product is the sum of the indices of the factors. By Theorem 3, this exactly equals the number of roots $`\zeta _i`$ inside the unit circle.
If any of the roots lie on the unit circle, then a small perturbation can push those roots in or out, yielding Fredholm operators with different indices. This borderline operator therefore cannot be Fredholm.
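Theorem 4 reduces the index computation to root counting, which is straightforward to do numerically. The sketch below is an illustration we add here; operators whose polynomial has a root numerically on the unit circle are flagged as non-Fredholm:

```python
import numpy as np

def toeplitz_index(coeffs, tol=1e-9):
    """coeffs = [c_0, c_1, ..., c_n] for A = c_0 + c_1 a + ... + c_n a^n.
    Returns the index of A, or None if A is not Fredholm."""
    roots = np.roots(coeffs[::-1])               # np.roots wants c_n first
    if np.any(np.abs(np.abs(roots) - 1.0) < tol):
        return None                              # a root on the unit circle
    return int(np.sum(np.abs(roots) < 1.0))      # roots inside the circle

print(toeplitz_index([0.5, 1.0]))        # a + 0.5: root at -0.5, index 1
print(toeplitz_index([2.0, 1.0]))        # a + 2: root at -2, index 0
print(toeplitz_index([0.25, 1.0, 1.0]))  # (a + 0.5)^2: double root, index 2
```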
The last theorem easily generalizes to linear combinations of left-shifts and right-shifts. The index of an operator
$$A=c_na^n+\cdots +c_1a+c_0+c_{-1}a^{*}+\cdots +c_{-m}(a^{*})^m$$
(7)
equals the number of roots of
$$p(z)=\sum _{i=-m}^{n}c_iz^i$$
(8)
inside the unit circle, minus the degree of the pole at $`z=0`$ (that is $`m`$, unless $`c_{-m}=0`$). This follows from the fact that
$$A=\left(\sum _{i=-m}^{n}c_ia^{i+m}\right)(a^{*})^m.$$
(9)
Since there is no qualitative difference between combinations of left-shifts and combinations of both left- and right-shifts, we restrict our attention to left-shifts only, and consider families of operators of the form (5).
###### Theorem 5
In the space of complex linear combinations of 1, $`a`$, …, $`a^n`$, almost every operator is Fredholm. For every $`k\le n`$, the points where the index can jump by $`k`$ (by which we mean the common boundaries of regions of Fredholm operators whose indices differ by $`k`$) is a set of real codimension $`k`$.
In the space of real linear combinations of 1, $`a`$, …, $`a^n`$, almost every operator is Fredholm. For every $`k\le n`$, the points where the index jumps by $`k`$ is a stratified space, the largest stratum of which has real codimension $`\lfloor (k+1)/2\rfloor `$, where $`\lfloor x\rfloor `$ denotes the integer part of $`x`$.
Proof: Our parameter space is the space of coefficients $`c_i`$, or equivalently the space of polynomials of degree $`n`$. This is either $`\mathbb{R}^{n+1}`$ or $`\mathbb{C}^{n+1}`$, depending on whether we allow real or complex coefficients. In either case, the set $`U_k`$ of Fredholm operators of index $`k`$ is identical to the set of polynomials with $`k`$ roots inside the unit circle and the remaining $`n-k`$ roots outside (if $`c_n=0`$, we say there is a root at infinity; if $`c_n=c_{n-1}=0`$, there is a double root at infinity, and so on. Counting these roots at infinity, there are always exactly $`n`$ roots in all.) The boundary of $`U_k`$ is the set of polynomials with at most $`k`$ roots inside the unit circle, at most $`n-k`$ outside the unit circle, and at least one root on the unit circle. (Strictly speaking, the zero polynomial is also on this boundary. This is of such high codimension that it has no effect on the phase portrait we are developing.) We consider the common boundary of $`U_k`$ and $`U_{k^{\prime }}`$. If $`k<k^{\prime }`$, a nonvanishing polynomial is on the boundary of both $`U_k`$ and $`U_{k^{\prime }}`$ if it has at most $`k`$ roots inside the unit circle and at most $`n-k^{\prime }`$ roots outside. It must therefore have at least $`k^{\prime }-k`$ roots on the unit circle.
If we are working with complex coefficients, this is a set of codimension $`k^{\prime }-k`$. The roots themselves, together with an overall scale $`c_n`$, can be used to parametrize the space of polynomials. For each root, being on the unit circle is codimension 1, while being inside or outside are open conditions. Since the roots are independent, placing $`k^{\prime }-k`$ roots on the unit circle is codimension $`k^{\prime }-k`$.
If we are working with real coefficients, the roots are not independent, as non-real roots come in complex conjugate pairs. Thus, the common boundary of $`U_k`$ and $`U_{k^{\prime }}`$ breaks into several strata, depending on how many real roots and how many complex conjugate pairs lie on the unit circle. If $`k^{\prime }-k`$ is even, the biggest stratum consists of having $`(k^{\prime }-k)/2`$ pairs, and has codimension $`(k^{\prime }-k)/2`$. If $`k^{\prime }-k`$ is odd, the biggest stratum consists of having $`(k^{\prime }-k-1)/2`$ pairs and one real root on the unit circle, and has codimension $`(k^{\prime }+1-k)/2`$.
Theorem 5 is illustrated in Figure 1, where the phase portrait is shown for $`n=2`$ with real coefficients, with $`c_2`$ fixed to equal 1. The points above the parabola $`c_0=c_1^2/4`$ have complex conjugate roots, while points below have real roots. Notice that the transition from index 2 to index 0 occurs at an isolated point when the roots are real, but on an interval when the roots come in complex-conjugate pairs.
It is clear that an almost identical theorem applies to linear combinations of left-shifts up to $`a^n`$ and right-shifts up to $`(a^{*})^m`$. The results are essentially independent of $`n`$ and $`m`$ (their only effect being to limit the size of possible jumps to $`n+m`$). We can therefore extend the results to the space of all (finite) linear combinations of left- and right-shifts, which is topologized as the union over all $`n`$ and $`m`$ of the spaces considered above. Our result, restated for that space, is
###### Theorem 6
In the space of finite complex linear combinations of left- and right-shifts of arbitrary degree, almost every operator is Fredholm. For every integer $`k\ge 1`$, the points where the index can jump by $`k`$ (by which we mean the common boundaries of regions of Fredholm operators whose indices differ by $`k`$) is a set of real codimension $`k`$.
If we restrict the coefficients to be real, then, for every integer $`k\ge 1`$, the points where the index jumps by $`k`$ is a stratified space, the largest stratum of which has real codimension $`\lfloor (k+1)/2\rfloor `$.
## 5 Toeplitz operators
Although Theorem 6 refers to an infinite-dimensional space, this space is still extremely small – each point is a finite linear combination of shifts. In this section we consider infinite linear combinations of shifts. This is equivalent to studying Toeplitz operators.
###### Definition 3
The Hardy space $`H`$ is the subspace of $`L^2(S^1)`$ consisting of functions whose Fourier transforms have no negative frequency terms. Equivalently, if we give $`L^2(S^1)`$ a basis of Fourier modes $`e_n=e^{in\theta }`$, where the integer $`n`$ ranges from $`-\infty `$ to $`\infty `$, then $`H`$ is the closed linear span of $`e_0,e_1,e_2,\dots `$.
We think of $`S^1`$ as sitting in the complex plane, with $`z=e^{i\theta }`$. Now let $`f(z)`$ be a bounded, measurable function on $`S^1`$, and let $`P`$ be the orthogonal projection from $`L^2(S^1)`$ to $`H`$. If $`|\psi \rangle \in H`$, then $`|f\psi \rangle `$ (pointwise product) is in $`L^2(S^1)`$, and $`P|f\psi \rangle \in H`$. We define the operator $`T_f`$ by
$$T_f|\psi \rangle =P|f\psi \rangle .$$
(10)
###### Definition 4
An operator of the form (10) is called a Toeplitz operator. We call a Toeplitz operator $`T_f`$ continuous if the underlying function $`f`$ is continuous, and apply the terms “differentiable”, “smooth” and “analytic” similarly.
Remark: Toeplitz operators can be represented by semi-infinite matrices that have constant entries on diagonals, and the various classes we have defined correspond to the decay away from the main diagonal.
Notice that
$$T_{e_m}e_n=\{\begin{array}{cc}e_{n+m}\hfill & \text{if }n+m\ge 0\hfill \\ 0\hfill & \text{otherwise}\hfill \end{array}$$
(11)
so $`T_{e_m}`$ is simply a shift by $`m`$, a right shift if $`m>0`$ and a left shift if $`m<0`$. All our results about shifts can therefore be understood in the context of Toeplitz operators. Theorem 5 refers to operators $`T_f`$, where $`f`$ is a polynomial in $`z^{-1}`$ of limited degree. Theorem 6 considers polynomials of arbitrary degree in $`z`$ and $`z^{-1}`$. We will see that the results carry over to analytic functions on an annulus around $`S^1`$, and to a lesser extent to $`C^k`$ Toeplitz operators, but with results that weaken as $`k`$ is decreased.
Here are some standard results about Toeplitz operators. For details, see \[Dou\].
###### Theorem 7
A $`C^1`$ Toeplitz operator $`T_f`$ is Fredholm if and only if $`f`$ is everywhere nonzero on the unit circle. In that case the index of $`T_f`$ is minus the winding number of $`f`$ around the origin, namely
$$Index(T_f)=-Winding(f)=-\frac{1}{2\pi i}\oint _{S^1}\frac{df}{f},$$
(12)
Given the first half of the theorem, the equality of index and winding number is easy to understand. We simply deform $`f`$ to a function of the form $`f(z)=z^n`$, while keeping $`f`$ nonzero on all of $`S^1`$ throughout the deformation (this is always possible, see e.g. \[GuP\]). In the process of deformation, neither the index of $`T_f`$ nor the winding number of $`f`$ can change, as they are topological invariants. Since the winding number of $`z^n`$ is $`n`$, and since $`T_{z^n}=(a^{*})^n`$ (if $`n\ge 0`$, $`a^{-n}`$ otherwise), which has index $`-n`$, the result follows.
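Equation (12) also suggests a direct numerical check: sample $`f`$ around the circle and accumulate the increments of its argument. The following sketch (our illustration; the discretization is an assumption that is adequate when $`f`$ varies slowly between samples) recovers the winding number, and hence minus the index, for a few symbols:

```python
import numpy as np

def winding(f, n_samples=4096):
    """Winding number of f around 0, from summed increments of arg f(e^{i theta})."""
    theta = np.linspace(0.0, 2 * np.pi, n_samples, endpoint=False)
    vals = f(np.exp(1j * theta))
    steps = vals[np.r_[1:n_samples, 0]] / vals   # ratios of consecutive samples
    return int(round(np.angle(steps).sum() / (2 * np.pi)))

print(winding(lambda z: z**3))                  # winding  3 -> Index(T_f) = -3
print(winding(lambda z: z**-2 + 0.1 * z))       # winding -2 -> Index(T_f) =  2
print(winding(lambda z: 2.0 + z + 0.3 * z**2))  # winding  0 -> Index(T_f) =  0
```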
We now consider functions $`f`$ on $`S^1`$ that can be analytically continued (without singularities) to an annulus $`r_0\le |z|\le r_1`$, where the radii $`r_0<1`$ and $`r_1>1`$ are fixed. This is equivalent to requiring that the Fourier coefficients $`\widehat{f}_n`$ decay exponentially fast, i.e. that the sum
$$\sum _{n=-\infty }^{\infty }|\widehat{f}_n|(r_0^n+r_1^n)$$
(13)
converges. For now we do not impose any reality constraints or other symmetries on the coefficients $`\widehat{f}_n`$. This space of functions is a Banach space, with norm given by the sup norm on the annulus. This norm is stronger than any Sobolev norm on the circle itself.
The analysis of the corresponding Toeplitz operators is straightforward and similar to the proof of Theorem 5. Since $`f`$ has no poles in the annulus, we just have to keep track of the zeroes of $`f`$. For the index of $`T_f`$ to change, a zero of $`f`$ must cross the unit circle. For the index to jump from $`k`$ to $`k^{}`$, $`|k-k^{}|`$ zeroes must cross simultaneously. In the absence of symmetry, the locations of the zeroes are independent and can be freely varied, so this is a codimension-$`|k-k^{}|`$ event.
If we impose a reality condition: $`f(\overline{z})=\overline{f(z)}`$, then zeroes appear only on the real axis or in complex conjugate pairs. In that case, changing the index by 2 is merely a codimension-1 event. Combining these observations we obtain
###### Theorem 8
In the space of Toeplitz operators that are analytic in a (fixed) annulus containing $`S^1`$, almost every operator is Fredholm. For every integer $`k\ge 1`$, the points where the index can jump by $`k`$ form a set of real codimension $`k`$.
If we impose a reality condition $`f(\overline{z})=\overline{f(z)}`$ then, for every $`k\ge 1`$, the points where the index jumps by $`k`$ form a stratified space, the largest stratum of which has real codimension $`\lfloor (k+1)/2\rfloor `$.
Finally we consider Toeplitz operators that are not necessarily analytic, but are merely $`\ell `$ times differentiable, and we use the $`C^{\ell }`$ norm. Our result is
###### Theorem 9
In the space of Toeplitz $`C^{\ell }`$ operators, almost every operator is Fredholm. For every integer $`k`$ with $`1\le k\le 2\ell +1`$, the points where the index can jump by $`k`$ form a set of real codimension $`k`$. For every integer $`k\ge 2\ell +1`$, the points where the index can jump by $`k`$ form a set of real codimension $`2\ell +1`$.
In other words, our familiar results hold up to codimension $`2\ell +1`$, at which point we lose all control of the change in index.
Proof: As long as $`f`$ is everywhere nonzero, $`T_f`$ is Fredholm. To get a change in index, therefore, we need one or more points where $`f`$, and possibly some derivatives of $`f`$ with respect to $`\theta `$, vanish. Suppose then that for some angle $`\theta _0`$, $`f(\theta _0)=f^{\prime }(\theta _0)=\cdots =f^{(n-1)}(\theta _0)=0`$ for some $`n\le \ell `$, but that the $`n`$-th derivative $`f^{(n)}(\theta _0)\ne 0`$. This is a codimension $`2n-1`$ event, since we are setting the real and imaginary parts of $`n`$ variables to zero, but have a 1-parameter choice of points where this can occur. Without loss of generality, we suppose that this $`n`$-th derivative is real and positive. By making a $`C^{\ell }`$-small perturbation of $`f`$, we can make the value of $`f`$ highly oscillatory near $`\theta _0`$, thereby wrapping around the origin a number of times. However, since a $`C^{\ell }`$-small perturbation does not change the $`n`$-th derivative by much, the sign of the real part of $`f`$ can change at most $`n`$ times near $`\theta _0`$, so the argument of $`f`$ can only increase or decrease by $`n\pi `$ or less. The difference between these two extremes is $`2n\pi `$, or a change in winding number of $`n`$.
To change the index by an integer $`m`$, therefore, we must have the function vanish to various orders at several points, with the sum of the orders of vanishing adding to $`m`$. The generic event is for $`f`$ (but not $`f^{\prime }`$) to vanish at $`m`$ different points – this is a codimension-$`m`$ event, analogous to having $`m`$ zeroes of a polynomial cross the unit circle simultaneously at $`m`$ different points. All other scenarios have higher codimension and are analogous to having 2 or more of the $`m`$ zeroes crossing the unit circle at the same point.
The situation is different, however, when the function $`f`$ and the first $`\ell `$ derivatives all vanish at a point $`\theta _0`$. Then the higher-order derivatives are not protected from $`C^{\ell }`$-small perturbations and, by making such a perturbation, we can change $`f`$ into a function that is identically zero on a small neighborhood of $`\theta =\theta _0`$. By making a further small perturbation, we can make $`f`$ wrap around the origin as many times as we like near $`\theta =\theta _0`$. More specifically, if $`f`$ is zero on an interval of size $`\delta `$, then, for small $`ϵ`$, $`\stackrel{~}{f}(\theta )=f(\theta )+ϵe^{iN\theta }`$ will wrap around the origin approximately $`N\delta /2\pi `$ times near $`\theta _0`$. By picking $`N`$ as large (positive or negative) as we wish, we can obtain arbitrarily positive or negative indices. As long as we take $`ϵ\ll N^{-\ell }`$, this perturbation will remain small in the $`C^{\ell }`$ norm.
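A small numerical illustration of this last mechanism (our own sketch, with an arbitrarily chosen profile for $`f`$): when $`f`$ vanishes on an interval of width $`\delta `$, the perturbation $`ϵe^{iN\theta }`$ produces roughly $`N\delta /2\pi `$ extra turns around the origin.

```python
import numpy as np

def winding_from_samples(vals):
    phase = np.unwrap(np.angle(vals))
    return (phase[-1] - phase[0]) / (2.0 * np.pi)

theta = np.linspace(-np.pi, np.pi, 100001)
delta = 1.0                                   # f vanishes on |theta| < delta/2
f = np.clip((np.abs(theta) - delta / 2) / 0.2, 0.0, 1.0).astype(complex)

for N in (10, 20, 40):
    g = f + 0.05 * np.exp(1j * N * theta)     # small perturbation on the flat spot
    # the computed winding grows roughly like N * delta / (2 pi):
    print(N, winding_from_samples(g), N * delta / (2 * np.pi))
```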
## 6 The Quantum Hall Effect
We have seen in the previous section that the Fredholm index of a generic one dimensional family of Toeplitz operators is a step function with small jumps. This is reminiscent of what one observes for the Hall conductance for random Schrödinger operators.
In this section we want to discuss some of the difficulties, and what one would still need to know, for the strategy in this paper to yield useful results for the QHE.
### 6.1 Landau levels
The Hall conductance is related to the Index of $`PUP`$ (on $`\mathrm{Range}\,P`$) with $`P`$ a spectral projection in $`L^2(\mathbb{C})`$ and $`U`$ multiplication by $`\frac{z}{|z|}`$. This operator is closely related to a Toeplitz operator in the case of a basic paradigm for the Hall effect:
###### Theorem 10
Let $`P`$ be a projection on the lowest Landau level in $`\mathbb{R}^2`$; then $`PUP`$ differs from a Toeplitz operator by a compact operator.
Proof: A basis for the lowest Landau level is
$$|n\rangle =\frac{1}{\sqrt{\pi n!}}z^ne^{-|z|^2/2},\qquad n\ge 0.$$
(14)
As a consequence
$$\langle n|U|m\rangle =\delta _{n,m+1}\frac{(m+1/2)!}{m!\sqrt{m+1}}\approx \delta _{n,m+1}\left(1-\frac{1}{8m}\right).$$
(15)
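As a quick check of the asymptotics quoted in Eq. (15) (our own sketch; recall that $`(m+1/2)!=\mathrm{\Gamma }(m+3/2)`$):

```python
from math import gamma, sqrt, factorial

# Gamma(m + 3/2) / (m! sqrt(m+1)) should approach 1 - 1/(8m) for large m.
for m in (5, 20, 100):
    exact = gamma(m + 1.5) / (factorial(m) * sqrt(m + 1))
    approx = 1.0 - 1.0 / (8.0 * m)
    print(m, round(exact, 6), round(approx, 6))
```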
The same result also holds if $`P`$ is a projection on a higher Landau level, but the calculation is more involved. If $`P`$ is a projection onto multiple Landau levels, then $`PUP`$ is a compact perturbation of a direct sum of Toeplitz operators, one for each Landau level. This suggests that the class of Toeplitz operators is indeed related to the QHE.
For (spinless) electrons/holes on the Euclidean and hyperbolic planes, with homogeneous magnetic field, and without disorder, $`Index(PUP)(E)`$ has been explicitly computed as a function of the “Fermi energy” $`E`$. In the Euclidean plane one finds a monotonic step function with jumps $`\pm 1`$ \[APn\]. (One needs both signs for electrons and holes.) The same results apply in the hyperbolic plane for all energies below the continuous spectrum \[APn\]. This implies that, also for (relatively) compact perturbations of these Hamiltonians, the Fredholm index in the QHE behaves as does the Fredholm index of Toeplitz operators. The situation is, however, quite different for Schrödinger operators with periodic potentials, where $`PUP(E)`$ fails to be Fredholm on intervals of “energy bands” and where the Fredholm index in adjacent gaps can jump by large integers \[TKNN\].
### 6.2 An open problem
For applications to the Hall effect one considers $`PUP`$ (on the range of $`P`$) where the projection $`P`$ depends on a parameter such as the Fermi energy or the external magnetic field. The family $`PUP`$ is therefore defined on different spaces, since the range of $`P`$ is not fixed. Our strategy, so far, has been to study a family of operators on a fixed Hilbert space. To adapt the QHE to this strategy one must replace $`PUP`$ by something like
$$C=PUP+1-P,$$
(16)
acting on the full Hilbert space, as $`Index(C)`$ on the full space coincides with $`Index(PUP)`$ on $`Range(P)`$. Now, a deformation of $`P`$ leads to a deformation of $`C`$ and gives a family of bounded operators on a fixed space, say, $`L^2(\mathbb{C})`$. However, this modification is not without a price since now, even for the simple case of a full Landau level, $`C`$ is not strictly a Toeplitz operator. It is a rather silly generalization of a Toeplitz operator to a direct sum of a Toeplitz operator and the identity.
A more serious problem has to do with what one should pick as a good family $`P`$. In particular, when one considers a variation of the Fermi energy $`E`$ the corresponding projection $`P(E)`$ is not continuous in the operator norm. Hence, a smooth variation of $`E`$ is not even a smooth variation of $`C`$ in the operator norm (much less in the sharper norms considered above).
Since the Fredholm index does not change under small changes in the norm of the operator, there is no harm done if one replaces the spectral projection $`P(E)`$ by the Fermi function
$$P_\beta (E)=\frac{1}{\mathrm{exp}\left[\beta (H-E)\right]+1},$$
(17)
for $`\beta `$ large. Unlike $`P(E)`$, $`P_\beta (E)`$ is a smooth function of $`E`$, and so the family $`C_\beta (E)`$ is smooth. The price one pays is that $`P_\beta (E)`$ is not a projection, which leads to ambiguities as to what one might want to choose for $`C_\beta (E)`$. For example, instead of (16) one might choose
$$C_\beta (E)=P_\beta (E)UP_\beta (E)+(1-P_\beta ^2(E)).$$
(18)
The trouble is that it is not clear what, if anything, the results about families of Toeplitz operators imply for the family $`C_\beta (E)`$.
We therefore pose the following questions:
For random Schrödinger operators on the plane, with $`\beta `$ sufficiently large, what are the properties of the family of operators $`C_\beta (E)`$? Is it Fredholm away from a discrete set of energies $`E`$, or does it fail to be Fredholm on bigger sets? If it fails to be Fredholm at isolated points, are the jumps generically small?
## Acknowledgments
This research was supported in part by the Israel Science Foundation, the Fund for Promotion of Research at the Technion, the DFG, the National Science Foundation and the Texas Advanced Research Program.
# Polariton Dispersion Law in Periodic Bragg and Near-Bragg Multiple Quantum Well Structures
## I Introduction
Optical properties of excitons confined in quasi-two-dimensional quantum well (QW) structures attract a great deal of interest (see for review Ref. ). Starting from pioneering work by Agranovich and Dubovsky , it was understood that, since the translational invariance in quasi-two-dimensional systems is broken in the direction normal to the plane of confinement, the coupling between excitons and light would lead to the radiative decay of excitons. This situation is usually described in terms of quasi-modes with complex eigen-energies. Imaginary parts of the latter characterize radiative life-times of the respective modes. Systems with multiple QW’s (MQW’s) demonstrate the presence of several quasi-modes with different radiative decay rates. For a few of those modes the radiative decay rates turn out to be larger than those for a single QW, and actually grow with the number of QW’s in the structure. Such modes are often called bright or super-radiant, while the modes with reduced radiative decay are called dark or sub-radiant. One of the theoretical and experimental methods to identify quasi-modes of MQW’s is to consider the reflection coefficient, which has complex-valued poles at the modes’ frequencies. The imaginary part of the frequency is interpreted as a half-width of the reflection resonance.
The interpretation of optical properties of MQW’s in terms of super- and sub-radiant modes gives a clear physical picture when the number of QW’s is not very large. In systems with a larger number of wells, such an interpretation may be misleading. Consider, for instance, recent experiments described in Ref. , where reflection and luminescence were studied for structures with up to 100 QW’s. These experiments used the so-called Bragg resonance structures, for which the period of the structure, $`a`$, satisfies the Bragg resonance condition, $`a=\lambda _0/2`$, for the wavelength $`\lambda _0`$ of the radiation at the first heavy-hole exciton resonance frequency $`\omega _0`$. The theory of such structures in terms of super-radiant modes was developed in a number of papers. The main result of the theory is that there exists just one “super-radiant” mode with a radiative decay rate $`N\mathrm{\Gamma }_0`$, where $`N`$ is the number of the wells in the structure, and $`\mathrm{\Gamma }_0`$ is the radiative decay rate of excitons in a single well. The reflection coefficient from such a structure is given by
$$R=\frac{(N\mathrm{\Gamma }_0)^2}{\left(\omega -\omega _0\right)^2+\left(\gamma +N\mathrm{\Gamma }_0\right)^2},$$
(1)
where $`\gamma `$ is a homogeneous exciton broadening. This expression describes a very broad reflection resonance with the maximum at the Bragg resonance frequency. Eq. (1) obviously breaks down when $`N`$ grows too large, but the interpretation of this equation in terms of the super-radiant mode becomes ambiguous even before that. In Ref. the luminescence from a MQW structure with the number of wells up to $`100`$ was found to be very small at the frequency of the super-radiant mode. This seemingly paradoxical result becomes quite obvious if one considers the spectrum of MQW’s in the superlattice limit. When the number of QW’s increases, so called sub-radiant modes lose the imaginary component of their frequencies, and form regular stationary normal modes of an infinite periodic structure. At the same time super-radiant modes become evanescent modes of the band-gaps of the structure. The reflection coefficient in band-gaps is close to one (if the homogeneous broadening is small enough), and its frequency dependence is very broad with an almost rectangular shape. No propagating excitations exist at these frequencies, so it is obvious why the luminescence detected in Ref. in this region was so weak. This rather straightforward discussion is warranted by the overuse of the terminology of super-radiance in the context of MQW’s.
The experiments of Ref. are the first where long MQW’s with Bragg or near Bragg periods are studied. As just mentioned, it is more natural to discuss these experiments in terms of stationary excitations of an infinite periodic superlattice. Even though the dispersion equation for this system in its general form has been obtained by many authors, the detailed analysis of this equation under Bragg or near Bragg conditions has not been carried out. To discuss details of the polariton dispersion in such a situation is the main objective of the present paper. The results of this discussion will be useful in better understanding the results of Ref. and similar experiments.
## II The structure of the spectrum and polariton dispersion laws for a periodic Bragg superlattice
A general expression for the polariton dispersion law in a periodic QW superlattice was derived many times by different authors. For a wave propagating in the direction of growth it has the following form:
$$\mathrm{cos}\left(Qa\right)=\mathrm{cos}\left(\frac{\omega }{c}a\right)-\frac{2\mathrm{\Gamma }_0\omega }{\omega _0^2-\omega ^2-2i\gamma \omega }\mathrm{sin}\left(\frac{\omega }{c}a\right),$$
(2)
where $`Q`$ is the Bloch vector of the polariton and $`c`$ is the speed of light in a background material. Generalization for an oblique direction is straightforward: $`\omega /c`$ is replaced with $`k_z=\sqrt{(\omega /c)^2-k_{\parallel }^2}`$, where $`k_{\parallel }`$ is an in-plane component of the wave vector. For short-period superlattices, $`a\omega /c\ll 1`$, this equation is reduced to the standard polariton dispersion in a dispersionless material. In the absence of the homogeneous broadening, there is a polariton gap between $`\omega _0`$ and $`\sqrt{\omega _0^2+4\mathrm{\Gamma }_0c/a}`$. In general, band-gaps in the polariton spectrum are determined by inequalities:
$`{\displaystyle \frac{2\mathrm{\Gamma }_0\omega }{\omega _0^2-\omega ^2}}\mathrm{cot}\left({\displaystyle \frac{\omega }{2c}}a\right)`$ $`<`$ $`-1,`$ (3)
$`{\displaystyle \frac{2\mathrm{\Gamma }_0\omega }{\omega _0^2-\omega ^2}}\mathrm{tan}\left({\displaystyle \frac{\omega }{2c}}a\right)`$ $`>`$ $`1,`$ (4)
where the polariton wave vector $`Q`$ is 0 at the end of the interval determined by the first of these inequalities, and $`Q=\pi /a`$ at the ending point of the second one. For frequencies close to $`\omega _0`$ these inequalities are often solved approximately in the so-called resonance approximation, where the frequency is taken equal to $`\omega _0`$ everywhere except for the exciton resonance denominator. This approximation fails, however, for Bragg structures satisfying the condition
$$\frac{a\omega _0}{c}=\pi $$
(5)
because the last term in Eq. (2), describing the interaction between QW excitons and light, vanishes at the exciton resonance frequency, $`\omega _0`$. In the absence of homogeneous broadening $`\gamma `$, the denominator in this term also vanishes, and, therefore, this case requires careful, albeit elementary, analysis.
Inequalities (3) and (4) in this case can be rewritten as
$`{\displaystyle \frac{2\mathrm{\Gamma }_0\omega }{\omega ^2-\omega _0^2}}\mathrm{tan}\left({\displaystyle \frac{\omega -\omega _0}{2c}}a\right)`$ $`<`$ $`-1,`$ (6)
$`{\displaystyle \frac{2\mathrm{\Gamma }_0\omega }{\omega ^2-\omega _0^2}}\mathrm{cot}\left({\displaystyle \frac{\omega -\omega _0}{2c}}a\right)`$ $`>`$ $`1.`$ (7)
One can notice now that the first of these inequalities is never satisfied for frequencies close to $`\omega _0`$ as long as $`\mathrm{\Gamma }_0\ll \omega _0`$. The boundaries of the band-gap are determined entirely by Eq. (7), which means that at both ends of the gap the polariton wave vector is $`Q=\pi /a`$. From Eq. (7) we find that the polariton band-gap is determined by the inequalities
$$\omega _0-\sqrt{\frac{2\omega _0\mathrm{\Gamma }_0}{\pi }}<\omega <\omega _0+\sqrt{\frac{2\omega _0\mathrm{\Gamma }_0}{\pi }},$$
(8)
provided that the inequality $`\sqrt{\mathrm{\Gamma }_0/\omega _0}\ll 1`$ holds, which is usually true in real systems ($`\sqrt{\mathrm{\Gamma }_0/\omega _0}\sim 10^{-2}`$ in the experiment of Ref.).
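One can verify Eq. (8) directly from Eq. (2). The following minimal numerical sketch (ours, with $`\hbar =c=1`$ and the parameter values quoted in Sec. III) scans the right-hand side of Eq. (2) at $`\gamma =0`$ and flags the frequencies where $`|\mathrm{cos}(Qa)|`$ would have to exceed unity:

```python
import numpy as np

w0, G0 = 1.491, 27e-6                       # omega_0 [eV] and Gamma_0 [eV]
a = np.pi / w0                              # exact Bragg period: a*omega_0/c = pi
eps = np.linspace(-10e-3, 10e-3, 200000)    # grid chosen to avoid eps = 0 exactly
w = w0 + eps
rhs = np.cos(w * a) - (2 * G0 * w / (w0**2 - w**2)) * np.sin(w * a)
gap = np.abs(rhs) > 1.0                     # no real Bloch vector Q exists there
print("gap edges [meV]:", 1e3 * eps[gap].min(), 1e3 * eps[gap].max())
print("Eq. (8) half-width [meV]:", 1e3 * np.sqrt(2 * w0 * G0 / np.pi))
```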
In the presence of homogeneous broadening the band-gap is not clearly defined, but it is remarkable that if $`\gamma \ne 0`$ the solution of Eq. (2) at $`\omega =\omega _0`$ is real, $`Q=\pi /a`$, while, as Eq. (8) shows, this solution acquires an imaginary part when $`\gamma =0`$. In order to get a better understanding of the situation we have solved dispersion equation (2) in the presence of the homogeneous broadening for the frequencies satisfying Eq. (8). We found that the real part of the polariton’s wave vector $`Q^{\prime }`$ and its imaginary part $`Q^{\prime \prime }`$ have the following form:
$$|Q^{\prime }-\pi /a|=Q^{\prime \prime }=\frac{1}{a}\sqrt{\frac{\pi \mathrm{\Gamma }_0|ϵ|}{\gamma \omega _0}},$$
(9)
for $`|ϵ|\ll \gamma `$, where $`ϵ=\omega -\omega _0`$. Eq. (9) shows that for small $`|ϵ|`$, the imaginary part of the polariton wave vector indeed becomes zero along with $`ϵ`$, while farther away from the resonance frequency $`\omega _0`$, $`|ϵ|\gg \gamma `$,
$$Q^{\prime \prime }=\frac{1}{a}\sqrt{\frac{2\pi \mathrm{\Gamma }_0}{\omega _0}},$$
which is the expression one would obtain at $`\omega =\omega _0`$ in the absence of the homogeneous broadening.
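Both limiting forms can be checked by inverting the cosine in Eq. (2) in the complex plane; a minimal numerical sketch (ours; the value of $`\gamma `$ is an assumed illustration value):

```python
import numpy as np

w0, G0, gam = 1.491, 27e-6, 1e-4      # eV; gamma = 0.1 meV is an assumed value
a = np.pi / w0
for eps in (1e-6, 3e-6, 1e-5):        # regime |eps| << gamma
    w = w0 + eps
    rhs = np.cos(w * a) - (2 * G0 * w / (w0**2 - w**2 - 2j * gam * w)) * np.sin(w * a)
    Qa = np.arccos(rhs)               # complex Bloch phase Q*a
    # |Q'a - pi| and Q''a should both match the Eq. (9) prediction:
    print(eps, abs(Qa.real - np.pi), abs(Qa.imag),
          np.sqrt(np.pi * G0 * eps / (gam * w0)))
```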
Eq. (9) suggests a simple explanation of the results of luminescence experiments carried out in Ref. with the exact Bragg structures. In this work, a peak of the luminescence at the resonance frequency, $`\omega _0`$, right in the middle of the polariton gap was observed. One can relate this peak to the vanishing of the imaginary part of the polariton wave number $`Q`$. The width of the peak is determined by the homogeneous broadening parameter, $`\gamma `$. This observation can be used in order to validate the suggested explanation.
## III Near-Bragg MQW structures
One of the important experimental results of Ref. is the observation of changes in the luminescence pattern with the change in the period of the MQW structure. In this section we examine how the spectrum of the MQW’s evolves when the structure is tuned away from the exact Bragg resonance. We solve inequalities (3) and (4) approximately for the frequency region $`|\omega -\omega _B|a/c\ll 1`$, where $`\omega _B`$ is the Bragg frequency defined as $`\omega _B=\pi c/a`$. In this approximation one finds that when the system is tuned away from the exact Bragg condition, $`\omega _0=\omega _B`$, the band-gap given by Eq. (8) divides into two gaps. If $`\omega _0>\omega _B`$ one has for the two gaps:
$`\omega _2`$ $`<\omega `$ $`<\omega _B,`$ (10)
$`\omega _0-{\displaystyle \frac{1}{2}}\pi \mathrm{\Gamma }_0{\displaystyle \frac{\omega _0-\omega _B}{\omega _B}}`$ $`<\omega <`$ $`\omega _1,`$ (11)
where
$`\omega _1`$ $`=`$ $`{\displaystyle \frac{\omega _0+\omega _B}{2}}+{\displaystyle \frac{1}{2}}\sqrt{(\omega _0-\omega _B)^2+{\displaystyle \frac{16\mathrm{\Gamma }_0\omega _B^2}{\pi (\omega _B+\omega _0)}}},`$ (12)
$`\omega _2`$ $`=`$ $`{\displaystyle \frac{\omega _0+\omega _B}{2}}-{\displaystyle \frac{1}{2}}\sqrt{(\omega _0-\omega _B)^2+{\displaystyle \frac{8\mathrm{\Gamma }_0\omega _B}{\pi }}}.`$ (13)
In the case of the detuning of the opposite sign, $`\omega _0<\omega _B`$, the band-gaps are determined by
$`\omega _2`$ $`<\omega `$ $`<\omega _0+{\displaystyle \frac{1}{2}}\pi \mathrm{\Gamma }_0{\displaystyle \frac{\omega _B-\omega _0}{\omega _B}},`$ (14)
$`\omega _B`$ $`<\omega <`$ $`\omega _1.`$ (15)
Using data from Ref. ($`\omega _0=1.491`$ $`eV`$, $`\mathrm{\Gamma }_0=27`$ $`\mu eV`$), we can estimate positions of the gap boundaries for the system used in those experiments. The estimates are consistent with the positions of the luminescent peaks observed in Ref. for different degrees of detuning. The general dispersion equation (2) can give the values of the wave numbers $`Q`$ corresponding to the modes excited in those experiments. We believe, however, that it is useful to have approximate “long-wave” dispersion laws for those modes. For concreteness, we consider the case $`\omega _0<\omega _B`$. In this case, the excitations of interest belong to the branches with frequencies greater than $`\omega _0+\frac{1}{2}\pi \mathrm{\Gamma }_0\frac{\omega _B-\omega _0}{\omega _B}\approx \omega _0`$ and less than $`\omega _2`$. The first of these branches approaches the band edge with $`Q=0`$, and the second one with $`Q=\pi /a`$. The near-the-edge dispersion laws for these branches can be obtained in the form:
$$\omega =\omega _0+\frac{1}{2}\pi \mathrm{\Gamma }_0\frac{\omega _B-\omega _0}{\omega _B}+\pi \mathrm{\Gamma }_0\frac{\omega _B-\omega _0}{8\omega _B}Q^2a^2,$$
(16)
for the branch near $`\omega _0`$, and
$$\omega =\omega _2-\frac{(\omega _0-\omega _2)^3}{4\mathrm{\Gamma }_0^2}\left(Qa-\pi \right)^2,$$
(17)
for the branch near $`\omega _2`$. One can see from these expressions that the effective masses of these two branches are significantly different. The one described by Eq. (16) has a very small effective mass, and therefore the frequencies of this mode could only barely be distinguished from the resonance frequency $`\omega _0`$. The second branch, described by Eq. (17), has much stronger dispersion, and, therefore, it must be separated from $`\omega _0`$ more strongly than by the width of the gap between $`\omega _0`$ and $`\omega _2`$. Indeed, using the numerical parameters of Ref. , we find that the width of the gap for the detuning $`\omega _0=0.98\omega _B`$ is approximately equal to $`1`$ $`meV`$, while experimentally observed splitting between the modes is $`3.2`$ $`meV`$. This corresponds to the mode excited with a wave number $`Q`$ such that $`|Qa\pi |0.1`$. The effective mass of this mode at the band edge under consideration, according to Eq. (17), increases with an increase of detuning from the Bragg structure. This predictions can also be tested experimentally in order to check if the simple picture suggested in the present paper corresponds to the phenomenon observed in Ref. .
Concluding, we analyzed the dispersion law of polaritons in periodic MQW structures at, or close to, the Bragg resonance condition, $`\omega _0a/c=\pi `$, and established the pattern of band-gaps and allowed bands arising in such structures. We also obtained analytical expressions for effective masses of polariton modes presumably observed in Ref. . The theoretical results obtained were found to agree with experimental data. We also suggested some new experiments that can be used to further test the adequacy of the presented results.
We wish to thank S. Schwarz for reading and commenting on the manuscript. Work at Seton Hall University was supported by NATO Linkage Grant No 974573, work at Queens College was supported by PSC-CUNY research award.
# A formula with volumes of five tetrahedra and discrete curvature
## Remarks
1. Similar considerations become much more complicated already in the four-dimensional case, where we will have 6 vertices, 6 tetrahedra, 15 edges, and 20 two-dimensional faces where the “discrete curvature” can be concentrated.
2. It may seem that the whole manifold where everything happens must be “flat” due to the delta function $`\delta (\omega )`$. However, it is not so, and not only because of the mentioned possibility of generalization to the spherical geometry. Imagine, for instance, a ramified covering of some flat manifold, where the “full angle” corresponding to going around a small contour surrounding a “ramification contour” can equal any multiple of $`2\pi `$.
# Neutrino and axion emissivities of neutron stars from nucleon-nucleon scattering data
## Abstract
Neutrino and axion production in neutron stars occurs mainly as bremsstrahlung from nucleon-nucleon ($`NN`$) scattering. The energy radiated via neutrinos or axions is typically very small compared to other scales in the two-nucleon system. The rate of emission of such “soft” radiation is directly related to the on-shell $`NN`$ amplitude, and thereby to the $`NN`$ experimental data. This facilitates the model-independent calculation of the neutrino and axion radiation rates which is presented here. We find that the resultant rates are roughly a factor of four below earlier estimates based on a one-pion-exchange $`NN`$ amplitude.
Neutron stars are believed to be born during a supernova explosion with an interior temperature $`T`$ of order 60 MeV. The subsequent evolution of the hot and dense compact star is characterized by a rapid early cooling phase followed by a significantly slower, late-time cooling phase . During both of these phases neutrinos are an important source of energy loss. Thermal evolution of neutron stars is largely driven by neutrino bremsstrahlung reactions such as
$$NN\to NN\nu \overline{\nu },\qquad nn\to npe^{-}\overline{\nu _e}.$$
(1)
In the first part of the nascent neutron star’s life it evolves by diffusion of trapped neutrinos. This cools the interior to $`T\sim 1`$ MeV on a time scale of a few seconds . The resultant intense neutrino emission is thought to play a central role in both the supernova mechanism and r-process nucleosynthesis. The high neutrino luminosity is fueled by the reactions (1), which compete with the annihilation $`e^++e^{-}\to \nu \overline{\nu }`$ in degenerate matter. At later times, the neutron star enters a period of slower thermal evolution, during which the emitted neutrinos free stream. This occurs because the neutrino mean-free path becomes long when $`T\lesssim 1`$ MeV. The time scale for this long-term cooling of the dense, degenerate, neutron-rich, inner core thus depends crucially on the neutrino emissivity, which is, again, dominated by the reactions (1). Observational constraints on this late-time portion of the neutron star’s evolution will improve as X-ray observatories such as Einstein, EXOSAT and ROSAT gather pulsar data which gives information on the surface temperatures of these stars. The challenge for theorists is to improve the models of both the early- and late-time cooling of the neutron star.
The emissivities which are key ingredients in these simulations are dominated by the reactions (1). Despite their central role in neutron-star dynamics these reactions have received relatively little attention. Pioneering work was done by Friman and Maxwell who computed the reaction rates using a nucleon-nucleon amplitude due only to a single pion exchange (henceforth we refer to such calculations as the “OPE approximation”). Recently, many-body effects, in particular the suppression due to multiple scattering (the Landau-Pomeranchuk-Migdal, or LPM, effect ), have been shown to be important at temperatures $`T\gtrsim 5`$ MeV . However, the work of Ref. is still the state-of-the-art treatment of the $`NN`$ interactions which occur during the reactions (1) (see also Ref. ). In this letter we follow an approach reminiscent of that used in soft-photon calculations , and relate the rate of production of soft-neutrino radiation in $`NN`$ scattering to the on-shell $`NN`$ scattering amplitude. This yields a calculation we present as a “benchmark”, which accounts for the two-nucleon dynamics in a model-independent way. We make no attempt to account for many-body effects, although they are undoubtedly important in the star. We identify the density and temperature range over which our results are valid and show that, although limited, it is of interest to both supernova and neutron star physics. In contrast, the full evaluation of the rates for the reactions (1) in a strongly-coupled medium is a complicated problem. Therefore, of necessity, most solutions to it will be model-dependent. Thus, we see our results as providing a model-independent foundation on which future work that assesses the role of many-body effects can build.
Our main focus in this work will be the neutrino emissivity from $`NN\to NN\nu \overline{\nu }`$. However, axions (if they exist) couple, like neutrinos, to the nucleon spin. Therefore, in computing the neutrino emissivity one obtains the axion emissivity from $`NN\to NNa`$ as a welcome by-product . This is useful because one important constraint on the axion mass comes from considering the role of axionstrahlung in the dynamics of SN1987A . Indeed, if the rate for the reaction $`NN\to NNa`$ is too high then the supernova dynamics is completely changed, and the successful “standard” picture of the supernova is destroyed. Thus, one can constrain the axion coupling, and hence the axion mass, by demanding that axion radiation did not make too large a contribution to the energy loss from SN1987A.
Neutrino and axion emissivities: We begin by explicitly calculating the emissivity due to $`NN\to NN\nu \overline{\nu }`$. The $`\nu \overline{\nu }`$ coupling to non-relativistic baryons at low energies is given by the Lagrange density
$$\mathcal{L}_W=\frac{G_F}{2\sqrt{2}}l^\mu N^{\dagger }\left(c_v\delta _{\mu ,0}-c_a\delta _{\mu ,i}\sigma _i\right)N,$$
(2)
where $`l^\mu =\overline{\nu }\gamma ^\mu (1-\gamma ^5)\nu `$ is the leptonic current, $`G_F=1.166\times 10^{-5}`$ GeV<sup>-2</sup>, $`N`$ is the nucleon field, and $`c_v`$ and $`c_a`$ are the nucleon neutral-current vector and axial-vector coupling constants. Some Feynman diagrams for the bremsstrahlung process are shown in Fig. 1.
The incoming (outgoing) nucleon momenta are labeled $`𝐩_\mathrm{𝟏},𝐩_\mathrm{𝟐}`$ ($`𝐩_\mathrm{𝟑},𝐩_\mathrm{𝟒}`$). The dashed line represents radiation—a neutrino-anti-neutrino pair in this case—which carries energy $`\omega `$ and momentum $`𝐪`$. In general we are interested in cases where the radiated energy is small compared to the incoming nucleon energy. In the limit $`\omega \to 0`$ the amplitudes corresponding to diagrams (a) and (b) in Fig. 1 are dominant, as they contain pieces proportional to $`1/\omega `$. On the other hand, the contributions from the re-scattering diagram Fig. 1(c), and from meson-exchange currents such as Fig. 1(d), remain finite in the $`\omega \to 0`$ limit. Thus, for the reaction $`nn\to nn\nu \overline{\nu }`$ the matrix element can be written as
$$\mathcal{M}=2\frac{G_F}{2\sqrt{2}}\frac{1}{\omega }l^\mu \langle 𝐩^{\prime }|[𝐓_{NN},\mathrm{\Gamma }_\mu ]|𝐩\rangle +O(\omega ^0),$$
(3)
where $`𝐩`$ ($`𝐩^{\prime }`$) is the initial (final) relative momentum of the two-nucleon system. We refer to results which retain only this leading term, of $`O(\omega ^{-1})`$, in $`\mathcal{M}`$ as “true in the soft-neutrino approximation (SNA)”. In general the $`NN`$ T-matrix appearing in Eq. (3), $`𝐓_{NN}`$, will be half off-shell.<sup>*</sup> (As used here, it should involve a sum over the allowed partial waves of the $`NN`$ system. This, together with the factor of two in front of the matrix element, accounts for the exchange graphs which must be included in $`\mathcal{M}`$.) But, in the SNA we can take $`𝐓_{NN}`$ to be the on-shell $`NN`$ amplitude. We can also neglect the difference between the magnitude of the initial and final-state relative momenta. We expect these approximations to break down when $`\omega \sim m_\pi `$, since $`m_\pi `$ sets the scale for variations of T<sub>NN</sub> in the off-shell direction.<sup>†</sup> (At very low relative momenta the scale of breakdown is set by the $`NN`$ scattering length, since that gives the variation in the on-shell direction. However, $`a_{NN}`$ does not really play a role here, since typical nucleon momenta in neutron stars are at least 100 MeV.) So, in the SNA, the $`NN`$ interaction is described by the on-shell T-matrix $`𝐓_{NN}`$, evaluated at a center-of-mass energy which, for reasons of symmetry, is chosen to be $`(p^2+p^{\prime 2})/(2M)`$ ($`M`$ is the nucleon mass). This T-matrix can be constructed from phase shifts deduced from $`NN`$ scattering data . Note that the OPE approximation used in most previous calculations involves substituting $`V_{OPE}`$, the one-pion-exchange potential, for $`𝐓_{NN}`$ in Eq. (3). Meanwhile, $`\mathrm{\Gamma }_\mu `$ is the vertex which couples the radiation to the nucleons. For $`\nu \overline{\nu }`$ radiation $`\mathrm{\Gamma }_\mu `$ follows straight from Eq. (2). Only its three-vector part contributes to $`\mathcal{M}`$ at $`O(\omega ^{-1})`$. Equation (3) then gives us a model-independent result for $`\mathcal{M}`$, which is correct in the SNA.
If only two-body collisions are taken into account then the neutrino emissivity from a neutron gas is given by Fermi’s golden rule
$`ϵ_{\nu \overline{\nu }}={\displaystyle \int \frac{d^3q_1}{(2\pi )^32\omega _1}\frac{d^3q_2}{(2\pi )^32\omega _2}(2\pi )^4\delta (E_{in}-E_{fn})}`$
$`\times \omega \delta ^3(𝐩_{in}-𝐩_{fn}){\displaystyle \int \left[\prod _{i=1\mathrm{..}4}\frac{d^3p_i}{(2\pi )^3}\right]\mathcal{F}\frac{1}{s}\sum _{\mathrm{spin}}|\mathcal{M}|^2},`$ (5)
where $`\mathcal{F}=f_1f_2(1-f_3)(1-f_4)`$, with $`f_i=1/(1+\mathrm{exp}[(E_i-\mu _i)/T])`$ being the Fermi-Dirac distribution function for the nucleons, and $`s=4`$ the symmetry factor accounting for identical nucleons.
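For orientation, a minimal implementation of this statistical weight (our own sketch, not the phase-space code used for the results below):

```python
import numpy as np

def fermi(E, mu, T):
    """Fermi-Dirac occupation f = 1 / (1 + exp((E - mu)/T))."""
    return 1.0 / (1.0 + np.exp((E - mu) / T))

def pauli_weight(E, mu, T):
    """F = f1 f2 (1 - f3)(1 - f4): initial-state occupations times
    final-state Pauli blocking, for energies E = (E1, E2, E3, E4)."""
    f = [fermi(Ei, mu, T) for Ei in E]
    return f[0] * f[1] * (1.0 - f[2]) * (1.0 - f[3])

# e.g. degenerate neutrons near the Fermi surface, mu = 50 MeV, T = 1 MeV:
print(pauli_weight((49.0, 50.0, 49.5, 49.5), 50.0, 1.0))
```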
$$\underset{\mathrm{spin}}{}||^2=\frac{G_F^2c_a^2}{8}\mathrm{Tr}(l^il^j)_{i,j}.$$
(6)
The trace over the lepton tensor is easily evaluated. Further, since we are interested in soft radiation, we may safely ignore $`\stackrel{}{q}`$ in the momentum delta function . This allows us to directly integrate the leptonic trace over neutrino angles to obtain
$$\int d\mathrm{\Omega }_1d\mathrm{\Omega }_2\mathrm{Tr}(l_il_j^{\dagger })=8(4\pi )^2\omega _1\omega _2\delta _{i,j}.$$
(7)
Therefore, only the trace of the hadronic tensor $`\mathcal{H}_{ij}`$ contributes to the emissivity, and so we define a scalar function,
$`S_\sigma (\omega )`$ $`=`$ $`{\displaystyle \int \left[\prod _{i=1\mathrm{..}4}\frac{d^3p_i}{(2\pi )^3}\right](2\pi )^4\delta ^3(𝐩_\mathrm{𝟏}+𝐩_\mathrm{𝟐}-𝐩_\mathrm{𝟑}-𝐩_\mathrm{𝟒})}`$ (9)
$`\times \delta (E_1+E_2-E_3-E_4-\omega )\mathcal{F}{\displaystyle \frac{1}{s}}\mathcal{H}_{ii},`$
which is called the dynamical spin structure function of the medium. It is related to the $`\nu \overline{\nu }`$ emissivity via:
$$ϵ_{\nu \overline{\nu }}=\frac{G_F^2c_a^2}{16\pi ^4}\frac{1}{30}\int d\omega \,\omega ^6S_\sigma (\omega ),$$
(10)
where $`\omega `$ is the total energy of the emitted $`\nu \overline{\nu }`$ pair.
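The factor of $`1/30`$ in Eq. (10) comes from the leptonic phase space: with $`\omega =\omega _1+\omega _2`$ one has $`\int _0^\omega d\omega _1\,\omega _1^2(\omega -\omega _1)^2=\omega ^5/30`$, which can be checked in one line (our own verification):

```python
import sympy as sp

w, w1 = sp.symbols('omega omega_1', positive=True)
print(sp.integrate(w1**2 * (w - w1)**2, (w1, 0, w)))   # -> omega**5/30
```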
In the two-body approximation considered here we evaluate $`\mathcal{H}_{ii}`$ using Eqs. (3) and (6). For the case of emission from the $`nn`$ system, only the spin-triplet two-nucleon state contributes, and the trace is:
$$\mathcal{H}_{ii}=16\frac{1}{\omega ^2}\sum _{M_s,M_s^{\prime }}\left|\langle 1M_s^{\prime },𝐩^{\prime }|[S_i,𝐓_{NN}]|𝐩,1M_s\rangle \right|^2,$$
(11)
where $`S_i`$ is the total spin of the two-nucleon system. It is straightforward to generalize this formula for $`_{ii}`$ to the $`np`$ case, although the $`NN`$ spin singlet then contributes. (The $`np`$ case, and the failure of the OPE approximation there, is discussed in Ref. .) From Eqs. (11) and (9) we can calculate $`S_\sigma `$, and thus the $`\nu \overline{\nu }`$ emissivity.
The emission of any radiation which couples to the nucleon spin will be described by the same function $`S_\sigma `$. Thus, as mentioned above, with $`S_\sigma `$ in hand we may derive the axion emissivity $`ϵ_a`$. The effective theory for axion-nucleon interactions is described by the Lagrange density $`\mathcal{L}_{\mathrm{ann}}=g_{\mathrm{ann}}a\overline{N}\gamma ^5N`$, where $`a`$ is the axion field, and $`g_{\mathrm{ann}}=10^{-8}(m_a/1\mathrm{eV})`$ is the effective axion coupling ($`m_a`$ is the axion mass) . The calculation of the axion emissivity in this effective theory is analogous to the above calculation of the neutrino emissivity, and yields
$$ϵ_a=\frac{g_{\mathrm{ann}}^2}{16\pi ^2M^2}\frac{1}{3}\int d\omega \,\omega ^4S_\sigma (\omega ).$$
(12)
Before proceeding to our results we note that $`S_\sigma `$ can be defined in a much more general way, where it describes the response of a many-body system to an external spin-dependent perturbation. Equations (10) and (12) remain true if this definition is adopted. To obtain Eq. (9) for $`S_\sigma `$ in this general case one takes the long-wavelength limit of the leading term in the density expansion.
Results & Discussion: We present results for the dynamic spin structure function $`S_\sigma (\omega )`$, since it includes the density, temperature, and nuclear dynamics dependence of the neutrino and axion emissivities. During the evolution of neutron stars, one encounters varying degrees of nucleon degeneracy, with $`\mu _n/T\sim 1`$ at birth, but $`\mu _n/T\sim 10`$ at late times. Earlier investigations have shown that analytic approximations to the phase-space integrals in Eq. (9) work poorly at intermediate degeneracy . Therefore, in this work these integrals are all performed numerically. In order to investigate the effect of our model-independent treatment of the $`NN`$ interaction we plot the ratio $`R_\sigma (\omega )\equiv S_\sigma ^{SNA}(\omega )/S_\sigma ^{\mathrm{ref}}(\omega )`$, where $`S_\sigma ^{SNA}(\omega )`$ is calculated as described above. The denominator, $`S_\sigma ^{\mathrm{ref}}(\omega )`$, is the dynamic spin structure function found when a hadronic tensor trace of the form $`\mathcal{H}_{ii}=c/\omega ^2`$ is inserted into Eq. (9). We adjust the constant $`c`$ so that when $`S_\sigma ^{\mathrm{ref}}`$ is employed in Eq. (10) the neutrino emissivity thereby obtained is equal to that found if the full OPE matrix element is used in evaluating $`\mathcal{H}_{ii}`$.<sup>‡</sup> (We could have followed Refs. and adopted a reference $`S_\sigma (\omega )`$ in which the full OPE-approximation matrix element is replaced by its value in the $`m_\pi \to 0`$ limit. However, this is a poor approximation to the actual result for one-pion exchange, since it over-estimates the OPE-approximation emissivities by as much as a factor of two (see also Refs. ).)
Figure 2 shows the resulting ratio $`R_\sigma (\omega )`$ for neutron matter at a range of temperatures and a baryon density equal to the nuclear saturation density. The results are plotted as a function of the dimensionless ratio $`\omega /T`$ (note $`S_\sigma (\omega )`$ has significant strength only for $`\omega /T\lesssim 15`$).
The most striking feature of the results is that the one-pion-exchange approximation significantly overestimates the rate for neutrino (or axion) production. The large reduction in the response functions over those obtained in the OPE approximation occurs for two reasons. Firstly, one-pion exchange over-estimates the strength of the $`NN`$ tensor force, and so even replacing the $`V_{OPE}`$ employed previously with the full $`V_{NN}`$ would lead to a reduction in $`S_\sigma `$. Secondly, the unitarity of our $`NN`$ T-matrix leads to an $`NN`$ amplitude which is, in general, significantly smaller than that found in the OPE approximation, and hence to a much-reduced $`S_\sigma `$. That the OPE approximation does such a poor job in describing the $`NN`$ dynamics should come as no surprise. At the energies of interest here the $`NN`$ interaction is intrinsically non-perturbative, and so replacing the full $`NN`$ T-matrix by $`V_{OPE}`$ gives only a crude estimate of these rates.
Fig. 2 may be used to infer where the SNA breaks down. Recall that $`S^{\mathrm{ref}}`$ is constructed using a hadronic tensor $`\mathcal{H}_{ij}`$ proportional to $`1/\omega ^2`$. In fact this part of the hadronic response is the only piece of the $`\mathcal{H}_{ij}`$ calculated in the SNA that can be trusted. Therefore, any $`\omega `$-dependence of $`R_\sigma (\omega )`$ represents an effect in $`S_\sigma ^{SNA}(\omega )`$ coming from physics beyond the SNA. Viewing Fig. 2 in this light, and considering the constraint $`\omega \lesssim m_\pi `$, implies that at $`\rho =\rho _0`$ the SNA works well for $`T\lesssim 10`$ MeV.
The suppression of the axial response seen in Fig. 2 translates into a corresponding diminution of axion and neutrino emissivities. Let us define the ratio $`R_{ϵ_\nu }\equiv ϵ_{\nu \overline{\nu }}^{SNA}/ϵ_{\nu \overline{\nu }}^{OPE}`$. Here, $`ϵ_{\nu \overline{\nu }}^{OPE}`$ is the emissivity found in the OPE approximation, as calculated in Ref. . (A similar ratio of axion emissivities is approximately equal to $`R_{ϵ_\nu }`$ in the domain of validity of the SNA.) The ratio $`R_{ϵ_\nu }`$ is displayed in the table below for a range of densities and at temperatures of 1 and 10 MeV. In fact, the temperature and density dependence of $`ϵ_{\nu \overline{\nu }}^{SNA}`$ is predominantly determined by that of the nucleon equilibrium distribution functions appearing in Eq. (5), but the ratio $`R_{ϵ_\nu }`$ changes significantly over the densities and temperatures considered because $`ϵ_{\nu \overline{\nu }}^{OPE}`$ has much more variation with $`n_B`$ and $`T`$. We see that, for the range of conditions considered here, the SNA gives a rate of emission of soft axial radiation which is roughly a factor of four smaller than that given by the OPE approximation.
| $`n_B`$ (fm<sup>-3</sup>) | $`R_{ϵ_\nu }^{nn}`$ (1 MeV) | $`R_{ϵ_\nu }^{nn}`$ (10 MeV) |
| --- | --- | --- |
| 0.08 | 0.29 | 0.27 |
| 0.16 | 0.24 | 0.23 |
| 0.48 | 0.16 | 0.16 |
Disclaimers: As mentioned previously, our calculation makes no attempt to include many-body effects. These will certainly be important in some regimes of temperature and density. For instance, it is claimed that in-medium modifications of the pion, attributed to the many-body nature of the problem, strongly affect the emission rates . Such effects are outside the scope of this work. However, as stated earlier, we expect the LPM effect to strongly reduce the response function . In particular, it will suppress the emission of radiation with $`\omega \lesssim \gamma `$, where $`\gamma `$ is the nucleon quasi-particle width at the Fermi surface. This LPM-effect limit on the validity of our calculation is indicated in Fig. 2, with the value of $`\gamma `$ taken from Ref. . Figure 2 suggests that LPM-suppression will affect the emissivity if $`T>10`$ MeV. Note that the LPM effect and the use of the SNA both significantly reduce the rate of emission of soft axial radiation.
Several microphysical ingredients play a role in the thermal evolution of neutron stars, and so it is difficult to state precisely how the results obtained in this work will affect observable aspects of neutron star evolution. However, our results imply that $`NN`$ bremsstrahlung is less important during the star’s infancy than previously thought. In addition, the reduction of the axion emissivity we have found will, presumably, weaken the axion-mass bound obtained from SN1987A. However, these comments are subject to the caveat that through large regions of the infant neutron star the temperature is high enough to invalidate the SNA.
On the other hand, for late-time cooling the temperature is small, and our results are applicable everywhere in the star. However, in this regime the modified URCA reaction, $`nn\to npe^{-}\overline{\nu _e}`$, is significantly more efficient than the pair process considered here . In degenerate matter this charged-current reaction does not produce soft radiation, since the typical change in the energy of the $`NN`$ system is of order the electron chemical potential, i.e. about 100 MeV. Nevertheless, the results presented here suggest that large corrections to the modified URCA rate calculated in the OPE approximation will occur when a better model of the $`NN`$ amplitude is used.
Conclusion: Finally, we reiterate that none of these disclaimers modify the two central conclusions of this paper: that the soft-neutrino approximation gives a model-independent result for the emissivity due to the reactions $`NN\to NN\nu \overline{\nu }`$ and $`NN\to NNa`$; and that these emissivities are much smaller than those found when one-pion exchange is used as the $`NN`$ amplitude.
Acknowledgements: We are grateful to J.-W. Chen and M. J. Savage for useful discussions. We thank the U. S. Department of Energy for its support under contracts DOE/DE-FG03-97ER4014 and DOE/DE-FG06-90ER40561. C. H. acknowledges the support of the Alexander von Humboldt foundation.
# DESY 00-053 ISSN 0418-9833 MPI/PhT/2000-13 hep-ph/0003297 March 2000 Strong Coupling Constant from Scaling Violations in Fragmentation Functions
## Abstract
We present a new determination of the strong coupling constant $`\alpha _s`$ through the scaling violations in the fragmentation functions for charged pions, charged kaons, and protons. In our fit we include the latest $`e^+e^{-}`$ annihilation data from CERN LEP1 and SLAC SLC on the $`Z`$-boson resonance and older, yet very precise data from SLAC PEP at center-of-mass energy $`\sqrt{s}=29`$ GeV. A new world average of $`\alpha _s`$ is given.
PACS numbers: 13.65.+i, 13.85.Ni, 13.87.Fh
The strong force acting between hadrons is one of the four fundamental forces of nature. It is now commonly believed that the strong interactions are correctly described by quantum chromodynamics (QCD), the SU(3) gauge field theory which contains colored quarks and gluons as elementary particles. The strong coupling constant $`\alpha _s^{(n_f)}(\mu )=g_s^2/(4\pi )`$, where $`g_s`$ is the QCD gauge coupling, is a basic parameter of the standard model of elementary particle physics; its value $`\alpha _s^{(5)}(M_Z)`$ at the $`Z`$-boson mass scale is listed among the constants of nature in the Review of Particle Physics . Here, $`\mu `$ is the renormalization scale, and $`n_f`$ is the number of active quark flavors, with mass $`m_q<\mu `$. The formulation of $`\alpha _s^{(n_f)}(\mu )`$ in the modified minimal-subtraction ($`\overline{\mathrm{MS}}`$) scheme, with four-loop evolution and three-loop matching at the flavor thresholds, is described in Ref. .
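For orientation only, the leading one-loop behavior of this running can be sketched in a few lines; this is our own simplified illustration with fixed $`n_f=5`$, not the four-loop evolution with flavor matching used below:

```python
import math

def alpha_s_one_loop(mu, alpha_mz=0.118, mz=91.1876, nf=5):
    """One-loop running of alpha_s from the Z-boson mass scale."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return alpha_mz / (1 + alpha_mz * b0 * math.log(mu**2 / mz**2))

for mu in (29.0, 91.1876, 200.0):     # GeV; 29 GeV is the PEP CM energy
    print(mu, round(alpha_s_one_loop(mu), 4))
```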
There are a number of processes in which $`\alpha _s^{(5)}(M_Z)`$ can be measured (see Refs. , for recent reviews). A reliable method to determine $`\alpha _s^{(5)}(M_Z)`$ is through the extraction of the fragmentation functions (FF’s) in the annihilation process
$$e^+e^{-}\to (\gamma ,Z)\to h+X,$$
(1)
which describes the inclusive production of a single charged hadron, $`h`$. Here, $`h`$ may either refer to a specific charged-hadron species, such as $`\pi ^\pm `$, $`K^\pm `$, or $`p/\overline{p}`$, or to the sum of all charged hadrons. The partonic cross sections pertinent to process (1) can entirely be calculated in perturbative QCD with no additional input, except for $`\alpha _s`$. They are known at next-to-leading order (NLO) and even at next-to-next-to-leading order . The subsequent transition of the partons into hadrons takes place at an energy scale of the order of 1 GeV and can, therefore, not be treated in perturbation theory. Instead, the hadronization of the partons is described by FF’s $`D_a^h(x,Q^2)`$. Their values correspond to the probability that the parton $`a`$, which is produced at short distance, of order $`1/Q`$, fragments into the hadron $`h`$ carrying the fraction $`x`$ of the momentum of $`a`$. In the case of process (1), $`Q`$ is typically of the order of the center-of-mass (CM) energy $`\sqrt{s}`$. Given their $`x`$ dependence at some scale $`Q_0`$, the evolution of the FF’s with $`Q`$ may be computed perturbatively from the timelike Altarelli-Parisi equations , which are presently known through NLO . This method to determine $`\alpha _s^{(5)}(M_Z)`$ is particularly clean in the sense that, unlike other methods, it is not plagued by uncertainties associated with hadronization corrections, jet algorithms, parton density functions (PDF’s), etc. We recall that, similarly to the scaling violations in the PDF’s, perturbative QCD only predicts the $`Q^2`$ dependence of the FF’s. Therefore, measurements at different CM energies are needed in order to extract values of $`\alpha _s^{(5)}(M_Z)`$. Furthermore, since the $`Q^2`$ evolution mixes the quark and gluon FF’s, it is essential to determine all FF’s individually.
In 1994/95, two of us, together with Binnewies, extracted $`\pi ^\pm `$ and $`K^\pm `$ FF’s through fits to PEP and partially preliminary LEP1 data and thus determined $`\alpha _s^{(5)}(M_Z)`$ to be 0.118 (0.122) at NLO (LO) (BKK). However, these analyses suffered from the lack of specific data on the fragmentation of tagged quarks and gluons to $`\pi ^\pm `$, $`K^\pm `$, and $`p/\overline{p}`$ hadrons. This drawback was cured in 1998 by the advent of a wealth of new data from the LEP1 and SLC experiments . The data partly come as light-, $`c`$-, and $`b`$-quark-enriched samples with identified final-state hadrons ($`\pi ^\pm `$, $`K^\pm `$, and $`p/\overline{p}`$) or as gluon-tagged three-jet samples without hadron identification . This new situation motivates us to update, refine, and extend the BKK analysis by generating new LO and NLO sets of $`\pi ^\pm `$, $`K^\pm `$, and $`p/\overline{p}`$ FF’s. By also including in our fits $`\pi ^\pm `$, $`K^\pm `$, and $`p/\overline{p}`$ data (without flavor separation) from PEP , with CM energy $`\sqrt{s}=29`$ GeV, we obtain a handle on the scaling violations in the FF’s, which allows us to extract LO and NLO values of $`\alpha _s^{(5)}(M_Z)`$. The latter data combines small statistical errors with fine binning in $`x`$ and is more constraining than other data from the pre-LEP1/SLC era.
The NLO formalism for extracting FF’s from $`e^+e^{-}`$ data was comprehensively described in Ref. . We work in the $`\overline{\mathrm{MS}}`$ renormalization and factorization scheme and choose the renormalization scale $`\mu `$ and the factorization scale $`M_f`$ to be $`\mu =M_f=\xi \sqrt{s}`$, except for gluon-tagged three-jet events, where we put $`\mu =M_f=2\xi E_{\mathrm{jet}}`$, with $`E_{\mathrm{jet}}`$ being the gluon jet energy in the CM frame. Here, the dimensionless parameter $`\xi `$ is introduced to determine the theoretical uncertainty in $`\alpha _s^{(5)}(M_Z)`$ from scale variations. As usual, we allow for variations between $`\xi =1/2`$ and 2 around the default value 1. For the actual fitting procedure, we use $`x`$ bins in the interval $`0.1\le x\le 1`$ and integrate the theoretical functions over the bin widths as is done in the experimental analyses. The restriction at small $`x`$ is introduced to exclude events in the nonperturbative region, where mass effects and nonperturbative intrinsic-transverse-momentum effects are important and the underlying formalism is insufficient. We parameterize the $`x`$ dependence of the FF’s at the starting scale $`Q_0`$ as
$$D_a^h(x,Q_0^2)=Nx^\alpha (1-x)^\beta .$$
(2)
We treat $`N`$, $`\alpha `$, and $`\beta `$ as independent fit parameters. In addition, we take the asymptotic scale parameter $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(5)}`$, appropriate for five quark flavors, as a free parameter. Thus, we have a total of 46 independent fit parameters. The quality of the fit is measured in terms of the $`\chi ^2`$ value per degree of freedom, $`\chi _{\mathrm{DF}}^2`$, for all selected data points. Using a multidimensional minimization algorithm , we search this 46-dimensional parameter space for the point at which the deviation of the theoretical prediction from the data becomes minimal.
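A minimal sketch of these fit ingredients (our own illustration with hypothetical parameter values, not the actual fitting code): the parameterization of Eq. (2) and its average over an $`x`$ bin, as used when comparing to binned data.

```python
import numpy as np
from scipy.integrate import quad

def D(x, N, alpha, beta):
    """Starting-scale FF parameterization of Eq. (2): N x^alpha (1-x)^beta."""
    return N * x**alpha * (1.0 - x)**beta

def bin_average(N, alpha, beta, x_lo, x_hi):
    """Average of D over one x bin, mimicking the bin integration in the fit."""
    val, _ = quad(D, x_lo, x_hi, args=(N, alpha, beta))
    return val / (x_hi - x_lo)

# hypothetical parameter values, for illustration only:
print(bin_average(0.5, -1.2, 1.5, 0.1, 0.2))
```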
The $`\chi _{\mathrm{DF}}^2`$ values achieved for the various data sets used in our LO and NLO fits may be seen from Table 1. Most of the $`\chi _{\mathrm{DF}}^2`$ values lie around unity or below, indicating that the fitted FF’s describe all data sets within their respective errors. In general, the $`\chi _{\mathrm{DF}}^2`$ values come out slightly in favor of the DELPHI data. The overall goodness of the NLO (LO) fit is given by $`\chi _{\mathrm{DF}}^2=0.98`$ (0.97). The goodness of our fit may also be judged from Figs. 1 and 2, where our LO and NLO fit results are compared with the ALEPH , DELPHI , OPAL , and SLD data. In Fig. 1, we study the differential cross section $`(1/\sigma _{\mathrm{tot}})d\sigma ^h/dx`$ of process (1) for $`\pi ^\pm `$, $`K^\pm `$, $`p/\overline{p}`$, and unidentified charged hadrons at $`\sqrt{s}=91.2`$ GeV, normalized to the total hadronic cross section $`\sigma _{\mathrm{tot}}`$, as a function of the scaled momentum $`x=2p_h/\sqrt{s}`$. As in Refs. , we assume that the sum of the $`\pi ^\pm `$, $`K^\pm `$, and $`p/\overline{p}`$ data exhaust the full charged-hadron data. We observe that, in all cases, the various data are mutually consistent with each other and are nicely described by the LO and NLO fits, which is also reflected in the relatively small $`\chi _{\mathrm{DF}}^2`$ values given in Table 1. The LO and NLO fits are almost indistinguishable in those regions of $`x`$, where the data have small errors. At large $`x`$, where the statistical errors are large, the LO and NLO results sometimes moderately deviate from each other. In Fig. 2, we compare the ALEPH and OPAL measurements of the gluon FF in gluon-tagged charged-hadron production, with $`E_{\mathrm{jet}}=26.2`$ and 40.1 GeV, respectively, with our LO and NLO fit results. The data are nicely fitted, with $`\chi _{\mathrm{DF}}^2`$ values of order unity, as may be seen from Table 1. By the same token, this implies that the data are mutually consistent.<sup>1</sup> (The new FF sets can be obtained from http://www.desy.de/~poetter/kkp.html.)
The purpose of this letter is to update and improve the determinations of $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(5)}`$ and $`\alpha _s^{(5)}(M_Z)`$ from the scaling violations in the FF’s. We obtain $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(5)}=88_{-31}^{+34}{}_{-23}^{+3}`$ MeV at LO and $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(5)}=213_{-73}^{+75}{}_{-29}^{+22}`$ MeV at NLO, where the first errors are experimental and the second ones are theoretical. The experimental errors are determined by varying $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(5)}`$ in such a way that the total $`\chi _{\mathrm{DF}}^2`$ value is increased by one unit if all the other fit parameters are kept fixed, while the theoretical errors are obtained by repeating the LO and NLO fits for the scale choices $`\xi =1/2`$ and 2. From the LO and NLO formulas for $`\alpha _s^{(n_f)}(\mu )`$ , we thus obtain
$`\alpha _s^{(5)}(M_Z)`$ $`=`$ $`0.1181_{-0.0069}^{+0.0058}{}_{-0.0049}^{+0.0006}\text{ (LO)},`$
$`\alpha _s^{(5)}(M_Z)`$ $`=`$ $`0.1170_{-0.0069}^{+0.0055}{}_{-0.0025}^{+0.0017}\text{ (NLO)},`$ (3)
respectively. Adding the maximum experimental and theoretical deviations from the central values in quadrature, we find $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(5)}=(88\pm 41)`$ MeV and $`\alpha _s^{(5)}(M_Z)=0.1181\pm 0.0085`$ at LO and $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(5)}=(213\pm 79)`$ MeV and $`\alpha _s^{(5)}(M_Z)=0.1170\pm 0.0073`$ at NLO. We observe that our LO and NLO values of $`\alpha _s^{(5)}(M_Z)`$ are quite consistent with each other, which indicates that our analysis is perturbatively stable. The fact that the respective values of $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(5)}`$ significantly differ is a well-known feature of the $`\overline{\mathrm{MS}}`$ definition of $`\alpha _s^{(n_f)}(\mu )`$ .
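As a numerical cross-check of the quoted LO numbers, the one-loop formula $`\alpha _s(\mu )=1/[b_0\mathrm{ln}(\mu ^2/\mathrm{\Lambda }^2)]`$ with $`b_0=(33-2n_f)/(12\pi )`$ can be evaluated directly; a minimal sketch, assuming numpy (the NLO case would require the two-loop formula, which we do not reproduce here):

```python
import numpy as np

def alpha_s_lo(mu, lam, nf=5):
    # one-loop (LO) running coupling; mu and lam in GeV
    b0 = (33.0 - 2.0 * nf) / (12.0 * np.pi)
    return 1.0 / (b0 * np.log(mu**2 / lam**2))

print(alpha_s_lo(91.2, 0.088))   # ~0.118, close to the quoted LO central value

def total_error(exp_errs, th_errs):
    # maximum experimental and theoretical deviations, added in quadrature
    return np.hypot(max(exp_errs), max(th_errs))

print(total_error((0.0058, 0.0069), (0.0006, 0.0049)))   # ~0.0085 (LO)
print(total_error((0.0055, 0.0069), (0.0017, 0.0025)))   # ~0.0073 (NLO)
```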
Our values of $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(5)}`$ and $`\alpha _s^{(5)}(M_Z)`$ perfectly agree with those presently quoted by the Particle Data Group (PDG) as world averages, $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(5)}=212\genfrac{}{}{0pt}{}{+25}{-23}`$ MeV and $`\alpha _s^{(5)}(M_Z)=0.1185\pm 0.002`$, respectively. Notice that, in contrast to our LO and NLO analyses, the PDG evaluates $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}^{(5)}`$ from $`\alpha _s^{(5)}(M_Z)`$ using the three-loop relationship . The PDG combines twelve different kinds of $`\alpha _s^{(5)}(M_Z)`$ measurements, including one from the scaling violations in the FF’s , by minimizing the total $`\chi ^2`$ value. The world average cited above is then estimated from the outcome by allowing for correlations between certain systematic errors. It is interesting to investigate how the world average of $`\alpha _s^{(5)}(M_Z)`$ is affected by our analysis. To this end, we first combine the twelve $`\alpha _s^{(5)}(M_Z)`$ measurements reported in Ref. to find $`\alpha _s^{(5)}(M_Z)=0.1181\pm 0.0014`$ with $`\chi ^2=3.74`$.<sup>2</sup><sup>2</sup>2This result slightly differs from the corresponding one found in Ref. . If we replace the value $`\alpha _s^{(5)}(M_Z)=0.125\pm 0.005\pm 0.008`$ resulting from previous FF analyses , which enters the PDG average, with our new NLO value, then we obtain $`\alpha _s^{(5)}(M_Z)=0.1180\pm 0.0014`$ with $`\chi ^2=3.21`$, i.e., the face value of the world average is essentially unchanged, while the overall agreement is appreciably improved. This is also evident from the comparison of Fig. 3, which summarizes our updated world average, with the corresponding Fig. 9.1 in Ref. . We observe that the central value of our new NLO result for $`\alpha _s^{(5)}(M_Z)`$ falls into the shaded band, which indicates the error of the world average, while in Fig. 9.1 of Ref. the corresponding central value exceeds the world average by 3.3 standard deviations of the latter, which is more than for any of the other eleven processes. Furthermore, our new NLO result has a somewhat smaller error (0.0073) than the corresponding result used by the PDG (0.009). If we take the point of view that our new NLO value of $`\alpha _s^{(5)}(M_Z)`$ should rather be combined with the result from the previous FF analyses before taking the world average, then the latter turns out to be $`\alpha _s^{(5)}(M_Z)=0.1181\pm 0.0014`$ with $`\chi ^2=3.29`$.
In summary, we presented an updated and improved determination of $`\alpha _s^{(5)}(M_Z)`$ from the LO and NLO analyses of inclusive light-hadron production in $`e^+e^{}`$ annihilation. Our strategy was to only include in our fits high-precision LEP1 and SLC data with both flavor separation and hadron identification (namely, light-, $`c`$-, and $`b`$-quark-enriched samples of $`\pi ^\pm `$, $`K^\pm `$, and $`p\overline{p}`$ data) , gluon-tagged three-jet samples with a fixed gluon-jet energy , and the $`\pi ^\pm `$, $`K^\pm `$, and $`p/\overline{p}`$ data sets from the pre-LEP1/SLC era with the highest statistics and the finest binning in $`x`$ . Our LO and NLO results for $`\alpha _s^{(5)}(M_Z)`$ are given in Eq. (3). They should be compared with the result from scaling violations in FF’s quoted in Ref. , $`0.125\pm 0.005\pm 0.008`$ . If we repeat the global analysis of Ref. with this result replaced by our new NLO value, then the world average (before taking into account estimated correlations between systematic errors) is changed from $`\alpha _s^{(5)}(M_Z)=0.1181\pm 0.0014`$ with $`\chi ^2=3.74`$ to $`\alpha _s^{(5)}(M_Z)=0.1180\pm 0.0014`$ with $`\chi ^2=3.21`$, i.e., the overall agreement is appreciably improved.
The II. Institut für Theoretische Physik is supported by the Bundesministerium für Bildung und Forschung under Contract No. 05 HT9GUA 3, and by the European Commission through the Research Training Network Quantum Chromodynamics and the Deep Structure of Elementary Particles under Contract No. ERBFMRXCT980194.
# Quantum mechanical probabilities and general probabilistic constraints for Einstein–Podolsky–Rosen–Bohm experiments
## 1 Introduction
It is a remarkable feature of elementary, nonrelativistic quantum mechanics that it does not conflict with the theory of special relativity for any practical purpose. An important example of this “peaceful coexistence” between quantum mechanics and special relativity is provided by the fact that one cannot exploit the quantum mechanical correlations taking place between spatially separated parts of a composite quantum system to convey classical messages faster than light , in spite of the fact that such quantum correlations can yield a violation of Bell’s inequality . Indeed, the quantum mechanical probabilities behind those correlations are found to satisfy the so-called causal communication constraint (also referred to in the literature as the condition of “parameter independence” , “simple locality” , “signal locality” , or “physical locality” ) which, roughly speaking, stipulates that the probability of a particular measurement outcome on any one part of the system should be independent of which sort of measurement was performed on the other parts. This requirement prevents the acausal exchange of classical information between them. Consider an experimental set-up of the Einstein-Podolsky-Rosen-Bohm type designed to test the Clauser-Horne-Shimony-Holt (CHSH) version of Bell’s inequality. The CHSH inequality concerns a statistical ensemble of identically prepared systems, each of which consists of two parts (call them, say, $`A`$ and $`B`$) far away from one another. Let $`a_1`$, $`a_2`$, $`b_1`$, and $`b_2`$ denote two-valued ($`\pm 1`$) physical variables, with $`a_1`$ and $`a_2`$ referring to measurements on part $`A`$ of the system by a local observer, and $`b_1`$ and $`b_2`$ referring to local measurements on part $`B`$. For each of the systems in the ensemble a measurement of either $`a_1`$ or $`a_2`$ ($`b_1`$ or $`b_2`$) is performed on part $`A`$ ($`B`$). A complete experimental run on the ensemble of systems will yield the set of numerical values $`p(a_j=m,b_k=n)`$, where $`j,k=1\text{ or }2`$, and $`m,n=\pm 1`$, with $`p(a_j=m,b_k=n)`$ being the probability of getting the outcomes $`a_j=m`$ and $`b_k=n`$ in a joint measurement of the variables $`a_j`$ and $`b_k`$. Each of these probabilities fulfills the property
$$0\le p(a_j=m,b_k=n)\le 1.$$
(1)
Furthermore, the various observable probabilities are assumed to satisfy the normalization condition
$$\underset{m,n=\pm 1}{\sum }p(a_j=m,b_k=n)=1,$$
(2)
for any $`j,k=1\text{ or }2`$. The causal communication constraint, on the other hand, requires that
$`p(a_j=m)={\displaystyle \underset{n=\pm 1}{\sum }}p(a_j=m,b_1=n)={\displaystyle \underset{n=\pm 1}{\sum }}p(a_j=m,b_2=n),`$ (3a)
$`p(b_k=n)={\displaystyle \underset{m=\pm 1}{\sum }}p(a_1=m,b_k=n)={\displaystyle \underset{m=\pm 1}{\sum }}p(a_2=m,b_k=n).`$ (3b)
Condition (3a) states that the probability of obtaining $`a_j=m`$ is independent of which measurement ($`b_1`$ or $`b_2`$) is performed on part $`B`$. Similarly, condition (3b) states that the probability for $`b_k=n`$ is independent of which measurement ($`a_1`$ or $`a_2`$) is performed on part $`A`$.
Let us define the correlation function $`c(a_j,b_k)`$ between the variables $`a_j`$ and $`b_k`$ to be the expectation value of the product $`a_jb_k`$. In terms of the measurable probabilities $`p(a_j=m,b_k=n)`$, this quantity can be expressed as
$`c(a_j,b_k)=`$ $`p(a_j=1,b_k=1)+p(a_j=-1,b_k=-1)`$
$`-p(a_j=1,b_k=-1)-p(a_j=-1,b_k=1).`$ (4)
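In code, Eq. (4) is just a signed sum over the four joint probabilities; a minimal sketch (the singlet-like example used to exercise it is an illustrative assumption, not data from any experiment discussed here):

```python
import math

def correlation(p):
    # expectation value of a_j * b_k from the joint probabilities,
    # p[(m, n)] = p(a_j = m, b_k = n) with m, n = +/-1, as in Eq. (4)
    return sum(m * n * p[(m, n)] for m in (1, -1) for n in (1, -1))

# example: singlet-like statistics at relative analyzer angle theta,
# p(m, n) = (1 - m n cos(theta)) / 4, giving c = -cos(theta)
theta = math.pi / 3
p = {(m, n): (1 - m * n * math.cos(theta)) / 4
     for m in (1, -1) for n in (1, -1)}
print(correlation(p))   # -0.5
```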
The CHSH inequality
$$-2\le c(a_1,b_1)+c(a_1,b_2)+c(a_2,b_1)-c(a_2,b_2)\le 2,$$
(5)
holds in any theory of local hidden variables, and restricts the maximum absolute value of the sum of correlations, $`\mathrm{\Delta }\equiv c(a_1,b_1)+c(a_1,b_2)+c(a_2,b_1)-c(a_2,b_2)`$, to 2. We can write the CHSH inequality in a compact notation as $`\left|\mathrm{\Delta }_{\text{LHV}}\right|\le 2`$. On the other hand, it is well known since the work of Cirel’son that the quantum prediction of the CHSH sum of correlations is bounded in absolute value by $`2\sqrt{2}`$, that is, $`\left|\mathrm{\Delta }_{\text{QM}}\right|\le 2\sqrt{2}`$. This latter bound, however, still lies well below the maximum absolute theoretical value, $`4`$, allowed by a general probabilistic theory, $`\left|\mathrm{\Delta }_{\text{GP}}\right|\le 4`$. This absolute probabilistic limit is attained whenever $`c(a_1,b_1)=c(a_1,b_2)=c(a_2,b_1)=-c(a_2,b_2)=\pm 1`$. Popescu and Rohrlich addressed the question of whether relativistic causality restricts the maximum quantum prediction of the CHSH sum of correlations to $`2\sqrt{2}`$ instead of 4. As Popescu and Rohrlich put it , “Rather than ask why quantum correlations violate the CHSH inequality, we might ask why they do not violate it more.” They found that relativistic causality does not by itself constrain the maximum CHSH sum of quantum correlations to $`2\sqrt{2}`$. Indeed, they gave a set of probabilities which satisfies the causal communication constraint and which provides the maximum level of violation, 4.
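The Popescu-Rohrlich set of probabilities is easy to exhibit explicitly; a minimal sketch in Python (standard library only) that checks both the causal communication constraint and the maximal CHSH sum:

```python
import itertools

# Popescu-Rohrlich box: perfect correlation for settings (1,1), (1,2), (2,1),
# perfect anticorrelation for (2,2); each allowed outcome pair has p = 1/2.
def pr_box(j, k, m, n):
    target = -1 if (j, k) == (2, 2) else +1
    return 0.5 if m * n == target else 0.0

# no-signaling check: p(a_j = m) must not depend on B's setting k
for j, m in itertools.product((1, 2), (1, -1)):
    marginals = [sum(pr_box(j, k, m, n) for n in (1, -1)) for k in (1, 2)]
    assert marginals[0] == marginals[1] == 0.5

def corr(j, k):
    return sum(m * n * pr_box(j, k, m, n) for m in (1, -1) for n in (1, -1))

delta = corr(1, 1) + corr(1, 2) + corr(2, 1) - corr(2, 2)
print(delta)   # 4.0, the absolute probabilistic maximum
```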
In this paper we will work out the basic relationships which develop between the joint probabilities involved in the CHSH inequality when these are required to satisfy the normalization condition and the causal communication constraint. After this is done, we will focus on the particular case in which three specific probabilities are equal to zero. This leads us directly to a consideration of Hardy’s nonlocality theorem , and enables us to deduce in a very general and economical way the constraints that probability theory and causality impose on it. We shall see that quantum mechanics imposes further restrictions of its own beyond those required by the causal communication constraint. Furthermore, following a suggestion made by Kwiat and Hardy at the end of their recent paper in Ref. 17, we will explore the middle ground between the limits imposed by quantum mechanics and relativistic causality within the context of Hardy’s theorem. We note that, interestingly, all this is done within the general framework of the CHSH inequality. This allows a unified treatment of both the CHSH inequality and Hardy’s nonlocality theorem, and avoids dealing with two different types of inequalities (for example the CHSH inequality and the Clauser-Horne (CH) inequality , had Hardy’s theorem been cast in this latter form of inequality). Moreover, this unification is convenient because, as shown by Mermin , the CHSH and CH inequalities need not be equivalent if the causal communication constraint, Eqs. (3), does not hold.
## 2 General constraints on the joint probabilities in the CHSH inequality
The CHSH inequality (5) involves sixteen joint probabilities $`p(a_j=m,b_k=n)`$, although, as we shall presently see, the constraints in Eqs. (2) and (3) reduce the number of independent probabilities to eight. In order to abbreviate the notation, from now on the various probabilities $`p(a_j=m,b_k=n)`$ will be referred to by the respective shorthands $`p1,p2,\mathrm{},p16`$, according to the following convention
$`p1`$ $`\equiv p(a_1=1,b_1=1),`$ $`p2`$ $`\equiv p(a_1=1,b_1=-1),`$
$`p3`$ $`\equiv p(a_1=-1,b_1=1),`$ $`p4`$ $`\equiv p(a_1=-1,b_1=-1),`$
$`p5`$ $`\equiv p(a_1=1,b_2=1),`$ $`p6`$ $`\equiv p(a_1=1,b_2=-1),`$
$`p7`$ $`\equiv p(a_1=-1,b_2=1),`$ $`p8`$ $`\equiv p(a_1=-1,b_2=-1),`$ (6)
$`p9`$ $`\equiv p(a_2=1,b_1=1),`$ $`p10`$ $`\equiv p(a_2=1,b_1=-1),`$
$`p11`$ $`\equiv p(a_2=-1,b_1=1),`$ $`p12`$ $`\equiv p(a_2=-1,b_1=-1),`$
$`p13`$ $`\equiv p(a_2=1,b_2=1),`$ $`p14`$ $`\equiv p(a_2=1,b_2=-1),`$
$`p15`$ $`\equiv p(a_2=-1,b_2=1),`$ $`p16`$ $`\equiv p(a_2=-1,b_2=-1).`$
Now we can write down explicitly the constraints imposed by the requirements of normalization (cf. Eq. (2)) and causality (cf. Eqs. (3a)-(3b)) as follows
$`p1+p2+p3+p4=1,`$
$`p5+p6+p7+p8=1,`$
$`p9+p10+p11+p12=1,`$
$`p13+p14+p15+p16=1,`$
$`p1+p2-p5-p6=0,`$
$`p3+p4-p7-p8=0,`$ (7)
$`p9+p10-p13-p14=0,`$
$`p11+p12-p15-p16=0,`$
$`p1+p3-p9-p11=0,`$
$`p2+p4-p10-p12=0,`$
$`p5+p7-p13-p15=0,`$
$`p6+p8-p14-p16=0.`$
Relations (7) constitute a system of 12 linear equations with 16 unknowns. It can be shown that the rank of the $`12\times 16`$ matrix of the coefficients for this system is equal to 8, so that the set of Eqs. (7) determines 8 among the 16 probabilities $`p1,p2,\mathrm{},p16`$. So, for instance, we can get the following convenient solution of system (7) for which the set of variables $`𝒱=\{p2,p3,p6,p7,p10,p11,p13,p16\}`$ is given in terms of the remaining set of variables $`𝒰=\{p1,p4,p5,p8,p9,p12,p14,p15\}`$,
$`p2`$ $`={\displaystyle \frac{1}{2}}(1-p1-p4+p5-p8-p9+p12+p14-p15),`$ (8a)
$`p3`$ $`={\displaystyle \frac{1}{2}}(1-p1-p4-p5+p8+p9-p12-p14+p15),`$ (8b)
$`p6`$ $`={\displaystyle \frac{1}{2}}(1+p1-p4-p5-p8-p9+p12+p14-p15),`$ (8c)
$`p7`$ $`={\displaystyle \frac{1}{2}}(1-p1+p4-p5-p8+p9-p12-p14+p15),`$ (8d)
$`p10`$ $`={\displaystyle \frac{1}{2}}(1-p1+p4+p5-p8-p9-p12+p14-p15),`$ (8e)
$`p11`$ $`={\displaystyle \frac{1}{2}}(1+p1-p4-p5+p8-p9-p12-p14+p15),`$ (8f)
$`p13`$ $`={\displaystyle \frac{1}{2}}(1-p1+p4+p5-p8+p9-p12-p14-p15),`$ (8g)
$`p16`$ $`={\displaystyle \frac{1}{2}}(1+p1-p4-p5+p8-p9+p12-p14-p15).`$ (8h)
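As a quick numerical cross-check of the rank statement above, one can build the coefficient matrix of system (7) explicitly; a minimal sketch, assuming numpy:

```python
import numpy as np

# Coefficient matrix of system (7) acting on (p1, ..., p16);
# rows 0-3 are the normalization sums, rows 4-11 the causality conditions.
A = np.zeros((12, 16))
for r in range(4):
    A[r, 4 * r:4 * r + 4] = 1                      # Eq. (2)
causality = [
    ((0, 1), (4, 5)), ((2, 3), (6, 7)),            # p(a_1 = +/-1) setting-free
    ((8, 9), (12, 13)), ((10, 11), (14, 15)),      # p(a_2 = +/-1) setting-free
    ((0, 2), (8, 10)), ((1, 3), (9, 11)),          # p(b_1 = +/-1) setting-free
    ((4, 6), (12, 14)), ((5, 7), (13, 15)),        # p(b_2 = +/-1) setting-free
]
for r, (plus, minus) in enumerate(causality, start=4):
    A[r, list(plus)], A[r, list(minus)] = 1, -1

print(np.linalg.matrix_rank(A))   # 8: eight probabilities are determined
```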
The basic relationships (8a)-(8h) between joint probabilities arise as a direct consequence of the fulfillment of the normalization condition and the causal communication constraint. It should be noted, however, that the conditions in Eq. (1) impose many additional constraints of their own. For example, the non-negativity of $`p13`$ in Eq. (8g) requires that
$$1+p4+p5+p9\ge p1+p8+p12+p14+p15,$$
(9)
where, according to Eq. (1), all the probabilities $`p\text{j}`$ in the set $`𝒰`$ are assumed to fulfill the condition $`0\le p\text{j}\le 1`$.<sup>1</sup> Similarly, other constraints like that of Eq. (9) arise when we demand that each of the probabilities $`p\text{k}`$ in the set $`𝒱`$ determined by Eqs. (8a)-(8h) fulfills the property $`0\le p\text{k}\le 1`$. Many more constraints can be established from Eqs. (8a)-(8h) by demanding the non-negativity of the sum of any combination of probabilities pertaining to the set $`𝒱`$. For example, the non-negativity of the sum $`p2+p7`$ requires that
$$p1+p8\le 1.$$
(10)
Analogously, the non-negativity of the sum $`p2+p3+p6+p7+p10+p11+p13+p16`$ requires that
$$p1+p4+p5+p8+p9+p12+p14+p15\le 4,$$
(11)
and so on.
It is a well-established theoretical fact that the quantum mechanical probabilities satisfy the causal communication constraint. (A simple demonstration of the fact that the parameter independence conditions (3a)-(3b) are entailed by the quantum mechanical formalism is given in the Appendix, and more general demonstrations can be found, for example, in Ref. 1.<sup>2</sup>) Therefore the quantum predictions for the probabilities $`p1,p2,\mathrm{},p16`$, whichever they might be, should satisfy each of the constraints in Eqs. (8), as well as those induced by the non-negativity of single joint probabilities and the non-negativity of the sums of joint probabilities, such as the constraints in Eqs. (9)-(11). Quantum mechanics, however, is a rather peculiar statistical theory, and it places additional restrictions on the sets of probabilities it can produce, beyond those imposed by more general considerations like causality. Let us consider the CHSH sum of correlations, $`\mathrm{\Delta }=c(a_1,b_1)+c(a_1,b_2)+c(a_2,b_1)-c(a_2,b_2)`$. Taking into account the normalization condition in Eq. (2), it is simple algebra to verify that this quantity can be equivalently expressed as
$$\mathrm{\Delta }=2(p1+p4+p5+p8+p9+p12+p14+p15-2).$$
(12)
From the inequality (11), we can see that the maximum value for $`\mathrm{\Delta }`$ is 4, and that this value will occur whenever $`p1+p4+p5+p8+p9+p12+p14+p15=4`$. Actually, there are two sets of values for the probabilities $`p1,p2,\mathrm{},p16`$ which provide the maximum absolute probabilistic limit for $`\mathrm{\Delta }_{\text{GP}}`$, namely 4, and which, at the same time, satisfy the causality constraints in Eqs. (8a)-(8h). The first set of values is, $`p\text{j}=1/2`$ for all $`p\text{j}\in 𝒰`$, and $`p\text{k}=0`$ for all $`p\text{k}\in 𝒱`$; the second set is, $`p\text{j}=0`$ for all $`p\text{j}\in 𝒰`$, and $`p\text{k}=1/2`$ for all $`p\text{k}\in 𝒱`$. As we have said, the quantum mechanical probabilities obey both the inequality (11) and the constraints in Eqs. (8). However, such probabilities do not saturate the inequality (11). In fact, it can be shown that the maximum quantum prediction for the sum $`p1+p4+p5+p8+p9+p12+p14+p15`$ amounts to $`2+\sqrt{2}`$. There exist two sets of quantum mechanical values for the probabilities $`p1,p2,\mathrm{},p16`$ which provide the maximum absolute limit for $`\mathrm{\Delta }_{\text{QM}}`$, namely $`2\sqrt{2}`$, and which, at the same time, are consistent with the conditions (8). The first set of values is, $`p\text{j}=\frac{1}{8}(2+\sqrt{2})`$ for all $`p\text{j}\in 𝒰`$, and $`p\text{k}=\frac{1}{2}-\frac{1}{8}(2+\sqrt{2})`$ for all $`p\text{k}\in 𝒱`$; the second set is, $`p\text{j}=\frac{1}{2}-\frac{1}{8}(2+\sqrt{2})`$ for all $`p\text{j}\in 𝒰`$, and $`p\text{k}=\frac{1}{8}(2+\sqrt{2})`$ for all $`p\text{k}\in 𝒱`$. Of course, as was already noted, the probabilistic limit $`|\mathrm{\Delta }_{\text{GP}}|=4`$ entails that $`c(a_1,b_1)=c(a_1,b_2)=c(a_2,b_1)=-c(a_2,b_2)=\pm 1`$. In quantum mechanics the situation is radically different. In fact, it can be shown that whenever we have the quantum predictions $`c(a_1,b_1)=c(a_1,b_2)=c(a_2,b_1)=\pm 1`$, then necessarily we must have $`c(a_2,b_2)=\pm 1`$ as well. In terms of joint probabilities this means that, for example, whenever quantum mechanics predicts that $`p1=p4=p5=p8=p9=p12=1/2`$, then necessarily it also predicts that $`p14=p15=0`$. It will be noted, incidentally, that this is the reason why it is not possible to construct a nonlocality argument of the Greenberger-Horne-Zeilinger type for two spin-$`\frac{1}{2}`$ particles .
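These two quantum mechanical sets are easy to check numerically; a minimal sketch, assuming numpy:

```python
import numpy as np

u = (2 + np.sqrt(2)) / 8   # common value on U = {p1,p4,p5,p8,p9,p12,p14,p15}
v = 0.5 - u                # common value on the complementary set V

print(2 * u + 2 * v)       # 1.0: each normalization row has two U and two V entries
print(2 * (8 * u - 2))     # Delta from Eq. (12): 2.8284...
print(2 * np.sqrt(2))      # the Cirel'son bound, for comparison
```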
## 3 Focusing on the case of Hardy’s nonlocality theorem
Let us now look at one of the constraints in Eqs. (8), for example, that in Eq. (8g). As, by assumption, the probabilities $`p1,p8,p12,p14`$, and $`p15`$ are non-negative, the following inequality must hold
$$2p13-1\le p4+p5+p9.$$
(13)
This inequality is important for what follows because the four involved probabilities $`p4,p5,p9,`$ and $`p13`$ can be used to construct an argument for nonlocality of the type invented by Hardy .<sup>3</sup> Actually, inequality (13) was already derived by Mermin in Ref. 8 (see, in particular, Eq. (9) and Appendix A of Ref. 8). The derivation presented in this paper, however, has been made within a more general framework. Indeed, the inequality (13) arises here as an immediate by-product of the basic constraint, Eq. (8g), $`2p13-1=p4+p5+p9-p1-p8-p12-p14-p15`$. In this respect, from a pedagogical point of view, our derivation has the advantage of being more straightforward. Moreover, thanks to this general framework (the “CHSH framework”), we have been able to quickly identify (see note 3) the eight sets of probabilities which can lead to a Hardy-type nonlocality contradiction, along with the constraint of the type (13) that causality imposes on each of these sets. In what follows we pick out the set of probabilities appearing in Eq. (13), $`\{p4,p5,p9,p13\}`$, although, naturally, any one of the remaining sets could be equally well considered.<sup>4</sup>
Hardy’s nonlocality argument applies to the case in which the three probabilities $`p4`$, $`p5`$, and $`p9`$ vanish while the probability $`p13`$ does not. For this case the inequality (13) reduces to $`p13\le 1/2`$. Thus, the non-negativity condition, $`0\le p13`$, combined with the causality constraint, $`p13\le 1/2`$, determines the range of values
$$0\le p13\le 1/2,$$
(14)
within which the probability $`p13`$ can vary without violating the causal communication constraint for the case that $`p4=p5=p9=0`$. It is easy to calculate the absolute value taken by the quantity $`\mathrm{\Delta }`$ in the probabilistic limiting case in which $`p13=1/2`$ and $`p4=p5=p9=0`$. Indeed, it is clear from Eq. (8g) that, whenever we have $`p13=1/2`$ and $`p4=p5=p9=0`$, then necessarily $`p1=p8=p12=p14=p15=0`$. Hence, since $`p\text{j}=0`$ for all $`p\text{j}\in 𝒰`$, Eq. (12) tells us that $`|\mathrm{\Delta }_{\text{GP}}|=4`$. Of course, from Eqs. (8a)-(8h), we also have that $`p\text{k}=1/2`$ for all $`p\text{k}\in 𝒱`$. The situation for quantum mechanics is, once again, quite different. Although quantum mechanics does satisfy the constraint in Eq. (14) for the case in which $`p4=p5=p9=0`$, the quantum prediction for $`p13`$ does not saturate the upper bound of the inequality (14). Indeed, it can be shown that the maximum attainable value for $`p13`$ predicted by quantum mechanics when $`p4=p5=p9=0`$ is $`\tau ^{-5}`$, with $`\tau `$ being the golden mean, $`\frac{1}{2}(1+\sqrt{5})`$. This gives a maximum value of $`p13=0.09017`$.
At this point we derive an immediate but important relationship between the absolute value of the quantity $`\mathrm{\Delta }`$ and the probability $`p13`$, which applies in the considered case in which $`p4=p5=p9=0`$. So, from Eq. (12), we have that, for this case, $`\mathrm{\Delta }=2(p1+p8+p12+p14+p15-2)`$. On the other hand, when we put $`p4=p5=p9=0`$ in Eq. (8g), we deduce that $`p1+p8+p12+p14+p15=1-2p13`$. Hence we obtain
$$|\mathrm{\Delta }|=2+4p13.$$
(15)
It should be stressed that the identity (15) arises as a direct consequence of the causality constraint in Eq. (8g). Therefore, relation (15) should be fulfilled by the quantum mechanical probabilities if these are to satisfy the causal communication constraint, Eqs. (3). It can be shown that, in fact, the quantum theoretic predictions do satisfy the identity (15) for the case that the probabilities $`p4`$, $`p5`$, and $`p9`$ are made to vanish. Furthermore, it is worth noting that the relation (15) gives a new insight into the rationale of the causality constraint in Eq. (14). Indeed, if $`p13`$ were allowed to be greater than 1/2 when $`p4=p5=p9=0`$, then the CHSH sum of correlations on the left side of (15) would be greater than 4, which is impossible by the very definition of $`\mathrm{\Delta }`$. A straightforward implication of Eq. (15) is that $`|\mathrm{\Delta }|>2`$ whenever $`p4=p5=p9=0`$ and $`p13>0`$, and that $`|\mathrm{\Delta }|=2`$ whenever $`p4=p5=p9=p13=0`$. On the other hand, from the results expounded in this and in the preceding paragraph, it is evident that relativistic causality does not by itself restrict the maximum CHSH sum of quantum correlations to $`2+4\tau ^{-5}`$ for the case in which $`p4=p5=p9=0`$, since, as we have seen, we can indeed have a general probabilistic situation in which $`p4=p5=p9=0`$ and $`|\mathrm{\Delta }_{\text{GP}}|=4`$. Of course, from Eq. (15), this will happen whenever $`p13=1/2`$. We also note that, for the special case in which the state of the composite system is described by a maximally entangled state,<sup>5</sup> quantum mechanics predicts that $`p13=0`$ whenever we have $`p4=p5=p9=0`$. Thus the maximally entangled state cannot be used to exhibit Hardy-type nonlocality, in spite of the fact that this class of states yields the maximum quantum violation, $`\left|\mathrm{\Delta }_{\text{QM}}\right|=2\sqrt{2}`$, of the CHSH inequality .
The quantum mechanical restriction (see Eq. (15)), $`|\mathrm{\Delta }_{\text{QM}}|\le 2+4\tau ^{-5}=2.36068`$, which applies to the case where $`p4=p5=p9=0`$, is more stringent than the restriction $`|\mathrm{\Delta }_{\text{QM}}|\le 2\sqrt{2}=2.82842`$ obtained when no a priori conditions (other than those in Eqs. (1)-(3)) are imposed on the probabilities $`p1,p2,\mathrm{},p16`$. This is the reason why Hardy’s theorem does not lead to a more definitive experimental test of quantum nonlocality than the ones already performed. However, it should be appreciated that, for the case in which $`p4=p5=p9=0`$, quantum mechanics predicts an amount of violation of the CHSH inequality, $`|\mathrm{\Delta }_{\text{LHV}}|\le 2`$, which is comparatively larger than that predicted for the CH type inequality , $`p13\le p4+p5+p9`$. Indeed, for the considered case, and in accordance with quantum mechanics, the quantity $`\mathrm{\Delta }`$ can reach a value as large as $`|\mathrm{\Delta }_{\text{QM}}|=2.36068`$, so that the maximum violation of the CHSH inequality predicted by quantum mechanics is $`2.36068>2`$. Normalizing this latter inequality to 2, we obtain $`(2.36068-2)/2>0`$, or $`0.18034>0`$.<sup>6</sup> On the other hand, the corresponding quantum violation of the CH inequality is $`0.09017>0`$. It is therefore concluded that experiments based on the CHSH inequality should give a more conclusive, clear-cut experimental verification of Hardy’s nonlocality than the ones based on the CH inequality, provided that the magnitude of the experimental error is the same for both kinds of experiments.
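The numerical values quoted here follow directly from the golden mean; a minimal check, assuming numpy:

```python
import numpy as np

tau = (1 + np.sqrt(5)) / 2       # the golden mean
p13 = tau ** -5
print(p13)                       # 0.09016994..., the quantum maximum
print(2 + 4 * p13)               # 2.36068..., |Delta_QM| via Eq. (15)
print((2 + 4 * p13 - 2) / 2)     # 0.18034..., the normalized CHSH violation
```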
We conclude by briefly examining the middle ground, $`2.36068\le \mathrm{\Delta }\le 4`$, between the upper limits imposed by quantum mechanics and relativistic causality within the context of Hardy’s nonlocality theorem. To this end, we write down the causality constraint in Eq. (8g) that obtains for the case in which $`p4=p5=p9=0`$,
$$p1+p8+p12+p14+p15=1-2p13.$$
(16)
Since quantum mechanics requires that $`0\le p13\le 0.09017`$ when $`p4=p5=p9=0`$, then the quantum prediction for the sum of probabilities on the left-hand side of (16) is constrained to obey the inequality
$$0.81966\le \mathrm{\Sigma }_{\text{QM}}\le 1,$$
(17)
where the symbol $`\mathrm{\Sigma }`$ stands for the sum $`p1+p8+p12+p14+p15`$. This inequality translates into the following one, $`2\le |\mathrm{\Delta }_{\text{QM}}|\le 2.36068`$, with $`|\mathrm{\Delta }_{\text{QM}}|`$ reaching its maximum value $`2.36068`$ whenever the sum $`\mathrm{\Sigma }`$ equals $`0.81966`$. The quantum mechanical probabilities giving this maximum value are $`p1=\tau ^{-3}`$ and $`p8=p12=p14=p15=\tau ^{-4}`$. On the other hand, for a general probabilistic theory, and for the case that $`p4=p5=p9=0`$, the probability $`p13`$ is required to satisfy the less stringent condition $`0\le p13\le 1/2`$, and thus we now have the inequality
$$0\le \mathrm{\Sigma }_{\text{GP}}\le 1,$$
(18)
which translates into the inequality $`2\le |\mathrm{\Delta }_{\text{GP}}|\le 4`$. The transition from the quantum domain to the general probabilistic domain happens when the value of $`\mathrm{\Sigma }`$ drops below the threshold $`0.81966`$. Thus the lower bound of the inequality (17) provides a tool to discriminate between quantum mechanics and general probabilistic theories. Indeed, if the observable quantity $`\mathrm{\Sigma }`$ were found experimentally to lie within the quantum mechanically forbidden interval, $`0\le \mathrm{\Sigma }<0.81966`$, for a situation in which the probabilities $`p4`$, $`p5`$, and $`p9`$ have been made to vanish, then quantum mechanics would prove wrong. Of course we do not expect any violation of quantum mechanics to be found. It is to be emphasized, however, that a violation of the lower bound of the inequality (17) for the case where $`p4=p5=p9=0`$ is not at all forbidden by the general requirement of relativistic causality. A survey of what is known about correlations in physical systems, along with additional restrictions on the kinds of correlations which are allowed by quantum mechanics, can be found in Ref. 4.
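The threshold of Eq. (17) and its relation to Eqs. (15) and (16) can be verified in a few lines; a minimal sketch, assuming numpy:

```python
import numpy as np

tau = (1 + np.sqrt(5)) / 2
# quantum probabilities saturating Hardy's case (p4 = p5 = p9 = 0)
p1, p8 = tau ** -3, tau ** -4      # and p12 = p14 = p15 = tau^-4 as well
sigma = p1 + 4 * p8
print(sigma)                       # 0.81966..., the threshold of Eq. (17)
print(1 - 2 * tau ** -5)           # same number, via the causality relation (16)
print(2 + 2 * (1 - sigma))         # |Delta_QM| = 2.36068..., via Eq. (15)
```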
## 4 Conclusion and summary
A final remark is in order about the general nature of the extra constraints added by quantum mechanics, as compared with those entailed by general probabilistic theories. From the preceding paragraphs it should be clear that, for the considered CHSH and CH tests of nonlocality, the additional constraints implied by quantum mechanics essentially entail a weakening of the correlations it can produce. So, for example, as we have seen, the quantum mechanical probabilities do not saturate the inequality in Eq. (11), so that the quantum prediction of the CHSH sum of correlations, Eq. (12), is bounded in absolute value by $`2\sqrt{2}`$ (Cirel’son limit), instead of $`4`$. Moreover, this weakening is not at all determined by the requirement of relativistic causality since it is theoretically possible to have (hypothetical) stronger-than-quantum correlations preserving relativistic causality. It should be noticed, however, that, generally speaking, the constraints added by quantum mechanics do not prevent a given joint probability from attaining a value of unity. Indeed, if the system is prepared in a suitable product state, we can always make a given probability, say $`p1`$, equal to unity, while still having the constraint $`|\mathrm{\Delta }_{\text{QM}}|\le 2\sqrt{2}`$. (In fact, for product states, we have the more stringent bound $`|\mathrm{\Delta }_{\text{QM}}|\le 2`$.)
In summary, in the present paper we have determined a number of basic restrictions (cf. Eqs. (8)) on the joint probabilities involved in an experiment of the Einstein-Podolsky-Rosen-Bohm type, which develop when these are required to satisfy the normalization condition and the causal communication constraint. These restrictions are, therefore, rather general and should be fulfilled by any physical theory consistent with relativistic causality. Further constraints arise when the sums of joint probabilities are required to satisfy the non-negativity condition. We have also considered the conceptually important case in which three specific probabilities are set to zero. This allows a formulation of Hardy’s nonlocality theorem within the framework of the CHSH inequality. For the abovementioned case, we have obtained the relevant constraints imposed by a general probabilistic theory, on the one hand, and by quantum mechanics, on the other hand, and we have derived a simple inequality discriminating between the two kinds of theories.
Acknowledgments — The author wishes to thank the anonymous referees for their valuable comments which led to an improvement of an earlier version of this paper.
Appendix
To show how the parameter independence condition embodied in the second equality of Eq. (3a) arises as a consequence of the formalism of quantum mechanics, we introduce the quantum mechanical operators $`\widehat{a}_1`$, $`\widehat{a}_2`$, $`\widehat{b}_1`$, and $`\widehat{b}_2`$, which are defined through the eigenvalue equations
$`\widehat{a}_j|m;\widehat{a}_j\rangle `$ $`=m|m;\widehat{a}_j\rangle ,`$
$`\widehat{b}_k|n;\widehat{b}_k\rangle `$ $`=n|n;\widehat{b}_k\rangle ,`$ (A1)
where $`j,k=1\text{ or }2`$, and $`m,n=\pm 1`$. The eigenvectors of $`\widehat{a}_j`$ satisfy the orthogonality relation $`\langle +;\widehat{a}_j|-;\widehat{a}_j\rangle =0`$ and, furthermore, they are normalized to length 1, that is, $`\langle +;\widehat{a}_j|+;\widehat{a}_j\rangle =\langle -;\widehat{a}_j|-;\widehat{a}_j\rangle =1`$, with similar relations holding for the eigenvectors of $`\widehat{b}_k`$. The operators $`\widehat{a}_j`$ and $`\widehat{b}_k`$ correspond to the measurement of the variables $`a_j`$ and $`b_k`$, respectively. Then, in the simplest (idealized) situation in which the quantum state describing the composite system is the pure state $`|\psi \rangle `$, the probability of getting the outcomes $`a_j=m`$ and $`b_k=n`$ in a joint measurement of $`\widehat{a}_j`$ and $`\widehat{b}_k`$ is
$$p(a_j=m,b_k=n)=\langle \psi |\widehat{p}_m(\widehat{a}_j)\widehat{p}_n(\widehat{b}_k)|\psi \rangle ,$$
(A2)
where $`\widehat{p}_m(\widehat{a}_j)`$ and $`\widehat{p}_n(\widehat{b}_k)`$ are the projection operators $`\widehat{p}_m(\widehat{a}_j)=|m;\widehat{a}_j\rangle \langle m;\widehat{a}_j|`$ and $`\widehat{p}_n(\widehat{b}_k)=|n;\widehat{b}_k\rangle \langle n;\widehat{b}_k|`$. We note that the operators $`\widehat{p}_m(\widehat{a}_j)`$ and $`\widehat{p}_n(\widehat{b}_k)`$ are assumed to be compatible (that is, mutually commuting) in the case that the space-time events corresponding to the measurement of $`\widehat{a}_j`$ and $`\widehat{b}_k`$ are spacelike separated, so that, for such a case, the product $`\widehat{p}_m(\widehat{a}_j)\widehat{p}_n(\widehat{b}_k)`$ is a well-defined observable operator. We thus have
$`p(a_j=m)`$ $`={\displaystyle \underset{n=\pm 1}{\sum }}\langle \psi |\widehat{p}_m(\widehat{a}_j)\widehat{p}_n(\widehat{b}_k)|\psi \rangle `$
$`=\langle \psi |\widehat{p}_m(\widehat{a}_j)\left({\displaystyle \underset{n=\pm 1}{\sum }}\widehat{p}_n(\widehat{b}_k)\right)|\psi \rangle .`$ (A3)
Now, since the sum of projectors $`\widehat{p}_+(\widehat{b}_k)+\widehat{p}_{-}(\widehat{b}_k)`$ is the identity operator acting on the two-dimensional Hilbert space pertaining to part $`B`$, the expression in Eq. (A3) reduces to
$$p(a_j=m)=\langle \psi |\widehat{p}_m(\widehat{a}_j)|\psi \rangle ,$$
(A4)
which, clearly, is independent of the choice of measurement ($`b_1`$ or $`b_2`$) performed on part $`B`$. Hence relation (3a) follows. In fact, regarding the value of the quantum mechanical probability $`p(a_j=m)`$, it is immaterial whether some measurement is actually performed on $`B`$ or not. An entirely analogous argument could be established to show the independence of the quantum mechanical probability $`p(b_k=n)`$ from any measurement parameter concerning part $`A`$ of the system.
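The chain of equalities (A2)-(A4) is easy to check numerically for a concrete two-qubit example; a minimal sketch, assuming numpy (the singlet state and the particular measurement directions are arbitrary illustrative choices):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def projector(n_hat, outcome):
    # projector onto the eigenstate with eigenvalue `outcome` (+1 or -1) of a
    # spin measurement along the unit vector n_hat, for one spin-1/2 particle
    sigma = n_hat[0] * sx + n_hat[1] * sy + n_hat[2] * sz
    return (np.eye(2) + outcome * sigma) / 2

# singlet state (|ud> - |du>) / sqrt(2)
up = np.array([1, 0], dtype=complex)
dn = np.array([0, 1], dtype=complex)
psi = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)

def p_joint(a_dir, m, b_dir, n):
    op = np.kron(projector(a_dir, m), projector(b_dir, n))
    return (psi.conj() @ op @ psi).real

a1 = (0.0, 0.0, 1.0)                        # A measures spin along z
b1, b2 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)   # two possible settings for B

# p(a_1 = +1) summed over B's outcomes is 0.5 for either choice of B
for b in (b1, b2):
    print(sum(p_joint(a1, +1, b, n) for n in (+1, -1)))
```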
Notes
1. For the sake of completeness, it should be mentioned that various authors have entertained the logical possibility of solving the EPR paradox by considering extended probability measures (including negative ones). See, for example, the papers quoted in Ref. 19.
2. From the experimental side, there has been a recent test of the CHSH inequality conducted by Weihs et al. (see the first paper in Ref. 20) which, to date, is the only experiment to force a violation of Bell’s inequality under truly spacelike separation of the individual measurement processes of the two involved observers. This experiment can be considered as an update with present-day technology of the classic third experiment by Aspect and co-workers (see the second paper in Ref. 20). To date, however, all performed experiments testing Bell inequalities (including that of Weihs et al.) rely on one or another sort of auxiliary assumption (like the fair-sampling assumption) in order to deal with the detection efficiency loophole. Further experimental work testing quantum correlations in relativistic configurations is reported in the third paper of Ref. 20.
3. It is to be noted that Hardy’s nonlocality argument can be constructed out of other suitable sets of four probabilities, for example, out of the set $`\{p1,p8,p12,p16\}`$. The relevant constraint imposed by relativistic causality on this set will be (see Eq. (8h)), $`2p16-1\le p1+p8+p12`$. Indeed, there exists a total of eight sets of four probabilities for which it is possible to develop Hardy’s nonlocality theorem. Each of these sets is associated with one specific relation in Eqs. (8a)-(8h). For any given relation, the four probabilities forming the set are the three on the right-hand side with positive sign, plus the one on the left-hand side. So, for example, the set associated with relation (8a) is $`\{p2,p5,p12,p14\}`$, the set associated with relation (8b) is $`\{p3,p8,p9,p15\}`$, etc.
4. For ease of comparison with the work of Mermin in Ref. 8, we quote here the translation between Mermin’s notation and ours for the probabilities appearing in Eq. (13): $`p(11RR)\equiv p4`$, $`p(12GG)\equiv p5`$, $`p(21GG)\equiv p9`$, and $`p(22GG)\equiv p13`$.
5. The paradigm of a maximally entangled state for a system composed of two subsystems is the singlet state of two spin-$`\frac{1}{2}`$ particles, $`|\psi \rangle =(1/\sqrt{2})(|\uparrow \rangle _1|\downarrow \rangle _2-|\downarrow \rangle _1|\uparrow \rangle _2)`$, or its photon analog (we omit the spatial wave function of the state). $`|\uparrow \rangle _1`$, $`|\downarrow \rangle _1`$, $`|\uparrow \rangle _2`$, and $`|\downarrow \rangle _2`$ represent the spin states of the two particles (polarized along a common $`z`$-axis). The maximally entangled state is a symmetric state in that the two coefficients of the (biorthogonal) superposition have the same modulus. For the maximally entangled state, the quantum mechanical probabilities satisfy the symmetry relations, $`p1=p4`$, $`p2=p3`$, $`p5=p8`$, $`p6=p7`$, $`p9=p12`$, $`p10=p11`$, $`p13=p16`$, and $`p14=p15`$. In the case of two spin-$`\frac{1}{2}`$ particles, the variables $`a_1`$, $`a_2`$, $`b_1`$, and $`b_2`$ denote spin measurements (with outcomes $`+1`$ or $`-1`$) along different directions for each of the particles.
6. Another way of obtaining this quantum violation is the following. Assuming without loss of generality that $`p1+p4+p5+p8+p9+p12+p14+p15\le 2`$, the CHSH inequality, $`|\mathrm{\Delta }_{\text{LHV}}|\le 2`$, reads as (see Eq. (12)), $`2-p1-p4-p5-p8-p9-p12-p14-p15\le 1`$ or, equivalently, $`1-p1-p8-p12-p14-p15\le p4+p5+p9`$. It is this latter inequality which can be compared with the CH type inequality, $`p13\le p4+p5+p9`$. When $`p4=p5=p9=0`$, the above inequality reduces to $`1-p1-p8-p12-p14-p15\le 0`$. Now, from Eq. (16) (see below), we finally obtain $`2p13\le 0`$. The maximum quantum violation of this inequality is then $`0.18034>0`$. (Please note that, for a theory of local hidden variables, we have that $`p13=0`$ whenever $`p4=p5=p9=0`$, and thus the inequality $`2p13\le 0`$ is satisfied.)
# The Intrinsic Shape Distribution of a Sample of Elliptical Galaxies
## 1 Introduction
Over the years a number of attempts have been made to derive the intrinsic shape distribution of elliptical galaxies from observations (Hubble 1926, Sandage et al. 1970, Noerdlinger 1979, Marchant & Olson 1979, Richstone 1979, Binggeli 1980, Binney & de Vaucouleurs 1981, Olson & de Vaucouleurs 1981; for a review see Statler 1996). Generally the results of these efforts have been ambiguous, and interest in the problem waned somewhat in the 1980s. But more recent developments have sparked renewed attempts to crack this classic chestnut. Among these developments are the recognition that halo shapes may serve as a diagnostic of galaxy formation physics (Dubinski & Carlberg 1991, Weil & Hernquist 1996), and indications that Hamiltonian chaos, dissipation, or both may either force triaxial equilibrium configurations to evolve slowly toward axisymmetry or render them altogether impossible (Dubinski 1994, Merritt & Fridman 1996, Merritt & Quinlan 1998).
Studies of central surface brightness profiles using HST suggest that the fundamental properties of elliptical galaxies may be bimodally distributed. There appears to be a dichotomy between high-luminosity, slowly rotating systems with shallow central cusps and boxy isophotes, and lower luminosity, rotationally supported systems with steeper cusps and a tendency for diskiness (Lauer et al. 1995; see also Kormendy & Bender 1996). Since rapid rotation and triaxiality are generally regarded as being incompatible, one might anticipate a bimodal distribution of triaxialities. Tremblay & Merritt (1996) find that low- and high-luminosity elliptical galaxies have different distributions of apparent ellipticity, which would imply different distributions of true shapes. Tremblay & Merritt’s work joins that of Fasano & Vio (1991), Ryden (1992, 1996), and Fasano (1996) as successors to the classical photometric approaches pioneered by Hubble, Sandage, and others.
However, photometric methods, while effective in constraining the distribution of overall flattenings, reveal little about the frequency of axisymmetry vs. triaxiality in the population. Uncovering this information requires the use of kinematic data and dynamical models to connect the kinematics to the shape of the gravitational potential. In the rare cases where well-defined, equilibrium gas disks are present, emission-line kinematics can yield excellent constraints on the shape of the potential if one assumes that the gas is on closed orbits (Bertola et al. 1991). But for the majority of ellipticals, methods relying primarily on stellar kinematics are essential. Approaches of this type were originated by Binney (1985) and enlarged upon by Franx et al. (1991) and Tenjes et al. (1993). Statler (1994a, 1994b) introduced major refinements, including improved dynamical models and a Bayesian approach to model fitting. This method has great potential to place quite narrow constraints on the triaxialities of individual galaxies for which very high quality stellar kinematic data are available. Unfortunately, the number of such galaxies is still very small, and is likely to increase at only a modest pace in the short term.
Our goal in this paper is to see what can be learned from the larger sample of galaxies with stellar kinematic data of less-than-ideal quality already in the literature. We focus on the Davies & Birkinshaw (1988, hereafter DB) sample of radio ellipticals, all of which have kinematic data on multiple position angles and are photometrically well studied. In the process, we extend the statistical methods of Statler (1994b) and show how to estimate the parent shape distribution from a sample of galaxies for which the data may be very inhomogeneous. Our method will thus continue to be generally applicable as new data are obtained.
In the next section of the paper we describe the general statistical approach for determining the parent distribution of a set of intrinsic quantities from measurements of related, but different, observable quantities, and show the particular application of this approach to the shape problem. In § 3, we discuss our treatment of the data and define a subsample of the DB galaxies which we are able to model reliably. Section 4 presents the results for the parent distribution, and examines systematic effects relating to unknown aspects of the stellar dynamics. Section 5 compares our results to those of previous studies, and § 6 sums up.
## 2 Estimating the Parent Distribution
### 2.1 Basic Idea
In previous papers (Statler 1994b, 1994c, Statler et al. 1999) we describe a Bayesian approach to inferring the intrinsic shapes of individual elliptical galaxies, based on dynamical models that predict the mean radial velocity field (VF) by solving the equation of continuity for the stellar “fluid.” For triaxial systems with negligible figure rotation, the streamlines of the mean motion in the main families of circulating orbits are dictated by the triaxiality of the mass distribution; thus for a given shape, orientation, and set of boundary condition parameters describing the internal orbit populations, the line-of-sight VF can be calculated. The calculation of the VF is very fast, so the multidimensional parameter space can be adequately explored.
For each set of parameters, we calculate the probability that the observed VF and surface brightness distribution would, with the known observational errors, be obtained from the corresponding model. This yields a multidimensional likelihood function $`L(T,c,\mathrm{\Omega },𝐝)`$, where $`T`$ is the triaxiality of the total mass distribution, $`c`$ is the short-to-long axis ratio of the luminosity distribution, $`\mathrm{\Omega }=(\theta ,\varphi )`$ is the orientation of the galaxy, and the vector $`𝐝`$ represents the remaining dynamical parameters (see § 2.4). The likelihood is integrated over an assumed prior distribution in the dynamical parameters $`𝐝`$ and an isotropic prior distribution in $`\mathrm{\Omega }`$ to give a two-dimensional likelihood $`L(T,c)`$.<sup>1</sup><sup>1</sup>1See Statler et al. (1999) for a more rigorous formulation of this statement. To obtain the Bayesian estimate of the galaxy’s shape, $`L(T,c)`$ is multiplied by a model for the parent shape distribution and normalized. In most of our previous work, this model has been a flat distribution, $`F(T,c)=\mathrm{const}`$, meaning that we have estimated the shape of each galaxy in isolation. The likelihood $`L_i(T,c)`$ for each galaxy $`i`$ is basically a data point with an error ellipse, i.e., a measurement of $`T`$ and $`c`$. Of course, in this case the errors are strongly non-Gaussian since we work with the actual probability distribution. The goal now is to combine these measurements for a sample of galaxies into an estimate of the parent distribution from which the sample was drawn.
Our algorithm is conceptually simple. We start with a flat model for the parent distribution, $`F(T,c)=\mathrm{const}`$. The Bayesian posterior probability density $`P_i(T,c)`$ for each galaxy is the normalized product of $`L_i`$ with the model parent $`F`$. (At this stage, $`P_i=L_i`$.) We stack the $`P_i`$’s on top of each other, add them up, smooth the sum with a nonparametric smoothing spline, and normalize the result. This gives us an improved model for the parent distribution $`F`$, which we multiply by the $`L_i`$’s and feed iteratively into the same procedure. Note that, after the first iteration, the statistical estimate of the shape of each galaxy in the context of the whole sample, $`P_i`$, is different from the estimate of its shape in isolation, $`L_i`$. This difference arises from the requirement that the sample be drawn from an isotropic distribution of orientations. Note also that since all operations are performed on distributions that are already integrated over $`\mathrm{\Omega }`$, the isotropy of the parent distribution is guaranteed.
A one dimensional toy problem can demonstrate that this algorithm works, even when the $`L_i`$ distributions are strongly non-Gaussian. Consider a set of objects, each with a value of some intrinsic property $`X`$ between 0 and 1. $`X`$ is not measurable, but a related quantity $`x`$ is. Suppose that for any object an observer has a uniform probability of measuring any value of $`x`$ between 0 and $`X`$. It is easy to show that a single measurement $`x_i`$ implies a likelihood $`L_i(X)`$ that is zero for $`X<x_i`$ and proportional to $`X^{-1}`$ for $`X>x_i`$. Figure 6 shows the algorithm in action. The likelihoods $`L_i(X)`$ for 9 measured $`x`$ values are shown in Fig. 6$`a`$. These functions are multiplied by the initially flat parent distribution (b), summed (c), and smoothed to produce the new parent (d). The result after 20 iterations (e, solid line) is a decent representation of the true parent distribution (dotted line). The functions $`P_i(X)`$ giving the estimates of the $`X`$ values for the individual objects (f) differ from the original $`L_i`$’s but are consistent with the parent distribution.
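A minimal numerical sketch of this toy problem is given below; for brevity it uses a fixed-width Gaussian kernel in place of the nonparametric smoothing spline of § 2.3, and the parent distribution from which the true $`X`$ values are drawn is a made-up stand-in:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
grid = np.linspace(1e-3, 1.0, 200)

# toy sample: true X from a hypothetical parent, observed x uniform on (0, X)
X_true = rng.uniform(0.3, 0.9, size=9)
x_obs = rng.uniform(0.0, X_true)

# likelihoods L_i(X): zero for X < x_i, proportional to 1/X above x_i
L = np.array([np.where(grid >= x, 1.0 / grid, 0.0) for x in x_obs])
L /= np.trapz(L, grid)[:, None]

F = np.ones_like(grid)                    # start from a flat parent
for _ in range(20):
    P = F * L                             # posterior density for each object
    P /= np.trapz(P, grid)[:, None]
    F = gaussian_filter1d(P.sum(axis=0), sigma=8)   # stack, then smooth
    F /= np.trapz(F, grid)                # normalize the new parent

print(grid[np.argmax(F)])                 # peak of the recovered parent
```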
### 2.2 Statistical Rationale
The stack-smooth-iterate algorithm is a general technique that is closely related to Lucy’s Method (Lucy 1974) and penalized likelihood (Wahba & Wendelberger 1980, Silverman 1986, Green & Silverman 1994).<sup>2</sup><sup>2</sup>2Our method is similar, but not identical, to a well-established approach developed by Wahba & Wendelberger (1980). We are grateful to the referee for directing us to this important paper. Readers uninterested in the statistical details are welcome to skip directly to § 2.5.
Imagine a population of objects, each of which possesses some value of an intrinsic property (or set of properties) $`X`$, distributed according to the parent distribution $`F_p(X)`$. Let there be an observable quantity (or set of quantities) $`x`$ which is related to $`X`$ by a conditional probability distribution $`P(x|X)`$. The distributions are normalized so that $`\int 𝑑XF_p(X)=\int 𝑑xP(x|X)=1`$. From models, we calculate the likelihood $`L(X|x)`$ that a given measurement of $`x`$ was obtained from an intrinsic value $`X`$. We assume that the likelihoods are explicitly normalized so that $`\int 𝑑XL(X|x)=1`$. If the models take into account all physical effects and measurement errors exactly, then $`L(X|x)`$ and $`P(x|X)`$ are mathematically the same function. For measurements $`x`$, the estimate of $`X`$ is given by the posterior density,
$$P(X)=\frac{F(X)L(X|x)}{\int 𝑑XF(X)L(X|x)},$$
(1)
where $`F(X)`$ is the current estimate of the parent distribution. This is the standard Bayesian approach for individual objects.
The goal is to find the parent distribution $`F(X)`$ that maximizes the joint probability of obtaining the measurements $`x_i`$ for the set of $`n`$ objects $`i=1,\mathrm{},n`$. The logarithm of this probability is given by
$$\mathrm{ln}𝒫=\mathrm{ln}\underset{i}{\prod }\int 𝑑XF(X)P(x_i|X)=\underset{i}{\sum }\mathrm{ln}\int 𝑑XF(X)L(X|x_i).$$
(2)
If we write the measurements in terms of a distribution of observables, $`W(x)=\sum _i\delta (x-x_i)`$, then we can interpret the integral
$$W_m(x)\equiv \int 𝑑XF(X)L(X|x)$$
(3)
as giving the distribution of observables that would be predicted if the model parent distribution $`F(X)`$ were correct. With these definitions, equation (2) becomes
$$\mathrm{ln}𝒫=\int 𝑑xW(x)\mathrm{ln}W_m(x).$$
(4)
If we subtract the constant $`\int 𝑑xW(x)\mathrm{ln}W(x)`$ from the quantity in equation (4), we get the Lucy $`H`$-function,
$$H_L=\int 𝑑xW(x)\mathrm{ln}\frac{W_m(x)}{W(x)}.$$
(5)
In the statistics literature, $`H_L`$ is known as the “Kullback-Leibler information distance” between model and data (Silverman 1986).
Lucy’s method works to increase $`H_L`$ (decrease the information distance) by iteratively applying the rule
$$F_{\mathrm{new}}(X)=F(X)\int 𝑑x\frac{W(x)}{W_m(x)}L(X|x).$$
(6)
Using equations (1) and (3), this can be written as
$$F_{\mathrm{new}}(X)=\int 𝑑xW(x)P(X)=\underset{i}{\sum }P_i(X).$$
(7)
Thus one iteration of Lucy’s method is identical to our scheme of “stacking” the posterior densities. In the absence of smoothing, our approach will seek a parent distribution that maximizes the likelihood of the observed sample.
For a finite sample, however, the maximum-likelihood parent distribution will be a set of spikes at the maximum of each $`L(X_i|x_i)`$, so it is ill-advised to iterate without a penalty function that enforces smoothness. Moreover, for a realistically small sample, there is not a unique $`F(X)`$ that maximizes $`\mathrm{ln}𝒫`$ unless such a penalty function is present to lift the degeneracy. To implement this penalty at each iteration, we regard $`F_{\mathrm{new}}(X)`$, computed according to equation (7), as a noisy realization of an underlying smooth function $`F_{\mathrm{new}}^s(X)`$. This function is estimated using a smoothing spline, and $`F_{\mathrm{new}}(X)`$ is replaced by its smoothed counterpart.
### 2.3 Smoothing Splines and Cross-Validation
Smoothing splines may be defined in any number of dimensions; here we summarize the two-dimensional case. This discussion is adapted from Green & Silverman (1994) and Silverman (1986).
The estimate of the parent distribution, $`F_{\mathrm{new}}(X)`$, is defined on a discrete grid of $`n`$ values of $`X`$. (We drop the subscript “new” in what follows for brevity.) In our case, $`X=(T,c)`$, and we have $`F(T_1,c_1)`$,…, $`F(T_n,c_n)`$, from which we want to determine the underlying smooth function $`F^s(T,c)`$. For a trial function $`g(T,c)`$, a penalized sum of squares of the residuals is given by
$$S(g)=\underset{i=1}{\overset{n}{\sum }}\{F(T_i,c_i)-g(T_i,c_i)\}^2+\alpha J(g),$$
(8)
where $`\alpha >0`$ is a “rate of exchange” between the usual goodness of fit measure and the penalizing function $`J(g)`$, given by
$$J(g)=\int \int 𝑑T𝑑c\left\{\left(\frac{\partial ^2g}{\partial T^2}\right)^2+2\left(\frac{\partial ^2g}{\partial T\partial c}\right)^2+\left(\frac{\partial ^2g}{\partial c^2}\right)^2\right\}.$$
(9)
The penalizing function measures the rapid variation and departure from local linearity in $`g`$. The functions which minimize $`S(g)`$ are known as thin plate splines, which are analogous to natural cubic splines in one dimension. Algorithms for calculating thin plate splines are implemented in the routines DTPSS and DPRED in the GCVPACK package (Bates et al. 1987), which is available from Netlib.<sup>3</sup><sup>3</sup>3http://netlib.org/gcv
The problem of finding $`F^s(X)`$ is now reduced to determining a single parameter $`\alpha `$. Likelihood cross validation provides a method for doing this automatically. The premise is that the best estimate of $`\alpha `$ should produce the distribution which best predicts all future data points. Since one is generally not gifted with prescience, one proceeds by removing one measurement from the sample and calculating the likelihood that that measurement would be obtained in the parent distribution found from the other measurements. Repeating this procedure for each measurement in turn and then averaging the likelihoods yields the cross validation (CV) score.
In our case, the measurements are the individual normalized likelihoods $`L(X_i|x_i)`$. We remove the $`i`$th measurement from our data set and create a new distribution $`F_i(X)`$ using the methods above. For a given value of $`\alpha `$, the likelihood that a single $`L(X_i|x_i)`$ is drawn from the smoothed model parent distribution $`F_i^s(X;\alpha )`$ is given by
$$ℒ_i=\mathrm{ln}\left(\int 𝑑XF_i^s(X;\alpha )L(X|x_i)\right).$$
(10)
Averaging the $`ℒ_i`$’s gives the likelihood cross validation score,
$$CV(\alpha )=n^{-1}\underset{i=1}{\overset{n}{\sum }}ℒ_i,$$
(11)
and maximizing $`CV(\alpha )`$ provides the best estimate for $`\alpha `$. To ensure uniqueness of the final result, we compute the maximum of $`CV(\alpha )`$ only once, on the first iteration, and fix the smoothing parameter for all subsequent iterations to its initial value. In some cases there is not a unique maximum in $`CV(\alpha )`$; instead, $`CV(\alpha )`$ is nearly flat up to some $`\alpha _0`$, beyond which it turns over. We set $`\alpha `$ to its turnover value, and we see no indication that this choice biases the results.
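In schematic form, the leave-one-out computation of Eq. (11) might be organized as follows (shown in one dimension for brevity); `smooth_fit` is a placeholder for the thin plate spline smoother (GCVPACK in our actual implementation), and the Gaussian-kernel stand-in at the end is for illustration only:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def cv_score(alpha, likelihoods, grid, smooth_fit):
    # leave-one-out likelihood cross-validation score of Eq. (11);
    # likelihoods[i] is the normalized L(X|x_i) sampled on `grid`
    scores = []
    for i in range(len(likelihoods)):
        stacked = sum(L for j, L in enumerate(likelihoods) if j != i)
        F_i = smooth_fit(stacked, alpha)          # smoothed parent without i
        F_i = F_i / np.trapz(F_i, grid)
        scores.append(np.log(np.trapz(F_i * likelihoods[i], grid)))
    return np.mean(scores)

# illustrative smoother: a Gaussian kernel whose width plays the role of alpha
smooth_fit = lambda f, alpha: gaussian_filter1d(f, sigma=alpha)
```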
### 2.4 Implementation
Our numerical implementation follows from that described in Statler (1994b). The treatment of individual galaxies is essentially the same, except for details noted in § 3 below. The grid of dynamical parameters used in the models is also the same as in the earlier work; to aid the reader in § 4 we give a brief overview here.
The models assume that (1) rotation of the figure (i.e., tumbling) is negligible; (2) short-axis tube and long-axis tube mean motions can be represented by confocal streamlines (Anderson & Statler 1998); (3) the luminosity density $`\rho _L`$ is stratified on similar ellipsoids, $`\rho _L(r,\theta ,\varphi )=\overline{\rho }_L(r)\rho _L^{}(\theta ,\varphi )`$; and (4) the velocity field obeys a “similar flow” ansatz outside the tangent point for a given line of sight, $`𝐯(r,\theta ,\varphi )=\overline{v}(r)𝐯^{}(\theta ,\varphi )`$. The last two assumptions are needed for projecting the models. The results are insensitive to the accuracy of these assumptions as long as $`\overline{\rho }_L(r)`$ and $`\overline{\rho }_L(r)\overline{v}(r)`$ decrease faster than $`r^{-2}`$. This requirement limits the validity of the models to regions where the rotation curve is not steeply rising. As a further simplification we adopt power laws for the luminosity density and the velocity scaling law: $`\overline{\rho }\propto r^{-k}`$ and $`\overline{v}(r)\propto r^l`$. The index $`k`$ is determined from surface photometry, and we nominally adopt $`l=(0,\pm \frac{1}{2})`$, omitting the $`l=\frac{1}{2}`$ case when $`k\le 2.5`$. It turns out that the results are not very sensitive to either of these parameters.
Remaining properties of the phase space distribution function are described by a scalar constant $`C`$ and a function of one variable $`v^{}(t)`$. These parameters describe the mean velocity across the $`xz`$ plane on one fiducial shell, which in turn determines the velocity field over the whole shell once the triaxiality $`T`$ and the luminosity density are specified. The “contrast” $`C`$ is defined as the ratio of the $`y`$ component of the mean velocity on the $`x`$ axis to that on the $`z`$ axis, on the fiducial shell. The function $`v^{}(t)`$ gives the angular dependence of the mean velocity across the $`xz`$ plane on the fiducial shell. The variable $`t`$ is a rescaled polar angle, given, for spherical shells, by
$$t=\{\begin{array}{cc}2-\frac{\mathrm{sin}^2\theta }{T},\hfill & \theta <\mathrm{sin}^{-1}\sqrt{T},\hfill \\ \frac{\mathrm{cos}^2\theta }{1-T},\hfill & \theta >\mathrm{sin}^{-1}\sqrt{T},\hfill \end{array}$$
(12)
where $`\theta `$ is the usual polar angle. The relation for ellipsoidal shells is given in § Section 3.1 of Statler (1994b). By definition, $`v^{}(0)=C`$ and $`v^{}(2)=1`$.
The model grid comprises 8 different assumptions for the variation of $`C`$ with intrinsic shape. In four of these $`C`$ is constant: $`C=0`$ (long-axis tube dominated), $`0.5`$, $`1`$, and infinite (short-axis tube dominated). Four more functional forms for $`C(T,c)`$ are introduced to mimic certain self-consistent models, and are given in equations (11) – (14) of Statler (1994b). The function $`v^{}(t)`$ is taken to be either piecewise-constant or piecewise-linear in each of the intervals $`[0,1)`$ and $`(1,2]`$ (in the linear cases dropping to zero at $`t=1`$). This function describes how the mean rotation speed in each of the tube orbit families declines away from the symmetry plane that contains its parent orbits. For example, the mean rotation in the Galaxy drops with height above the disk plane as one moves into the more pressure-supported halo. At the other extreme, a maximally rotating isothermal sphere has constant rotation speed at all latitudes. Accordingly, we refer to linear $`v^{}(t)`$ as “disklike” rotation, and constant $`v^{}(t)`$ as “spheroidlike” rotation. A model can be disklike or spheroidlike in either short-axis or long-axis tubes. One should avoid the impression that disklike rotation necessarily implies a two-component structure; in an oblate disklike model, the mean rotation speed $`45\mathrm{°}`$ up from the equatorial plane is half of the in-plane value, a much gentler transition than in a genuine disk-halo system.
The likelihoods $`L_i(T,c)`$ for each galaxy are computed on a $`20\times 20`$ rectangular grid on the intervals $`0\le T\le 1`$ and $`0.4\le c\le 1`$. Smoothing a function over a finite domain creates problems near the edges unless suitable boundary conditions are imposed. The thin plate spline does not impose any strict boundary conditions but instead treats the area outside the boundaries as lacking information. The penalizing function $`J(L_i(T,c))`$ is therefore the only term contributing to the penalized sum of squares of the residuals there, forcing the function $`L_i(T,c)`$ to be flat outside the boundaries (see § 2.3). In practice this has the effect of biasing $`L_i(T,c)`$ toward closed contours and reduced variability near the edges (Green & Silverman 1994). The effect, however, is limited, and our results show little if any evidence of it.
We find that, in practice, convergence of the parent distribution can be rather slow, as peaks grow at the expense of valleys that sink toward zero. With our sample of only 13 objects, iterating until a stringent convergence criterion is satisfied may be dangerous. In order to be conservative in our conclusions regarding the frequency of triaxiality, we stop iterating when the maximum fractional change in $`F(T,c)`$ per iteration falls below 10%. Typically this occurs after about 7 iterations.
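Schematically, the stopping rule looks as follows; the `update` callable, representing one smoothed iteration step, is a placeholder for the actual procedure of § 2.3.

```python
import numpy as np

def iterate_parent(F0, update, tol=0.10, max_iter=50):
    """Iterate the parent-distribution estimate F(T, c) until the
    maximum fractional change per iteration drops below tol (10%
    here), rather than to full convergence.

    update : callable taking the current grid F and returning the
             next estimate (one smoothed Lucy-type step).
    A small floor keeps the fractional change finite in cells that
    have already sunk to zero.
    """
    F = np.asarray(F0, dtype=float)
    for iteration in range(1, max_iter + 1):
        F_new = update(F)
        frac_change = np.max(np.abs(F_new - F) / np.maximum(F, 1e-12))
        F = F_new
        if frac_change < tol:
            break
    return F, iteration
```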
## 3 Data
### 3.1 Kinematics
All of the galaxies modeled are taken from the sample of radio ellipticals for which DB obtained multiple position angle rotation curve measurements. The sample contains more E3–E4 and fewer E0 galaxies than the general population and, as DB point out, includes an overabundance of “unusual” objects. Where appropriate we have supplemented the DB data with data from Franx et al. (1989), Binney et al. (1990), Bender et al. (1994) and Fried & Illingworth (1994).
The published rotation curves are first oriented to match our convention that radii west of north are positive. Since the models assume that the rotation curves are antisymmetric, we fold the profiles about the center of the galaxy to reduce the formal errors in the average rotation velocity. For each galaxy we estimate by eye the radius at which the rotation curve flattens and use only the data outside this radius in the models. At large radii the kinematic data become unreliable for reasons that vary from galaxy to galaxy (see § 3.3). We therefore set an outer radius beyond which we discard the data. We average the remaining data points between the inner and outer radii on each PA, weighted by the inverse square of the published errors. The uncertainty associated with the average is taken to be the $`(1/\sigma ^2)`$-weighted standard deviation, following Statler (1994c). The inner and outer radii and the adopted mean velocities are given in columns 4, 5 and 9, respectively, of Table 1.
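The averaging step is a standard inverse-variance weighted mean; a minimal sketch in our own notation:

```python
import numpy as np

def mean_rotation(radius, v, sigma, r_in, r_out):
    """Weighted mean rotation velocity on one position angle.

    The folded, antisymmetrized rotation curve is averaged between
    the inner radius r_in (where the curve flattens) and the outer
    radius r_out (beyond which the data are discarded), weighting
    each point by 1/sigma^2. The quoted uncertainty is the
    (1/sigma^2)-weighted standard deviation of the points used.
    """
    radius, v, sigma = map(np.asarray, (radius, v, sigma))
    use = (radius >= r_in) & (radius <= r_out)
    w = 1.0 / sigma[use] ** 2
    vbar = np.sum(w * v[use]) / np.sum(w)
    err = np.sqrt(np.sum(w * (v[use] - vbar) ** 2) / np.sum(w))
    return vbar, err
```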
### 3.2 Photometry
With the exception of the data for NGC 4839, all of the photometry is drawn from Peletier et al. (1990), who tabulate the ellipticity, major axis PA and surface brightness as functions of radius. Similar photometry for NGC 4839 is drawn from Joergensen et al. (1992). For each galaxy the adopted major axis position angle is the average between the inner and outer limiting radii. The ellipticities are determined by taking the unweighted mean in the same interval, with the standard deviation serving as the uncertainty. The adopted mean ellipticities and major axis PAs are given in columns 2 and 3 of Table 1.
The slope of the surface brightness profile is calculated by differentiating numerically. The surface brightness slope is then deprojected into a volume brightness slope ($`k`$) by adding 1. Although this is strictly valid only for pure power-law profiles, it is adequate at our level of approximation. For most galaxies the logarithmic slope of the surface brightness profile is not constant in the relevant intervals, so two values are used that span the range of $`k`$. We compute all of the models using both values. The spanning values of $`k`$ for all of the galaxies are given in columns 6 and 7 of Table 1.
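A sketch of this step is given below; the conversion from surface brightness in magnitudes to linear intensity is our assumption about the input units.

```python
import numpy as np

def brightness_slopes(r, mu):
    """Spanning values of the volume brightness slope k.

    r  : radii
    mu : surface brightness in mag/arcsec^2, converted to a linear
         intensity before differentiating.
    For a pure power law I(R) ~ R^-(k-1), the deprojected volume
    density goes as r^-k, so the logarithmic surface slope is
    deepened by 1.
    """
    intensity = 10.0 ** (-0.4 * np.asarray(mu))
    dlnI_dlnr = np.gradient(np.log(intensity), np.log(np.asarray(r)))
    k = -dlnI_dlnr + 1.0           # volume brightness slope
    return k.min(), k.max()        # spanning values over the interval
```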
### 3.3 Notes on Individual Galaxies
Of the 14 galaxies in the DB sample, four, NGC 1600, NGC 4374, NGC 4636 and NGC 4839, do not show any significant rotation at DB’s level of accuracy. We therefore model them using only their photometric data. A fifth object, NGC 4278, does show significant rotation but is not used. It shows a 20° isophotal twist between 20″ and 60″ and a drop in rotation velocity to zero outside of 20″ that conflicts with our assumption that the rotation curve is flat at large radii. It could be modeled, but a more sophisticated method involving fitting at multiple radii would be required.
Details of how we have handled the data for the remaining sample of 13 galaxies are as follows:
NGC 1600, NGC 4374, NGC 4636 and NGC 4839. Because of the lack of any significant rotation in these galaxies, photometric data alone are used to estimate their shape likelihood distributions. The triaxialities of these four galaxies are therefore poorly constrained. They are included on the grounds that omitting them could bias our results away from strongly triaxial systems, most of which are probably slowly rotating. In each case, the data are averaged from the center out to the largest radius for which kinematic data are available (see Table 1). The ellipticity of NGC 1600, NGC 4374 and NGC 4636 varies by about $`0.10`$ in this interval, but the ellipticity of NGC 4839 rises steadily from $`0.20`$ near the center to $`0.50`$ at 32″.
NGC 315. The turnover radius of the rotation curve is easily located at 5″ by inspection. At 27″ on PA 40 the rotation velocity is more than $`3\sigma `$ from the mean. Since we do not know whether this is a real effect, we eliminate the six data points outside of 20″.
NGC 741. The relatively low surface brightness of this galaxy results in large uncertainties in the kinematic data. Nonetheless, its six position angle measurements of the rotation curve make it very attractive to model. Outside of 15″ some rotation appears on PA 10 and PA 40. Although it is not at all clear that the turnover radius of the rotation curve has been reached, all of the data from 15″ to the outermost data point at 30″ are used.
NGC 1052. The data for this galaxy are the best of any in the sample. DB present kinematic data on four PAs, Binney et al. (1990) on the major and minor axes, and Fried & Illingworth (1994) on the major axis. The turnover radius of the rotation curve is easily seen to be at 15″. A velocity difference of almost 50 km/sec between the southern and northern parts of the galaxy outside of 37″ on PA 117 and PA 164, in both the DB and Binney et al. (1990) data sets, implies that NGC 1052 may not be antisymmetric outside of this radius, as is assumed in our models. The 11 questionable points outside of 35″ are therefore eliminated. Fried & Illingworth (1994) measure the rotation curve to be flat out to 40″ on PA 117.
NGC 3379. The turnover radius of the rotation curve appears at about 15″, so only data from this radius to the outermost data point at 34″ are used.
NGC 3665. The rotation curve is flat from inside of 5″ to close to 30″, but there is a discontinuity at 10″ where the ellipticity suddenly drops from $`0.35`$ to almost zero. The ellipticity then slowly rises to approximately $`0.2`$ at 15″, outside of which it is constant. There is also a 10° isophotal twist in the same range. Only data outside of 15″ are used.
NGC 4261. DB provide kinematic data on four position angles out to 55″. The major and minor axis data are supplemented with data from Bender et al. (1994), who place their slits 4° from those of DB. The average of the slit positions of the two papers is therefore used when combining the two datasets. Although this does introduce some error into the data, it is minor compared to the uncertainty in the velocity measurements. This galaxy is clearly a minor axis rotator with a turnover radius of the rotation curve at 20″.
NGC 4472. The turnover radius of the rotation curve on all the PAs is at approximately 25″. Between 3″ and 30″ the ellipticity of NGC 4472 increases from $`0.06`$ to $`0.17`$ but is constant outside this range, so only data points beyond 30″ are used. Outside of 60″ two data points on PA 160 are more than $`4\sigma `$ from the average, so we discard the points outside of this radius on all position angles.
NGC 4486. This galaxy’s slow rotation makes the errors in the velocity measurements relatively large, but outside of 20″ the data are statistically consistent with a flattening of the rotation curve out to the last data point at 60″. The data between these radii are therefore used.
NGC 7626. This is another slow rotator. The only significant rotation is on the minor axis. Outside of 20″ the rotation curve seems to reverse, but the errors are so large that the reversal is not statistically significant. We model the data only inside 20″. A turnover radius in the rotation curve on the major axis at approximately 5″ sets the inner radius.
## 4 Results
### 4.1 The “Maximal Ignorance” Shape Distribution
As in previous papers, we take the result from an unweighted combination of all models to represent the case of “maximal ignorance,” i.e., minimal assumptions as to the character of the internal dynamics. The parent shape distribution after seven iterations is shown in Figure 6$`a`$. The distribution is plotted in terms of $`T`$ and $`c`$ such that oblate spheroids, prolate spheroids, and spheres lie, respectively, along the right, left, and top margins. This distribution is bimodal, dominated by one group of nearly oblate, moderately flattened systems and a second group of rounder, nearly prolate systems. The valley between the two peaks represents a dearth of very triaxial galaxies. The bimodality is almost entirely a consequence of the kinematic data; to illustrate, we show in Figure 6$`b`$ the result obtained from photometry alone, ignoring the kinematics. As discussed in the Introduction, photometry is effective in constraining the overall flattening distribution but reveals little about triaxiality.
A more succinct description of the frequency of triaxiality in this distribution comes from the one-dimensional distribution $`F(T)`$, obtained by integrating Figure 6$`a`$ over $`c`$. The result is shown in Figure 6$`a`$. We somewhat arbitrarily set boundaries at $`T=0.2`$ and $`T=0.8`$ to delineate “nearly oblate,” “triaxial,” and “nearly prolate” regions. By this definition, the maximal ignorance distribution is 47% nearly oblate, 18% nearly prolate, and 35% triaxial. If we continue iterating beyond our nominal stopping criterion, the triaxial fraction decreases further, so we can take this as a conservative estimate of the rarity of triaxial systems implied by our subsample of the DB galaxies. The result is influenced somewhat by the four galaxies without kinematic data; if these objects are omitted, the fractions change to 55% oblate, 25% prolate, and 21% triaxial.
We can obtain some measure of whether the DB subsample is representative of the elliptical galaxy population at large by calculating the expected ellipticity distribution for a randomly-oriented population drawn from the inferred parent. We plot this as the smooth curve in Figure 6$`b`$, compared with the observed ellipticity distribution from Ryden (1992). The two distributions are similar, though our predicted distribution contains a slight excess of very round galaxies. A Kolmogorov-Smirnov (KS) test implies a 14% probability that the observed sample was drawn from our distribution. However, the KS probability can be affected by details of how the mean ellipticities are defined. The ellipticities tabulated by Ryden are weighted by luminosity, whereas we exclude data from the brightest parts of the galaxies. Applying a systematic shift as small as $`\mathrm{\Delta }ϵ=0.019`$ to our expected distribution would increase the KS probability to 99%. We conclude that our maximal ignorance parent distribution is consistent with the ellipticities of the general population of elliptical galaxies.
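The sensitivity to such a shift can be checked with a two-sample KS test against a Monte Carlo realization of the predicted distribution; a hedged sketch:

```python
import numpy as np
from scipy import stats

def ks_probability(model_eps, observed_eps, delta=0.0):
    """KS probability that the observed ellipticities are drawn from
    the predicted distribution, after an optional systematic shift
    delta in ellipticity.

    model_eps : a large Monte Carlo sample of apparent ellipticities
        for randomly oriented shapes drawn from the parent
        distribution (a stand-in for the predicted smooth curve).
    """
    shifted = np.asarray(model_eps) + delta
    statistic, p_value = stats.ks_2samp(shifted, observed_eps)
    return p_value

# Comparing ks_probability(model, obs, 0.0) with
# ks_probability(model, obs, 0.019) illustrates the sensitivity of
# the KS probability to how the mean ellipticities are defined.
```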
The final posterior densities describing the shapes of the individual galaxies in the sample with rotation data are shown in Figure 6. Some well-known objects are found to have well constrained triaxialities; NGC 1052, NGC 3379, and NGC 4472 are probably oblate or nearly so. The famous minor-axis rotator NGC 4261, not surprisingly, turns out to be most likely prolate, though there are oblate models not excluded at the $`2\sigma `$ level. The shapes of other objects are not as well constrained, and bimodal posterior densities are less a consequence of the kinematic data for the individual galaxies than a reflection of the parent distribution.
### 4.2 Dynamical Configurations That Can Be Ruled Out
Just as the marginal posterior densities describing the shape of each galaxy (Fig. 6) can be computed for a given parent distribution, we can compute marginal densities describing the orientation of each galaxy according to
$$P_i(\mathrm{\Omega })=\int dT\int dc\,\frac{1}{4\pi }F(T,c)L_i(T,c,\mathrm{\Omega }).$$
(13)
The 4-dimensional likelihoods $`L_i(T,c,\mathrm{\Omega })`$ are obtained by integrating the original likelihood function $`L_i(T,c,\mathrm{\Omega },𝐝)`$ over the dynamical parameters. The factor $`1/4\pi `$ reflects the assumed isotropy of the parent distribution; in other words we have assumed that the 4-dimensional parent has the form $`F(T,c,\mathrm{\Omega })=F(T,c)/4\pi `$. We could, in fact, have worked our whole procedure in 4 dimensions instead of 2. Had we done so, isotropy of the parent would have been imposed at each iteration by explicitly smoothing away all of the $`\mathrm{\Omega }`$ dependence from the stacked $`P_i(T,c,\mathrm{\Omega })`$ functions. One would expect, for a plausible set of models leading to a plausible parent distribution, that the stacked $`P_i`$’s should have an $`\mathrm{\Omega }`$ dependence not too far from isotropic, before it is smoothed away. This gives us an important consistency check: the sum, $`\mathrm{\Sigma }_iP_i(\mathrm{\Omega })`$, of the final posterior densities from equation (13) ought to be reasonably flat. Even though the parent distribution is, by construction, isotropic, there is no guarantee that the sample is isotropic. If we find a strong orientation bias in the sample despite assuming an isotropic parent, this constitutes a contradiction and signals a false assumption.
Two applications of this test are shown in the bottom four panels of Figure 6. Figure 6$`c`$ shows the parent distribution derived under the assumption that all galaxies rotate about their intrinsic long axes ($`C=0`$). Most objects are close to oblate and quite flat, with a small but significant fraction of rounder, triaxial systems. This is clearly different from the maximal-ignorance distribution in Figure 6$`a`$. However, this case can be ruled out by the orientation distribution of the sample, shown in Figure 6$`e`$. For the galaxies all to be long-axis rotators, we must be seeing them in nearly the same orientation; the line of sight lies inside one of two 45°-wide cones for about 40% of the sample.
A better quantitative measure of the orientation bias is the rms deviation of $`\mathrm{\Sigma }_iP_i(\mathrm{\Omega })`$ from perfect isotropy, normalized to unit mean; we refer to this as the sample anisotropy, $`A_s`$. For the case in Figure 6$`e`$, $`A_s=1.17`$. Table 2 gives $`A_s`$ values for the distributions calculated using various subsets of the dynamical models. Unfortunately, it is not straightforward to link a value of $`A_s`$ with a confidence limit. The expected $`A_s`$ distribution for an ensemble of random isotropic samples depends on the forms of the individual $`P_i`$’s, which depend on both the data and the models. We can make a very rough correspondence to an easier statistical problem if we imagine that each $`P_i(\mathrm{\Omega })`$ simply marks a fraction $`f`$ of the sphere as allowed and a fraction $`1-f`$ as excluded. The $`P_i`$’s are then $`n`$ patches thrown down at random onto the sphere. At a random point on the sphere, the number $`m`$ of overlapping patches is given by a binomial distribution. In Table 2, we find that, except for the $`C=0`$ and $`C=\mathrm{\infty }`$ cases, all of the models using the kinematic data hover around $`A_s\approx 0.2`$. This would follow from the binomial distribution for $`f=0.63`$, which is a not-unreasonable characterization of the $`P_i`$’s. We calculate that, if $`0.2`$ is the expected $`A_s`$ for a random sample of 13 objects and if $`A_s`$ is distributed as in the patch problem, we can reject cases with $`A_s>0.40`$ at $`99\%`$ confidence and cases with $`A_s>0.48`$ at $`99.9\%`$ confidence. A more realistic simulation with 9 patches that exclude 50% of the sphere and 4 that exclude only 10% gives very similar results. Thus the hypothesis that elliptical galaxies rotate about their long axes is firmly ruled out. Of course, this is neither a particularly surprising nor a new result; Binney (1985) reached the same conclusion from essentially the same data.
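The patch problem is simple to simulate directly. In the sketch below, an idealization rather than our actual calculation, each $`P_i`$ is replaced by a spherical cap covering a fraction $`f`$ of the sky; the pointwise expectation is $`\sqrt{(1-f)/(nf)}\approx 0.21`$ for $`n=13`$, $`f=0.63`$, and the spread over random cap configurations yields approximate rejection thresholds.

```python
import numpy as np

def patch_anisotropy_thresholds(n=13, f=0.63, n_trials=2000,
                                n_sky=1024, seed=1):
    """Monte Carlo distribution of the sample anisotropy A_s in the
    idealized 'patch' problem: each posterior density is a spherical
    cap covering a fraction f of the sphere. For each random
    configuration of n caps, the overlap count m is evaluated at
    n_sky random sky points and A_s = rms(m) / mean(m).
    """
    rng = np.random.default_rng(seed)
    cos_edge = 1.0 - 2.0 * f          # cap of fractional area f
    a_s = np.empty(n_trials)
    for t in range(n_trials):
        caps = rng.normal(size=(n, 3))
        caps /= np.linalg.norm(caps, axis=1, keepdims=True)
        sky = rng.normal(size=(n_sky, 3))
        sky /= np.linalg.norm(sky, axis=1, keepdims=True)
        m = (sky @ caps.T >= cos_edge).sum(axis=1)
        a_s[t] = m.std() / m.mean()
    # Approximate rejection thresholds at 99% and 99.9% confidence
    return np.quantile(a_s, [0.99, 0.999])
```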
Figure 6$`d`$ shows the parent distribution under the assumption that all objects rotate around their intrinsic short axes ($`C=\mathrm{\infty }`$, also known as “zero intrinsic misalignment”). Here, most objects are triaxial, again very different from the maximal ignorance result. Figure 6$`f`$ shows the orientation distribution for the sample, which has $`A_s=0.43`$. The assumption of zero intrinsic misalignment for all systems is excluded at approximately the $`99.6\%`$ confidence level. This result differs from that of Franx et al. (1991), who were able to reproduce the observed distribution of ellipticities and kinematic misalignment angles<sup>4</sup><sup>4</sup>4For galaxies with only major and minor axis kinematics, the misalignment angle is $`\mathrm{tan}^{-1}(v_{\mathrm{minor}}/v_{\mathrm{major}})`$. with a family of triaxial models rotating about their short axes. We have not explored the source of this disagreement in depth. While it may be due simply to our smaller sample, we suspect that the models with which Franx et al. can fit galaxies with large kinematic misalignments fail on more detailed comparison with multi-position-angle data.
### 4.3 Dynamical Configurations That Cannot Be Ruled Out
Of the parent distributions we have derived from various subsets of the dynamical models, we find no other cases that can be ruled out on the basis of the $`A_s`$ values. Some of the unexcludable cases nonetheless differ significantly from the maximal ignorance distribution. Of particular interest are the cases in the last four rows of Table 2, for which the derived parent distributions are shown in Figure 6. These distributions differ only in whether the mean rotation is assumed to be disklike or spheroidlike (see § 2.4). The parent distribution is more sensitive to this assumption than to any of the other dynamical parameters, save for the cases already ruled out above.
Figure 6 shows the triaxiality distributions for these four cases, indicating the fraction of nearly oblate, nearly prolate, and triaxial systems as defined in § 4.1. The prevalence of axisymmetric systems over triaxial ones is significantly affected by the rotation characteristics, in the sense that axisymmetry becomes less common if rotation is more spheroidlike. Moreover, the fractions of nearly prolate and nearly oblate objects are largely determined, respectively, by the character of the rotation in the long-axis and short-axis tubes. The peak at the prolate limit seen in the maximal ignorance result disappears entirely if the rotation in the long-axis tubes is spheroidlike. The dominant peak at the oblate limit is lowered by nearly a factor of two if the short-axis tube rotation is spheroidlike rather than disklike.
We consider this the most important result in this paper: if rotation in ellipticals is generally disklike, then triaxiality is rare; if spheroidlike, triaxiality is common. It follows that understanding the shapes of elliptical galaxies is closely linked with understanding whether weak disks are common structural components. It also follows that a physical understanding of what conditions during formation are likely to impose disklike or spheroidlike rotation on a hot stellar system would be extremely valuable.
## 5 Discussion
### 5.1 Previous Results on the Shape Distribution
A number of attempts have been made in the past to determine the parent intrinsic shape distribution of elliptical galaxies, mostly using photometry alone. It is interesting to see how our maximal-ignorance result compares to some of these.
Ryden (1992) fits a parent distribution to a sample of 171 measured ellipticities by letting the distribution assume the form of a circular Gaussian in axis ratio space. She finds a best-fit center to the distribution at $`b=0.98`$, $`c=0.69`$, implying that the most common shape is nearly oblate. The distribution is wide, however, with $`61\%`$ of galaxies having a triaxiality between $`0.2`$ and $`0.8`$, compared to $`35\%`$ for our sample. This difference may be attributable to Ryden’s assumption of a single peak; our method shows that the distribution may be bimodal. When recast in terms of $`(T,c)`$, Ryden’s distribution has a short-to-long axis ratio expectation value $`c=0.68`$, similar to our value of $`0.71`$.
Lambas et al. (1992) take a similar approach using 2135 measurements of ellipticities from the APM Bright Galaxy Survey. Using a Monte Carlo technique they find the elliptical Gaussian in axis ratio space which best reproduces their observations. Their results are remarkably different both from ours and from Ryden’s. They find the center of their distribution at a flattening $`c=0.55`$ with a width of $`0.2`$ in that dimension, implying that $`30\%`$ of ellipticals have $`c<0.4`$. The main reason for this difference is an excess of flat galaxies in their sample. Only $`2\%`$ of the galaxies in Ryden’s (1992) sample have an apparent ellipticity $`ϵ>0.6`$, but the APM sample has $`30\%`$ to $`40\%`$ in that range. Lambas et al. do not offer an explanation for this apparent inconsistency with previous photometric studies of elliptical galaxies. Conceivably a large S0 contamination could be the cause.
A nonparametric, maximum-entropy shape distribution for the Ryden (1992) ellipticity sample is derived by Statler (1994a) using a modified Lucy method. He finds a rather broad distribution in triaxiality, with $`47\%`$ of the galaxies having $`T<0.5`$, compared with $`70\%`$ in our distribution.
Using the same data as Statler (1994a), Tremblay and Merritt (1995) use a nonparametric maximum penalized likelihood estimator to derive the maximum-entropy shape distribution. They find that it is weakly bimodal and weighted towards oblate figures. Our distribution is significantly more bimodal and predicts fewer triaxial galaxies.
Using the same technique, Tremblay and Merritt (1996) estimate the parent distribution from a sample of 220 ellipticities. They assume that all galaxies have the same triaxiality and then proceed to calculate the distribution of intrinsic flattenings $`c`$. They find that a pure oblate or prolate distribution is inconsistent with the available data and that a division of intrinsic flattenings exists between bright and faint galaxies with peaks at $`c=0.75`$ and $`c=0.65`$ respectively. All our galaxies are bright and therefore our expectation value of $`c=0.71`$ agrees well with theirs.
Although the above studies, with the exception of Lambas et al. (1992), give similar results for the axis ratio $`c/a`$, none is able to put any real constraints on triaxiality, even when large samples are used. This demonstrates the need to include kinematic data in the models. Franx, Illingworth and de Zeeuw (1991) attempt to address this need by including the misalignment between the photometric and kinematic axes in their models. Studying a sample of 38 ellipticals, they conclude that a wide variety of distributions are consistent with the data, including ones similar to ours with both an oblate and a prolate peak.
### 5.2 Previous Results for Individual Galaxies
Some of the individual galaxies in our sample have been modeled previously. Statler (1994c) treats NGC 3379 using essentially the same data and methods applied here, except that the galaxy is fit in isolation, using a flat parent distribution. The result is that flattened nearly oblate shapes or rounder triaxial configurations are allowed by the data. Compared with this earlier result, the posterior density shown in Figure 6 is more constrained toward small $`T`$ due to the preference for near axisymmetry in the parent distribution.
Some objects in our sample have available additional kinematic or morphological constraints which are not included in our models. The best-studied example is NGC 1052, which has been modeled by Binney et al. (1990), Tenjes et al. (1993), and Plana and Boulesteix (1996). This galaxy has the best constrained shape in our sample; Figure 6 shows only a small permitted region around the oblate spheroid with $`c=0.63`$. The small triaxiality supports the use of axisymmetric models by Binney et al. (1990) to constrain the phase space distribution function. Applying the Jeans equation to the observed surface photometry and comparing the predicted velocity dispersion and azimuthal streaming to the observed kinematics, they find that NGC 1052 is consistent with a two integral distribution function. Tenjes et al. (1993), using the method of Franx, Illingworth and de Zeeuw (1991) and the presence of a gas disk to constrain the viewing angles, find that, depending on the specific kinematic model used, $`c`$ lies between $`0.4`$ and $`0.6`$ and the triaxiality is well constrained between $`0.56`$ and $`0.61`$. These values imply a very highly triaxial galaxy, and lie well outside of our $`95\%`$ highest posterior density region. Using similar methods Plana & Boulesteix (1996) calculate the triaxiality to be $`0.48`$ with a flattening of $`0.5`$. This is much flatter and more triaxial than our result. It is possible that including orientation constraints from the gas would alter our derived shape. However, the results of Tenjes et al. (1993) and Plana and Boulesteix (1996) are very sensitive to the orientation of the disk, and consequently to assumptions about its intrinsic flatness and circularity; even a small error here could change their results dramatically.
In a study similar to that of Binney et al. (1990), van der Marel et al. (1990) model the distribution functions of NGC 3379, NGC 4261, and NGC 4472. Their use of oblate axisymmetric models for NGC 3379 and NGC 4472 is supported by our results for these galaxies. For NGC 4261 they fit the observations with a prolate model with $`c=0.59`$, which is consistent with our triaxiality estimate and lies within our $`95\%`$ highest posterior density region.
At the risk of disappointing the reader, we have avoided discussing the orientations of individual galaxies in the sample. This is, admittedly, counter to the original motivation of DB, which was to determine if there is any relationship between the orientations of the galaxies and their radio jets. Although we have calculated orientation constraints for each of the galaxies, a full discussion of this topic would of necessity be lengthy, and is outside the scope of this paper. We will deal with this issue in a future publication.
## 6 Summary and Conclusions
By combining photometric and kinematic data with dynamical models using the method of Statler (1994b), we have derived constraints on the intrinsic shapes and orientations of 13 ellipticals from the Davies and Birkinshaw (1988) sample of radio galaxies. Using an iterative Bayesian approach we have then combined those results to estimate the parent shape distribution from which they were drawn, under the assumption that this parent distribution has no preferred orientation. In the process we have obtained improved constraints on the shapes of the individual objects.
We have found that the parent shape distribution shows a tendency toward bimodality, with peaks at the oblate and prolate limits. In the distribution derived under minimal assumptions about the galaxies’ internal dynamics, only about one-third of the objects would be strongly triaxial ($`0.2<T<0.8`$). However, the parent distribution does depend on dynamical assumptions. Some of these assumptions can be ruled out because they would require the sample to have a strong orientation bias; configurations in which all galaxies rotate purely about either their long axes or their short axes can be excluded on these grounds. On the other hand, configurations in which the mean motions in the short-axis and long-axis tube orbits are either disklike—dropping off away from the symmetry planes—or spheroidlike—staying approximately constant at a given radius—cannot be distinguished at this point. Whether the rotation is disklike or spheroidlike has a strong effect on the inferred shape distribution. Spheroidlike rotation in the long-axis or short-axis tubes, respectively, significantly reduces the fraction of nearly prolate or nearly oblate galaxies; bimodality is completely eliminated if the long-axis tubes are spheroidlike and the short-axis tubes disklike. In a nutshell, if rotation in ellipticals is generally disklike, then triaxiality is rare; if spheroidlike, triaxiality is common.
This inferential link between diskiness and axisymmetry complements the intuitive physical notion that the two ought to go hand in hand. There is evidence from the width of the Tully-Fisher relation that the disks of spiral galaxies are very nearly circular (Franx & de Zeeuw 1992), and indications from numerical experiments that growing even a weak disk in a triaxial halo can render the latter axisymmetric (Dubinski 1994). Whether weak disks in elliptical galaxies are detectable is another long-standing issue receiving renewed attention (Magorrian 1999). High-accuracy, multi-position-angle kinematic mapping may be able to reveal hidden disks, but the expected signatures are subtle. Some support is lent to the possibility that weak disks may be common by the kinematic similarities that the “standard elliptical” NGC 3379 shares with the S0 galaxy NGC 3115 (Statler & Smecker-Hane 1999). Theoretically, however, the origin of these particular kinematic features is not understood. As we have stressed, a physical understanding of the processes that may establish disklike or spheroidlike rotation in a hot stellar system is sorely needed.
We are indebted to Barbara Ryden and the referee, David Merritt, for numerous constructive comments. This work was supported by NASA Astrophysical Theory Program Grant NAG5-3050 and NSF CAREER grant AST-9703036.
# Ambiguities in fits to the complex X-ray spectra of starburst galaxies
## 1 Introduction
There have been a number of recent papers discussing ASCA and ROSAT X-ray observations of nearby starburst galaxies, in particular the two best-studied systems NGC 253 and M 82, with different interpretations of the resulting spectral fits (Moran & Lehnert 1997; Ptak et al. 1997; Strickland et al. 1997; Tsuru et al. 1997; Vogler & Pietsch 1999). All authors agree on the general complexity of the X-ray properties but, depending on the data and on the spectral models used in the analysis, they reached different conclusions. In particular, the choice of spectral model components and the resulting best-fitting element abundances are under debate.
By treating all available imaging and spectroscopic data from ASCA and ROSAT (both HRI and PSPC) in a self-consistent manner, Dahlem et al. (1998; hereafter Paper I) and Weaver et al. (2000; hereafter Paper II) were able to reconcile the apparent discrepancies, based on a mini-survey of 5 nearby edge-on starburst galaxies.
The results from Paper I and Paper II suggest that the combined ASCA and ROSAT PSPC integral spectra of NGC 253 and M 82 can be fit with comparable values of $`\chi ^2`$ by different combinations of spectral components, which means that there is an ambiguity in the choice of the best-fitting spectral model. By cross-checking the spectral results with ROSAT PSPC and HRI imaging data, a spectral composition of (at least) two thermal plasmas, with temperatures in the ranges 0.1–0.4 keV and 0.6–0.8 keV, respectively, plus a hard power law component turns out to be the only model combination that can explain all observational data of all galaxies in the sample simultaneously (Paper II).
This is in contrast with the recent findings by Cappi et al. (1999; hereafter C99), based on BeppoSAX data of NGC 253 and M 82. These authors claim that there is “compelling evidence for the presence of an extended hot thermal gas” of several keV temperature in these two galaxies. The purpose of the current letter is to investigate this apparent discrepancy between their results and ours by re-analyzing the BeppoSAX observations of NGC 253 and M 82, taking into account earlier results based on ROSAT and ASCA observations.
## 2 Observations and data reduction
All parameters of the BeppoSAX observations of M 82 and NGC 253 are as described by C99. The data were reduced in the standard fashion, using SAXDAS 2.0. LECS and MECS data were extracted for joint spectral fitting from a circular region centered on the position of the sources, using radii of 4′. Background subtraction was performed using standard files (Parmar, Oosterbroek, & Orr 1999).
Spectral fitting was performed using xspec in the following way. We first used the input model preferred by C99 to ensure that we could reproduce their results. Then we tried the model used in our analysis of the joint ASCA+ROSAT PSPC spectra (Paper I and Paper II).
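Schematically, the comparison of the two model compositions proceeds as in the following PyXspec fragment; the file names are placeholders, and the absorption component and starting parameters of the actual fits are not reproduced here.

```python
# Schematic PyXspec session; the file names are placeholders for the
# actual LECS/MECS products, and starting parameters are omitted.
from xspec import AllData, Model, Fit

AllData("1:1 lecs.pha 2:2 mecs.pha")   # joint LECS + MECS data groups
AllData.ignore("bad")

# "M+P": absorbed Mekal plasma plus power law (Paper II)
model = Model("wabs*(mekal + powerlaw)")
Fit.perform()
print("M+P:", Fit.statistic, "for", Fit.dof, "d.o.f.")

# "2M": two Mekal plasma components (C99)
model = Model("wabs*(mekal + mekal)")
Fit.perform()
print("2M :", Fit.statistic, "for", Fit.dof, "d.o.f.")
```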
In the following we will list and discuss our results for both NGC 253 and M 82. However, since we used the same data extraction, reduction, and spectral fitting technique for both galaxies, only one (NGC 253) will be presented in figures.
## 3 Results and discussion
### 3.1 NGC 253
We could reproduce the results of C99 within the uncertainties, using a spectral model with two Mekal plasma components (hereafter “2M”; see their Fig. 4). The fit to the BeppoSAX data of NGC 253 following the model preferred by us, with a Mekal and a power law component (hereafter “M+P”), is displayed in Fig. 1. This is evidently also an acceptable fit. The goodness of fit for our preferred model (Paper II) is $`\chi ^2`$ = 261.9 for 265 degrees of freedom (d.o.f.), i.e., $`\chi ^2/\nu `$ = 0.99, while that for the 2M model favored by C99 is $`\chi ^2`$ = 282.5 for 268 d.o.f., i.e., $`\chi ^2/\nu `$ = 1.05. The M+P model fits the data better than the 2M model at the highest and lowest energies of the passband. The results of the two spectral fits to the BeppoSAX data of NGC 253 are tabulated in Table 1. All uncertainties are given at the 90% confidence level for one interesting parameter; note that these apply only under the assumption that the chosen model components represent the different contributing emission mechanisms correctly.
The softest thermal emission component found in the ROSAT data is not required. Including a thermal plasma with a temperature of 0.26 keV improves $`\chi ^2`$, but not significantly; it was therefore left out of the fits to the BeppoSAX data.
The hard part of the X-ray spectrum can be fit with a power law that is compatible with those of Galactic X-ray binaries (XRBs) and can thus be explained naturally as the continuum emission from high-mass XRBs (HMXRBs; Paper II). The integral spectrum of all the point sources from the ROSAT PSPC observations is consistent with this interpretation, as is the contribution of this spectral component to the total X-ray flux (Paper I). There is no reason why all compact sources should emit a thermal spectrum.
The claim that there is hot thermal gas at a temperature of a few keV (C99) hinges only on the assumption that this is the only mechanism that could explain the observed Fe line around 6.7 keV. However, supernova remnants (SNRs) and XRBs, including HMXRBs, can produce both fluorescent and thermal Fe line emission, i.e., at 6.4 keV and 6.7 keV, respectively (e.g., Nagase 1989; White, Nagase, & Parmar 1995; Liedahl et al. 1999), while C99 assume that all the line emission is of thermal origin. Thus, the observed Fe line might well be a superposition of emission from hot gas and from X-ray binaries. The fitted equivalent width of the Fe line (Table 1) is in agreement with this interpretation. It, too, might be a composite of a (broad) thermal component and a (narrow) XRB contribution. However, a composite line fit cannot be performed with the current data, because the line is only marginally resolved.
The above finding that different models can fit the data equally well demonstrates that the “optimal” fit is model-dependent, as already stated in Paper II. Therefore, there is no reason to reject the M+P model. Moreover, as argued in Paper II, it is the physically most plausible choice of model.
Taking into account the ROSAT imaging results, which indicate clearly that there is a considerable number of unresolved compact sources in the central part of the disk of NGC 253 (Paper I), the most likely identification of these sources, based on their spectral properties and soft X-ray luminosities $`L_\mathrm{X}`$, is that they constitute a population of HMXRBs (Paper I). Thus, part of the emission distribution seen by BeppoSAX is not truly “diffuse” but is smeared out by the instrument’s broad point-spread function.
These point sources detected by ROSAT contribute about 50% of the flux from the central disk (Paper I and Paper II). Thus, they are very significant contributors to the measured total flux, especially in the hard part of the X-ray spectrum.
On the other hand, the spectral model preferred by C99 does not take into account the presence of HMXRBs and their spectral signature. Given the luminosities of emission mechanisms tracing the presence of high-mass stars in galaxies like NGC 253 and M 82, especially far-infrared radiation, a large number of HMXRBs must be expected to be present in them.
It is still unclear how the previously detected X-ray emitting thermal plasma (with temperatures of a few tenths of a keV) is heated, especially in the galaxy halos, up to several kpc away from the disk planes of the starbursts. The presence of another, extremely hot medium of several keV energy, contributing of order 2/3 of the total 2–10 keV flux as suggested by C99, would further exacerbate the problem of energy supply.
When taking into account the trade-off between metallicities and absorbing H I column densities in fitting the softest part of X-ray spectra (which cannot be resolved by BeppoSAX data only, but requires the low-energy response of ROSAT), extreme subsolar metallicities, $`Z`$, are not required to obtain a good fit (Paper II). This $`N_\mathrm{H}`$ vs. $`Z`$ dichotomy is another, independent ambiguity in the minimum $`\chi ^2`$ space of the spectral fits. Low metallicities in starburst galaxies, i.e., the galaxies with the highest star formation rates in the local Universe, would be hard to understand because of the proven presence of large numbers of massive stars, which are the most prolific producers of metals.
### 3.2 M 82
The same ambiguities are present in fits to the BeppoSAX data of M 82. We could fit the data almost equally well with the model of C99 and with ours from Paper I and Paper II. Just as for NGC 253, the M+P model fits the data points at the very highest energies slightly better than the 2M model does. The goodness of fit is 517.4/446 d.o.f. = 1.16 (2M model) and 466.9/442 d.o.f. = 1.06 (M+P model), respectively. Note that, just as for the combined ROSAT + ASCA data, the BeppoSAX data require another, soft thermal component to be added to the M+P model.
With the M+P spectral model composition, we obtain almost equally good fits with two very different metallicities. In one case, Z = 17 Z⊙ (constrained to be > 2.2 Z⊙ at the 90% confidence level); in the other, Z = 0.13 Z⊙. In the high-metallicity case the flux at ∼1 keV is modeled primarily as Ne and Fe-L line emission, while in the low-metallicity case it is modeled as a peak in the thermal distribution. The energy resolution of the LECS of ∼200 eV (FWHM) at 1 keV is insufficient to discriminate between the two options. Note that these two fits do not yet take into account the additional information obtained with ASCA and ROSAT, which requires an additional soft thermal component (Paper I and Paper II).
There is less evidence from ROSAT imaging for the existence of large numbers of compact sources in M 82. Instead, there appears to be a spatially extended, hard spectral component. Part of this might be truly diffuse, in which case the most likely interpretation is that of a very hot gaseous component, as suggested by C99. Only recently has the Chandra image of Griffiths et al. (2000) shown that there is indeed a population of compact sources in M 82, surrounded by diffuse emission. The compact sources in the central part of M 82 could not be resolved by ROSAT because they are too close to each other. The most likely identification is again that they are HMXRBs (Griffiths et al. 2000). Individual HMXRBs could also explain the observed X-ray variability in the hard part of the spectrum, while there is no evidence in the Chandra data for the presence of an AGN (Ptak & Griffiths 1999; Matsumoto & Tsuru 1999; Gruber & Rephaeli 1999; C99).
The measured position and equivalent width of the Fe line in M 82, $`6.63\pm 0.21`$ keV and $`60\pm 40`$ eV respectively, leave open whether the line emission comes from binaries, from diffuse hot gas, or from a superposition of both. In M 82 the width of the Fe line near 6.6 keV is unresolved. Thus, except for the (poorly constrained) position of the line centroid, no further information on the relative contributions of thermal and fluorescent line emission can be derived from the existing data. Both model compositions tested above fit the data (statistically) so well that no useful constraint can currently be placed on the possible contributions of a hot thermal plasma and of HMXRBs to the 2–10 keV flux of M 82 when they are fitted simultaneously.
## 4 Summary
There are several ambiguities in the fits to complex X-ray spectra of starburst galaxies, such as NGC 253 and M 82. The “best-fitting” model is not necessarily unique, because the spectral models required to explain all observations are more complex than can be fit unambiguously to one single dataset. In such cases statements that a fit is good at a certain significance level can be misleading, because they only apply if the correct spectral model composition was chosen. There are also intrinsic degeneracies, i.e., trade-offs of different fit parameters against each other (e.g., $`N_\mathrm{H}`$ vs. $`Z`$).
This study makes it clear how important it is to consider all available information, including in particular X-ray imaging results of extended sources, when interpreting their integral spectral properties. The new generation of X-ray satellites, Chandra, XMM, and Astro-E, will resolve much of this ambiguity because of their high spectral resolution, combined with high sensitivity and good imaging capabilities over wide bandpasses, rendering possible spatially resolved spectroscopy of individual (classes of) sources within nearby galaxies.
# THE NUCLEAR ACTIVITY OF THE GALAXIES IN THE HICKSON COMPACT GROUPS
## 1 INTRODUCTION
It is known that compact groups of galaxies provide the densest galaxy environments, denser than those of binary galaxies, loose groups of galaxies, and clusters of galaxies (Hickson 1982; Hickson et al. 1992). Therefore, frequent galaxy collisions are expected to trigger either some nuclear activity or intense star formation in their member galaxies (Hickson et al. 1989; Zepf, Whitmore, & Levison 1991; Zepf & Whitmore 1991; Zepf 1993; Verdes-Montenegro et al. 1998). Further, compact groups would evolve into other populations in the universe because they would be able to merge into one stellar system within a timescale shorter than the Hubble time (Hickson et al. 1992; Barnes 1989; Weil & Hernquist 1996). Indeed, previous studies have shown possible evidence that galaxy collisions may trigger either nuclear activity or starbursts in the HCGs; e.g., HCG 16 (Ribeiro et al. 1996; de Carvalho & Coziol 1999), HCG 31 (Iglesias-Páramo & Vílchez 1997a), HCG 62 (Valluri & Anupama 1996), HCG 90 (Longo et al. 1995), and HCG 95 (Iglesias-Páramo & Vílchez 1997b).
On the other hand, other statistical studies have shown that there may be no strong evidence for the unusually enhanced activity in the HCGs. Hickson et al. (1989) found that the far-infrared (FIR) emission is enhanced in the HCGs. However, later careful analysis of FIR data of HCGs showed that there is no firm evidence for the enhanced FIR emission in the HCGs (Sulentic & de Mello Rabaca 1993). Radio continuum properties of the HCG galaxies do not show evidence for the enhanced nuclear activity with respect to field spiral galaxies although the radio continuum emission from the nuclear region tends to be stronger than that from field spirals (Menon 1992, 1995).
More recently, Coziol et al. (1998) have shown from a spectroscopic survey for 17 HCGs (de Carvalho et al. 1997) that active galactic nuclei (AGN) are preferentially located in the most early-type and luminous members in the HCGs. This result suggests possible relations among activity types, morphologies, and densities of galaxies in HCGs. Vílchez & Iglesias-Páramo (1998a) made an H$`\alpha `$ emission imaging survey for a sample of HCGs and found that over 85% of the early-type galaxies in their sample were detected in H$`\alpha `$ (Vílchez & Iglesias-Páramo 1998b). However, they interpreted that the excess emission in H$`\alpha `$ is attributed to photoionization by massive stars rather than AGN. Therefore, it is still uncertain what kind of activity is preferentially induced in the nuclear regions of HCG galaxies.
In order to investigate the nuclear emission-line activity of HCG galaxies in detail, our attention is again directed to how frequent galaxy collisions are related to the occurrence of both nuclear activity and star-formation activity in HCG galaxies. In this paper, we present results of our optical spectroscopic program for a sample of 69 galaxies belonging to 31 HCGs which are randomly selected from the list of HCGs (Hickson 1982). In the original catalog of HCGs (Hickson 1982), 100 compact groups with 493 galaxies are listed. However, eight groups have now been dropped from the original sample because they do not have more than two galaxies whose redshifts are accordant (Hickson et al. 1989; Hickson 1993; see also Sulentic 1997). Therefore, our sample is selected from the remaining 92 HCGs.
## 2 OBSERVATIONS
We have performed optical spectroscopy of 69 galaxies in the 31 groups (see Table 1). The spectroscopic observations were made with the Okayama Astrophysical Observatory (OAO) 188 cm telescope, the new Cassegrain spectrograph, and an SITe 512×512 CCD camera during the period between 1996 February and 1997 January. The slit dimension was 1.8 arcsec (width) × 5 arcmin (length). Two-pixel binning was made along the slit, and thus the spatial resolution was 1.75 arcsec per element. The 600 grooves mm<sup>-1</sup> grating was used to cover the 6300 – 7050 Å region with a spectral resolution of 3.4 Å (∼157 km s<sup>-1</sup> in velocity at 6500 Å). The observations were made under photometric conditions. The typical seeing during the runs was 2 arcsec.
The data were analyzed using IRAF<sup>1</sup><sup>1</sup>1Image Reduction and Analysis Facility (IRAF) is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.. We also used a special data reduction package, SNGRED (Kosugi et al. 1995), developed for OAO Cassegrain spectrograph data. The reduction was made with a standard procedure: bias subtraction, flat fielding with the data of the dome flats, and cosmic ray removal. Flux calibration was obtained using standard stars available in IRAF. The nuclear spectra were extracted for individual galaxies with a 1.8 arcsec × 1.75 arcsec aperture. The extracted nuclear spectra are shown in Figure 1. A journal of the observations is given in Table 1. We also give morphological types of galaxies taken from Hickson (1993; see also Hickson, Kindle, & Huchra 1988; Mendes de Oliveira & Hickson 1994) and de Vaucouleurs et al. (1991) in Table 1.
## 3 RESULTS
### 3.1 Classification of Emission-line Activity
In usual classification schemes for emission-line galaxies, some combinations of two emission-line intensity ratios (e.g., \[O III\]$`\lambda `$5007/H$`\beta `$ versus \[N II\]$`\lambda `$6583/H$`\alpha `$) are often used (Veilleux & Osterbrock 1987). However, since our spectroscopic program was originally devoted to finding kinematical peculiarities of HCG galaxies (Nishiura et al. 1999), our nuclear spectra cover only the wavelength range between 6300 – 7050 Å. Therefore, the emission lines available for the classification of nuclear activities are \[O I\]$`\lambda `$6300, \[N II\]$`\lambda \lambda `$6548,6583, H$`\alpha `$, and \[S II\]$`\lambda \lambda `$6717,6731. Among the possible combinations of these emission lines, the most reliable indicator for classifying nuclear activity appears to be the \[N II\]$`\lambda `$6583/H$`\alpha `$ ratio (hereafter \[N II\]/H$`\alpha `$). In fact, Ho, Filippenko, & Sargent (1997) showed from the spectroscopic analysis of more than 300 nearby galaxies that this ratio is useful in distinguishing between AGN and H II nuclei; i.e., \[N II\]/H$`\alpha `$ ≥ 0.6 for AGN while \[N II\]/H$`\alpha `$ < 0.6 for H II nuclei. Therefore, applying this criterion, we classify the emission-line activity of our HCG galaxies. Galaxies without emission are referred to as “Abs”; i.e., only stellar absorption features are seen in their optical spectra. For eight galaxies, we detected only \[N II\] line emission and did not detect H$`\alpha `$ line emission (HCG 10a, 30b, 37a, 51b, 62a, 68a, 88a, and 93c). We classify them as AGN. The emission line flux data and the results of the classification are given in Table 2 and Table 3, respectively.
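The decision rule is simple enough to state as a short code fragment (a sketch in our own notation):

```python
def classify_nucleus(f_nii, f_halpha):
    """Activity class from the [N II]6583 and Halpha fluxes,
    following the Ho, Filippenko, & Sargent (1997) criterion.
    Undetected fluxes are passed as None.
    """
    if f_nii is None and f_halpha is None:
        return "Abs"        # stellar absorption features only
    if f_halpha is None:
        return "AGN"        # [N II] detected without Halpha
    if f_nii is None:
        return "H II"       # [N II]/Halpha necessarily small
    return "AGN" if f_nii / f_halpha >= 0.6 else "H II"
```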
As shown in Figure 1, some nuclei show evidence for H$`\alpha `$ absorption. Since the H$`\alpha `$ absorption leads to an underestimate of the H$`\alpha `$ emission, it would be better to subtract from each target galaxy spectrum a template spectrum whose absorption features are nearly the same as those of the spectrum concerned (see, for example, Ho et al. 1997). Since, however, we do not have such a template database, we used the observed \[N II\]/H$`\alpha `$ ratios in our classification.
In particular, in the case of very weak emission-line galaxies, the H$`\alpha `$ emission may not be seen if the H$`\alpha `$ absorption feature is strong. The most serious case would be poststarburst galaxies, which show very strong Balmer absorption (e.g., Taniguchi et al. 1996 and references therein). Poststarburst galaxies have H$`\alpha `$ absorption equivalent widths $`EW`$(H$`\alpha `$) ≳ 3 Å. However, the galaxies with H$`\alpha `$ absorption in our sample have $`EW`$(H$`\alpha `$) ≲ 2 Å; i.e., our sample contains no conspicuous poststarburst galaxy. We therefore expect that our emission-line classification is not seriously affected by H$`\alpha `$ absorption.
Recently, Coziol et al. (1998) studied the nuclear activity of southern HCG galaxies. They obtained optical spectra of the 82 brightest galaxies in a sample of 17 HCGs (de Carvalho et al. 1997). Among the 82 galaxies, 40 are original HCG members identified by Hickson (1982). Although their sample is taken from HCGs located in the southern hemisphere, 13 galaxies in their sample were also observed by us. Since they used the template subtraction method in their classification of nuclear activity, their classification seems to be more reliable than ours. In order to examine how reliable our classification based on the \[N II\]/H$`\alpha `$ ratio without absorption correction is, we compare our results with those of Coziol et al. (1998). The basic data of the 13 HCG galaxies observed by both Coziol et al. (1998) and us are summarized in Table 4. We find that both studies give the same activity types for the late-type galaxies. However, for the early-type galaxies, although we classified three galaxies (HCG 40a, 42a, and 87b) as absorption galaxies, they classified them as AGNs (dwarf LINERs). These differences appear to arise because we did not apply the template subtraction method while they did. However, it should be noted that none of the three galaxies is a typical Seyfert nucleus; all are dwarf LINER nuclei. Although our analysis may not miss typical Seyfert nuclei, it is safe to say that about a half (e.g., 3/7 ≈ 43%) of the early-type galaxies classified as absorption galaxies in our study may be AGNs. This point will be taken into account in the later discussion.
Finally, we classified 63 of the 69 galaxies we observed: 28 AGNs, 16 H II nuclei, and 19 galaxies without line emission. Three of the remaining six are redshift-discordant galaxies (HCG 73a, 87d, and 92a). For the other three (HCG 34a, 42b, and 52a), the signal-to-noise ratios of the spectra are too low for classification. We exclude these six galaxies from the sample in the statistical analyses below.
### 3.2 Nuclear Activity versus Group Properties
Although the selection of the HCGs was made homogeneously according to the criteria of Hickson (1982), it is known that the dynamical properties differ from HCG to HCG (Hickson et al. 1992). Therefore, it is interesting to compare the nuclear activity of the member galaxies with the dynamical properties of the groups.
As we mentioned previously, we adopt the \[N II\]/H$`\alpha `$ intensity ratio as a measure of the nuclear activity. Since it is known that the nuclear activity type depends on the morphological type of the host galaxy (e.g., Ho et al. 1997), i.e., AGNs favor early-type galaxies while star-formation activity favors later-type ones, it is necessary to investigate relationships between the nuclear activity and the group properties for each morphological type. However, it is generally difficult to classify the morphology of galaxies which are interacting with their partner(s) (Mendes de Oliveira & Hickson 1994). Therefore, although we give detailed morphological types for the member galaxies in our sample in Table 1, we classify them broadly into the following three classes: 1) early-type galaxies (E/S0), 2) early-type spirals (S0a – Sbc), and 3) late-type spirals (Sc or later). In Figures 2 – 4, we show diagrams of \[N II\]/H$`\alpha `$ against the number density of the groups $`\rho _\mathrm{N}`$ (Hickson et al. 1992), the radial velocity dispersion of the groups $`\sigma _\mathrm{r}`$ (Hickson et al. 1992), and the crossing time of the groups $`t_\mathrm{c}`$ (Hickson et al. 1992), respectively. We apply the Spearman rank statistical test to each of the correlations shown in Figures 2, 3, and 4 in order to test whether the \[N II\]/H$`\alpha `$ ratio is correlated with each dynamical parameter. A summary of the statistical tests is given in Table 5. We find that there is no statistically significant correlation. Therefore, we conclude that the nuclear activity of the galaxies studied here bears no physical relation to the dynamical properties of the groups. For disk galaxies in nearby HCGs, Iglesias-Páramo & Vílchez (1999) found no clear correlations between the $`L_{\mathrm{H}\alpha }/L_\mathrm{B}`$ ratio and the dynamical properties of the groups. Our results are consistent with theirs.
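The test itself is standard; a minimal sketch using SciPy, with placeholder array names for the tabulated ratios and group parameters:

```python
import numpy as np
from scipy import stats

def activity_vs_group(nii_ha, group_param):
    """Spearman rank test of the [N II]/Halpha ratio against one
    dynamical property of the groups (rho_N, sigma_r, or t_c).
    Returns the rank correlation coefficient and two-sided p-value.
    """
    nii_ha = np.asarray(nii_ha)
    group_param = np.asarray(group_param)
    rho, p_value = stats.spearmanr(nii_ha, group_param)
    return rho, p_value
```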
### 3.3 Comparison of the Nuclear Activity between the HCG Galaxies and Field Galaxies
Our spectroscopic analysis shows that an AGN is found in almost half of the HCG galaxies and star-forming activity in a quarter of the sample. An important question is whether or not these frequencies are unusual with respect to those in environments with fewer galaxy collisions. In order to examine this issue, we first construct a control sample consisting of so-called field galaxies and then compare the nuclear activity between the HCG galaxies and the field galaxies.
Recently, Ho et al. (1995, 1997) made an extensive spectroscopic survey of nearby galaxies using the Palomar Observatory 5 m telescope. Their sample contains 486 galaxies with $`B_T\le 12.5`$ and $`\delta >0^{\circ }`$, where $`B_T`$ is the apparent total $`B`$ magnitude and $`\delta `$ is the declination. In order to construct a sample of field galaxies, we omitted the following galaxies from their sample: 1) galaxies belonging to the Virgo cluster, 2) binary/interacting galaxies, 3) HCG galaxies (HCG 44a = NGC 3190, HCG 44b = NGC 3193, HCG 44c = NGC 3185, HCG 61a = NGC 4169, HCG 68a = NGC 5353, and HCG 68b = NGC 5354), 4) NGC 1003, whose activity type is uncertain, and 5) five galaxies whose Hubble types are uncertain (NGC 63, 812, 2342, 7798, and UGC 3714). Excluding these galaxies, we obtain a sample of 382 field galaxies consisting of 167 AGNs, 174 H II nuclei, and 41 normal galaxies. This sample is matched to the HCG sample in neither apparent magnitude nor morphology. Since the majority of the HCG galaxies are fainter than the field galaxies observed by Ho et al. (1997), it is difficult to obtain a magnitude-matched sample of field galaxies. However, when comparing the nuclear activity between the HCG galaxies and the field galaxies, we take account of the morphological difference between the two samples.
In Figure 5, we show the frequency distributions of activity types for the HCGs (upper panels) and for the field (lower panels). Applying the $`\chi ^2`$ test, we examine whether or not the frequency distributions of the activity types for the HCGs differ significantly from those for the field galaxies for the morphological samples of E – S0, S0a – Sbc, Sc or later, and all galaxies (the total sample). We adopt the null hypothesis that the HCG galaxies and field galaxies come from the same underlying distribution of activity types. The results of our statistical test are summarized in Table 6. Although the difference in the frequency distribution is not statistically significant for any individual morphological type, the difference for the total sample is significant: the HCGs have fewer H II nuclei and more absorption galaxies than the field. The H II nuclei and the absorption galaxies are found in 26% and 31% of the HCG galaxies, respectively. On the other hand, in the field, the H II nuclei comprise 46% of the sample while the absorption galaxies comprise only 11%.
Taking into account that the nuclear activity type depends on the morphological type of the host galaxy (e.g., Ho et al. 1997), we examine the difference in the morphological type distribution between the HCG galaxies and the field galaxies. In Figure 6, we show the frequency distributions of morphological types for each activity type and for the total sample. Applying the $`\chi ^2`$ test, we examine whether or not the frequency distributions of the morphological types for our HCG galaxies differ significantly from those for the field galaxies for the nuclear activity types of AGN, H II, and absorption, and for the total sample. We adopt the null hypothesis that the HCG galaxies and field galaxies come from the same underlying distribution of morphological types. The results of our statistical test are summarized in Table 7. Our HCG sample contains more E – S0 galaxies and fewer late-type spirals than the field. This leads to the deficit of H II nuclei in the HCG sample, because H II nuclei favor such late-type spirals. However, the frequency of occurrence of AGNs in the HCGs is nearly the same as that in the field. A remarkable difference is that H II nuclei are found in E – S0 galaxies more frequently in the HCGs ($`\approx `$ 13%) than in the field ($`\approx `$ 2%). This result appears consistent with the finding by Zepf et al. (1991) of a number of early-type galaxies with unusually blue colors, suggesting enhanced star formation in early-type galaxies.
We have found some interesting differences in the frequency distributions of the activity types between the HCGs and the field, as described above. However, since the frequency distribution of morphological types differs between the two samples, we cannot conclude that these differences are real. In order to check the effect of the difference in the morphological type distributions, we estimate the expected frequencies of AGNs, H II nuclei, and absorption galaxies in the HCGs under the assumption that the morphological type distribution in the HCGs is the same as that in the field. For example, the expected number of AGNs in the HCGs is $`N_{\mathrm{AGN}}^{\mathrm{exp}}(\mathrm{HCG})=N_{\mathrm{E}\mathrm{S0}}\times P_{\mathrm{AGN},\mathrm{E}\mathrm{S0}}(\mathrm{Field})+N_{\mathrm{S0a}\mathrm{Sbc}}\times P_{\mathrm{AGN},\mathrm{S0a}\mathrm{Sbc}}(\mathrm{Field})+N_{\mathrm{Sc}}\times P_{\mathrm{AGN},\mathrm{Sc}}(\mathrm{Field})`$, where $`P_{x,y}(\mathrm{Field})`$ is the probability that field galaxies of morphological type $`y`$ have activity type $`x`$. We estimate both $`N_{\mathrm{HII}}^{\mathrm{exp}}(\mathrm{HCG})`$ and $`N_{\mathrm{Abs}}^{\mathrm{exp}}(\mathrm{HCG})`$ in the same way. We then adopt the null hypothesis that the observed distribution is the same as the expected distribution and apply the $`\chi ^2`$ test. The results are given in Table 8. We find no statistically significant difference in the activity-type distributions between the HCGs and the field. Hence, we conclude that the nuclear activity in the HCGs is not different from that in the field, under the assumption that the morphology-activity relation is the same in both environments.
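A minimal sketch of this morphology-corrected comparison, in Python with SciPy: the per-morphology field counts below are invented placeholders chosen only so that their column totals (167 AGNs, 174 H II nuclei, 41 absorption galaxies) match the text, and the HCG morphology split is likewise illustrative, so the printed numbers do not reproduce Table 8.

```python
import numpy as np
from scipy.stats import chisquare

# Field sample: rows = morphology (E-S0, S0a-Sbc, Sc or later),
# columns = (AGN, H II, Abs).  The per-morphology breakdown is invented;
# only the column totals (167, 174, 41) are taken from the text.
field = np.array([[40.0, 2.0, 30.0],
                  [90.0, 60.0, 8.0],
                  [37.0, 112.0, 3.0]])
p_field = field / field.sum(axis=1, keepdims=True)   # P(activity | morphology)

n_hcg_morph = np.array([25.0, 26.0, 12.0])  # illustrative HCG morphology counts
expected = n_hcg_morph @ p_field            # expected (AGN, H II, Abs) in HCGs
observed = np.array([28.0, 16.0, 19.0])     # observed HCG counts (section 3.1)

expected *= observed.sum() / expected.sum() # guard against rounding mismatch
chi2, p_value = chisquare(observed, expected)
print(f"chi^2 = {chi2:.2f}, P(chi^2) = {p_value:.2f}")
```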
As mentioned in section 3.1, our spectral analysis may miss dwarf LINERs in roughly half of the early-type galaxies studied here. If we assume that half of the early-type galaxies classified as “Abs” could be AGNs, our 63 HCG galaxies are reclassified into 36 AGNs, 16 H II nuclei, and 11 absorption galaxies. In this case, we obtain $`P(\chi ^2)=0.50`$. This means that the activity distribution of the HCG galaxies is again indistinguishable from that of the field galaxies.
## 4 DISCUSSION
Our main results are summarized below. (1) We have described the results of our spectroscopic program for a sample of 63 galaxies in 28 HCGs. In this sample we found 28 AGNs, 16 H II nuclei, and 19 normal galaxies showing no emission lines; we used this HCG sample for the statistical analyses. (2) Comparing the frequency distributions of activity types between the HCGs and the field, with field data taken from Ho, Filippenko, & Sargent (382 field galaxies), we find that the frequency of occurrence of H II nuclei in the HCGs is significantly lower than that in the field. However, since our HCG sample contains more early-type galaxies than the field, this difference for the H II nuclei may be due to a morphology bias, because H II nuclei are known to be rarer in early-type galaxies than in later types. (3) Correcting for this morphological bias in the HCG sample, we find no significant difference in the frequency of occurrence of emission-line galaxies between the HCGs and the field. This implies that the dense galaxy environment in the HCGs does not trigger either AGNs or nuclear starbursts. (4) Since our classification of nuclear activity is based on the raw optical spectra, we may miss some less-luminous AGNs, in particular in early-type galaxies. Even when this effect is taken into account, the distributions of activity types of the HCG galaxies are indistinguishable from those of the field galaxies.
Our finding seems surprising because it is widely accepted that galaxy interactions lead to nuclear activity such as AGN, to nuclear starbursts, or to both (see for a review Shlosman, Begelman, & Frank 1990; Barnes & Hernquist 1992). Indeed, in the 1980s, several systematic observational investigations of interacting or binary galaxies suggested that galaxy collisions may trigger both nuclear activity and intense star formation (e.g., Kennicutt et al. 1984; Keel et al. 1985; Dahari 1985; Bushouse 1986, 1987), although the statistical significance was not very high, i.e., $`\approx `$ 90 – 95% (see for recent papers De Robertis, Yee, & Hayhoe 1998; Taniguchi 1999). In addition, luminous and ultraluminous infrared galaxies are often found among strongly interacting and merging galaxies (Sanders et al. 1988; see for a review Sanders & Mirabel 1996). Numerical simulations of interacting or merging galaxies have shown that gas fueling driven by galaxy interaction occurs efficiently (e.g., Noguchi 1988; Olson & Kwan 1990a, 1990b; Mihos & Hernquist 1994b).
If tidal interactions led to the formation of AGNs and/or nuclear starbursts, we would observe a large number of such active galaxies in the HCGs, because the member galaxies are expected to have experienced many tidal interactions during the course of their dynamical evolution. Galaxy interactions affect the star formation activity in galactic disks because the effect of tidal interactions is much stronger in the outer parts than in the nuclear regions (e.g., Noguchi & Ishibashi 1986; see also Kennicutt et al. 1987). Indeed, radio studies have revealed that a large fraction of HCG spirals are H I deficient (Williams & Rood 1987; Huchtmeier 1997). If an HCG contains several gas-rich spiral galaxies, the average star formation rate would be expected to be more enhanced than that in field galaxies (e.g., Young et al. 1986). However, such an excess has not been confirmed by IRAS observations (Sulentic & De Mello Rabaca 1993 and references therein). Although the deficiency of atomic hydrogen gas in HCG spirals implies that intense star formation should have occurred in HCG galaxies, Moles et al. (1994) concluded from optical and infrared observations that there are no strong starbursting galaxies in HCGs. These results indicate that frequent galaxy collisions do not always strongly increase the star formation rate. Although the lack of far-infrared enhancement may be partly attributed to the preference of the HCGs for early-type spiral galaxies as well as ellipticals, it should be noted that roughly half of the galaxies in the HCGs are late-type spirals and irregular galaxies (Hickson et al. 1988; Mendes de Oliveira & Hickson 1994). Therefore, it appears that off-nuclear star-formation activity is also not enhanced in the HCGs with respect to field galaxies.
Coziol et al. (1998) showed from a spectroscopic survey of 17 HCGs (de Carvalho et al. 1997) that AGNs are preferentially located in the most early-type and luminous members of the HCGs, suggesting a correlation between activity type, morphology, and galaxy density in HCGs. They searched for additional possible member galaxies outside the original HCG membership and then found the above interesting observational properties. However, our spectroscopic survey covered only the original HCG members; therefore, we do not consider our results inconsistent with theirs. An interesting point raised by Coziol et al. (1998) is that AGNs are preferentially found in luminous, early-type galaxies. Verdes-Montenegro et al. (1998) showed from their <sup>12</sup>CO($`J`$=1-0) emission survey of a large number of HCG galaxies that a number of early-type galaxies are detected in CO as well as in the FIR. In addition, early-type galaxies with unusually blue colors were found by Zepf et al. (1991). Although these may still be circumstantial lines of evidence, they suggest that the majority of early-type galaxies in the HCGs are affected by some environmental effect. One possible important effect is a merger between an early-type galaxy and a gas-rich galaxy, such as a late-type spiral or a small satellite galaxy, since mergers between unequal galaxies may lead to the formation of S0 galaxies (Bekki 1998).
The above arguments suggest that mere tidal interactions between galaxies are not responsible for triggering intense nuclear activity. Recently, minor mergers, rather than mere tidal interactions, have come to be appreciated as a more important triggering mechanism both for nuclear starbursts (Mihos & Hernquist 1994a; Hernquist & Mihos 1995; Taniguchi & Wada 1996) and for Seyfert nuclei (De Robertis et al. 1998; Taniguchi 1999; see for an earlier indication Gaskell 1985). If this is the case, it is not surprising that the nuclear activity in the HCG galaxies is not significantly different from that in the field galaxies. Furthermore, if major mergers are more important for activating more luminous starbursts and AGNs (e.g., Sanders et al. 1988), it follows that most of the HCGs have not yet experienced such major mergers among their member galaxies. Since the dynamical relaxation timescale of the HCGs is shorter than the Hubble time, each HCG is expected to merge into a single object within a timescale of several Gyr (Hickson et al. 1992). Therefore, the HCGs are expected to evolve into luminous or ultraluminous infrared galaxies via multiple mergers (Xia et al. 1997; Taniguchi, Wada, & Murayama 1997; Taniguchi & Shioya 1998; Lípari et al. 2000; Borne et al. 2000), into quasars (Sanders et al. 1988; Taniguchi, Ikeuchi, & Shioya 1999), or into ordinary-looking elliptical galaxies (Barnes 1989; Weil & Hernquist 1996; Nishiura et al. 1997).
We are grateful to the staff of OAO for their kind assistance with the observations. We thank an anonymous referee for useful comments and suggestions. YO and TM are JSPS Fellows. This work was partly supported by the Ministry of Education, Science, Culture, and Sports (Nos. 07044054, 10044052, and 10304013).
# The Morphologies of the Small Magellanic Cloud
## 1 Introduction
Galaxy interactions play a key role in current understanding of galaxy formation and evolution. The dominant physical effect of an interaction is generally thought to arise from the tidal forces exerted between galaxies. Those tidal forces remove angular momentum from the gas, the gas falls toward the center of the galaxies, and that inflow results in a nuclear, or at least centrally concentrated, episode of star formation (cf. Mihos, Richstone, & Bothun 1992). However, for small, gaseous galaxies within a larger dark matter halo (for which the encounter velocity is greater than the internal velocity dispersion), the dominant effect of a close interaction or collision could be hydrodynamic as the gaseous components of the two galaxies interact. Perhaps this type of interaction is more typical in the protogalactic environment, where small sub-galactic stellar aggregates are coalescing into a larger galaxy, and among interacting satellite galaxies within the halos of current galaxies. The nearest interacting satellite galaxies for study are the Small and Large Magellanic Clouds (SMC and LMC, respectively).
The Magellanic system is highly complex and clearly interacting. It is an ideal laboratory for examining the effects of interactions, such as the development of tidal material and the triggering of star formation. On the largest scales, the Magellanic Stream (Mathewson, Cleary, & Murray 1974) is evidently a relic of an interaction, although its origin as tidal, rather than hydrodynamic, is still debated because of the lack of stars found in the stream (cf. Guhathakurta & Reitzel 1998). Stellar structures in the Magellanic system that appear to be tidal in origin were identified by Shapley (1940; the eastern SMC wing), by de Vaucouleurs (1955; the LMC tidal tails, although later de Vaucouleurs (de Vaucouleurs & Freeman 1972) partially retracted his claim due to potential confusion with galactic diffuse emission), by Hindman, Kerr, & McGee (1963; the H I bridge between the LMC and SMC), by Irwin, Demers, & Kunkel (1990; the stellar bridge between the LMC and SMC), and by Putman et al. (1998; the leading Magellanic Stream). Recent numerical simulations (only including gravity; Gardiner and Noguchi 1996) show that such dynamics reproduce many of the observed features, including the SMC wing and the line-of-sight depth of the outer regions of the SMC (Hatzidimitriou and Hawkins 1989).
Our study of the distribution of stars in the SMC is complementary to that of the outer SMC by Gardiner and Hatzidimitriou (1992). The structure of the central SMC is currently somewhat more uncertain than that of the outer regions (cf. Hatzidimitriou, Cannon, and Hawkins 1993 for a discussion). We demonstrate, based on the distribution of stars of different ages within the central SMC (4 $`\times `$ 4), that although the existence of tidal forces on the SMC is not in dispute, the visible appearance of the central SMC arises primarily from hydrodynamic effects. The underlying, old stellar population of the central SMC is relatively undisturbed (as concluded by Dopita et al. 1985; Hardy et al. 1989 on the basis of kinematic arguments). These results provide direct evidence of star formation triggered by the interaction of the SMC with another gaseous system (presumably, but not demonstrated to be, the LMC). Several common features, such as the “bar” and the “outer arm” are not seen in the underlying population and so are not true dynamical structures.
## 2 The Data
The data come from the ongoing Magellanic Cloud Photometric Survey (cf. Zaritsky, Harris, & Thompson 1997). Using the Las Campanas Swope telescope (1m) and the Great Circle Camera (Zaritsky, Shectman, & Bredthauer 1996), we have been drift scanning both Magellanic Clouds in $`U,B,V,`$ and $`I`$. The effective exposure time is between 4 and 5 min for SMC scans and the pixel scale is 0.7 arcsec/pixel. The data are reduced using a pipeline that utilizes DAOPHOT (Stetson 1987) and IRAF (IRAF is distributed by the National Optical Astronomy Observatories, which are operated by AURA, Inc., under contract to the NSF). Only stars with both $`B`$ and $`V`$ detections are included in the catalog. A complete description of the SMC data will be presented by Zaritsky et al. (2000).
A V-band luminosity map of the SMC is constructed from the photometric catalog of stars with $`m_V<20`$ and shown in Figure 1. Several well-known, distinctive features are evident. First, there is a general elongation from North-East to South-West, most noticeably in the central region that is often referred to as the “bar” of the SMC. Second, there are high surface brightness knots directly to the east of the main body of the SMC that are also accompanied by a distortion of the fainter SMC isophotes. These features are part of the inner SMC wing, first identified by Shapley (1940), that extends to the bridge and intercloud region between the SMC and LMC (Irwin, et al. 1990). The general impression from this Figure is that the SMC is an irregular system, possibly with a bar, and with little, if any, spiral structure.
By having a stellar catalog rather than the images alone, we can separate different stellar populations and independently examine their distributions. First, we select upper main sequence stars ($`m_V<18.5`$, i.e., $`M_V<-0.4`$ for $`m-M=18.9`$, and $`-0.3<B-V<0.3`$, which corresponds to ages $`\lesssim 2\times 10^8`$ years) and show their spatial density distribution in Figure 2. The greyscale value of each pixel corresponds to the number of stars within that pixel, with dark pixels indicating a high density of stars. Here we see a different morphology from that illustrated in Figure 1. The NE-SW elongation is more marked, there is a pronounced extension of these stars toward the extreme SW that was not visible in Figure 1, a concentration in the NE that appears to be a distinct linear feature perpendicular to the main elongation of the SMC, and the knots in the wing are relatively more prominent than in Figure 1. The young stars extend along three different directions toward the edges of the survey region and apparently beyond it, toward the East along the wing (cf. Grondin, Demers, & Kunkel 1992; Demers & Battinelli 1998) and toward the SW. The system of young stars is highly disturbed and somewhat concentrated toward the body of the SMC, but it extends to large angular distances. The young star distribution is qualitatively similar to the H I distribution (Stanimirovic et al. 1999).
In contrast, our second selected stellar population (giants and red clump stars, $`m_V<19.5`$, i.e., $`M_V<0.6`$, and $`B-V>0.7`$, corresponding to stars with ages $`\gtrsim 1`$ Gyr) is highly regular (Figure 3). There is no evidence of the wing, the SW extension, or the NE shell. There are distinct differences in stellar densities among scans, mostly due to variable seeing conditions and hence catalog completeness levels. We have reobserved regions with poor ($`>`$ 2 arcsec) seeing, but those data are not included here. We have also obtained data to fill in the gaps between scans, but again those are not yet available. A complete discussion of the surface brightness distribution of the SMC with the final data will be given elsewhere, but it is evident that the distribution of the older stellar population is much more regular than that of the younger population. Tidal forces have not noticeably distorted the projected morphology of the older, central SMC population. The only potential distortion of the older population appears to be a subtle asymmetry toward the South, but we cannot yet determine whether this is an artifact of the scan-to-scan variations. The conclusion to be derived from Figure 3 is that the features that one might have thought were tidally induced (the wing and the NE and SW extensions) are not present in the distribution of old ($`>`$ 1 Gyr) stars, and so cannot solely originate from tidal effects (i.e., normal stellar populations extracted from the central SMC by tidal forces).
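For concreteness, the two cuts amount to simple boolean masks on the catalog; here is a minimal Python sketch, assuming arrays of $`m_V`$ and $`B-V`$ (the random stand-in catalog below is only for illustration, not survey data):

```python
import numpy as np

# Stand-in catalog; in practice m_V and B_V come from the survey photometry.
rng = np.random.default_rng(0)
m_V = rng.uniform(14.0, 22.0, size=100_000)
B_V = rng.uniform(-0.5, 2.0, size=100_000)

# young upper-main-sequence stars (ages <~ 2e8 yr)
young = (m_V < 18.5) & (B_V > -0.3) & (B_V < 0.3)
# older giants and red-clump stars (ages >~ 1 Gyr)
old = (m_V < 19.5) & (B_V > 0.7)

print(young.sum(), "young candidates,", old.sum(), "old candidates")
```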
## 3 Discussion
There are several aspects of the observations that appear in conflict with the hypothesis of a tidal origin for the morphology of the system. First, the recent star formation is more extended than the older stellar system, rather than being more concentrated as most generic interaction models suggest. Second, the older stars do not follow what initially appear to be the tidal “tails” as noticeably as the younger stars. There are old stars at large radius from the SMC, but their distribution is much more regular than that of the younger stars (Gardiner and Hatzidimitriou 1992). One must conclude either that the tides predominantly affected the outer part of the SMC, which presumably contained large amounts of gas that then formed the young stars, or that star formation in the central SMC was triggered when the SMC interacted directly with a gaseous component (either the LMC gaseous halo or perhaps a third body).
The outer tidal hypothesis has several difficulties. First, the stellar extensions appear to go in several directions rather than following the standard tail/bridge geometry. Second, if the gaseous material was originally at large radius and of insufficient density to form stars, one would expect tidal forces to lower the densities further at large radii, where the gas has no means to lose angular momentum and dissipate energy. In its favor, at least as an explanation for the wing structure, is the continuity between the wing and the bridge (both in stars and gas) that joins the SMC and LMC. But if the dominant tidal force created the LMC-SMC bridge, what formed the NE-SW structure of young stars in the SMC?
We speculate that the NE-SW geometry arises either from shocking of the gas in the central SMC as the SMC moves in a perpendicular direction (i.e., NW or SE) relative to a second gaseous object (e.g., speculatively, a hot Milky Way halo or the outer gaseous envelope of the LMC), or from the infall of a gaseous cloud along the NE-SW axis, which formed stars with orbits aligned with the infall axis. We see no evidence for shocking in the H I map of Stanimirovic et al. 1999, although the H I morphology is highly irregular. Furthermore, the H I kinematics contain a large velocity gradient along the “bar” axis, which would not arise naturally in a ram pressure model but which could arise in a model of the collision of two gaseous components along the “bar” axis. The latter hypothesis is also supported by the presence of the stellar “shell” feature (upper left, Figure 2), which could arise if young stars are on radial orbits along the major axis of the distribution. This discussion is highly speculative, and a definite conclusion awaits numerical simulations that include both gravity and hydrodynamics, as well as additional measurements of the stellar kinematics.
Despite the disturbed visual appearance of the SMC and the observations supporting large line-of-sight depths in the outer SMC, the bulk of the stars in the SMC’s central region evidently form a spheroidal population. This result confirms previous observations of the kinematics of older stars. Dopita et al. (1985), from PN observations, and Hardy et al. (1989), from C star observations, found that the older stellar component in the central SMC region has the kinematics of a spheroidal component, with no clear signs of multiple kinematic components or strong rotation. In contrast, the H I shows a large kinematic gradient along the NE-SW axis (Stanimirovic et al. 1999). Furthermore, we find no evidence for a bar in the underlying older stellar distribution, and the “outer arm” is entirely a product of the young main sequence stars.
The discrepancies between morphological classifications based on images, which inappropriately weight the younger populations, and those based on stellar catalogs, from which the numerically dominant, underlying population can be extracted, highlight the inherent difficulties of morphological studies from integrated photometry. These difficulties are exacerbated in studies at higher redshifts, where the younger stars are even more disproportionately weighted due to K-corrections and where the spatial resolution is poorer.
We conclude 1) that the central SMC is principally a spheroidal system, 2) that its visual morphology is dominated by highly irregular recent star formation, and 3) that hydrodynamics, rather than tidal forces, must have played the key role in SMC star formation over the last several hundred million years.
ACKNOWLEDGMENTS: DZ acknowledges partial financial support from an NSF grant (AST-9619576), a NASA LTSA grant (NAG-5-3501), and fellowships from the David and Lucile Packard Foundation and the Alfred P. Sloan Foundation. EKG acknowledges support from NASA through grant HF-01108.01-98A from the Space Telescope Science Institute.
# Cluster Monte Carlo study of multi-component fluids of the Stillinger-Helfand and Widom-Rowlinson type
## I Introduction
Several years ago Stillinger and Helfand introduced a simple but nontrivial model of fluid demixing. Their original model consists of a binary mixture of $`A`$ and $`B`$ particles. Particles of the same type do not interact with one another, but $`A`$ and $`B`$ particles interact with a repulsive potential such that the Mayer $`f`$-function is a Gaussian. This choice for the $`AB`$ potential, known as the Gaussian molecule potential, greatly simplifies the calculation of virial coefficients and most work for this potential has been done using series methods. The main motivation for this work is to confirm Ising universality for the critical exponents of continuum systems.
In this paper we study the Stillinger-Helfand model and some of its generalizations using cluster Monte Carlo methods. Where possible, we compare our results to the series analyses and to results for the Ising-Potts universality classes. Although the Gaussian molecule potential yields a more tractable virial expansion, it is easier to implement a cluster algorithm for the repulsive step potential. We also consider a generalization of the Stillinger-Helfand model to $`q`$ species (components), such that particles of the same species do not interact but particles of different species interact with a repulsive potential. We expect that this generalization will be in the same universality class as the $`q`$-state Potts model for $`q`$ not too large and another motivation for this study is to confirm this correspondence. For example, we know that the two-dimensional (2D) Potts model for $`q>4`$ has a first-order transition. Does the $`q`$-component 2D Stillinger-Helfand model also have a first-order transition for $`q>4`$? In addition, we consider the effect of quenched disorder by randomly adding fixed scattering centers. There are general arguments that quenched disorder causes first-order transitions to become continuous. These arguments hold rigorously for 2D Potts models, but have not been studied for continuum models.
In previous work, cluster Monte Carlo methods were applied to the Widom-Rowlinson model. The Widom-Rowlinson and Stillinger-Helfand models are closely related; the only difference is that the Widom-Rowlinson model has a hard-core interaction between different species. In this paper, the invaded cluster Monte Carlo method introduced in Ref. is extended to soft-core repulsive potentials and is used to find the phase transition point for a given temperature without prior knowledge of the critical fugacity. The invaded cluster method has almost no critical slowing down for the Widom-Rowlinson model, and we find that similar results hold for the Stillinger-Helfand models studied here.
## II Description of the Models and Notation
We consider $`q`$-component ($`q\ge 1`$) fluids in $`d`$ dimensions with $`d=2,3`$. The components (species) have no self-interaction, but particles of one species interact with particles of all other species via an isotropic repulsive potential, $`U(r)`$. We consider two choices for $`U(r)`$,
$$U_{\mathrm{step}}(r)=\{\begin{array}{cc}U_0,\hfill & \text{if }r<\sigma \hfill \\ 0,\hfill & \text{if }r\ge \sigma ,\hfill \end{array}$$
(2)
$$U_{\mathrm{gm}}(r)=-kT\mathrm{ln}(1-e^{-r^2/\sigma ^2}).$$
(3)
The limit $`\beta =1/kT\to \infty `$ for the step potential corresponds to the Widom-Rowlinson model. For the Gaussian molecule potential, the temperature $`T`$ plays no role because the Boltzmann factor, $`e^{-\beta U_{\mathrm{gm}}(r)}=1-e^{-r^2/\sigma ^2}`$, is, by design, independent of $`T`$.
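A short Python sketch of the corresponding Boltzmann factors makes the built-in temperature independence of the Gaussian molecule case explicit (the parameter defaults here are illustrative, with distances in units of $`\sigma `$):

```python
import numpy as np

def boltzmann_step(r, beta, U0=1.0, sigma=1.0):
    """exp(-beta*U_step(r)) for the repulsive step potential, Eq. (2)."""
    return np.where(r < sigma, np.exp(-beta * U0), 1.0)

def boltzmann_gm(r, sigma=1.0):
    """exp(-beta*U_gm(r)) = 1 - exp(-r^2/sigma^2), Eq. (3): T drops out."""
    return 1.0 - np.exp(-(r / sigma) ** 2)
```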
In general, each component of the $`q`$-component fluid may have a distinct fugacity; however symmetry considerations dictate that a demixing transition occurs with all fugacities equal, and hence we set all the fugacities equal to a single value, $`z`$. For sufficiently small $`z`$, the $`q`$ species are mixed, while for large $`z`$, there are $`q`$ distinct phases because different species repel one another, with each phase predominately composed of one species. If $`q`$ is not too large, there is expected to be a single demixing transition separating these regimes that is in the same universality class as the $`q`$-state Potts model.
For very large $`q`$, the correspondence between Potts models and Widom-Rowlinson models must break down. Although Potts models have a single ordering transition, Widom-Rowlinson models can be presumed to have an intermediate crystalline phase for $`d\ge 3`$ and large $`q`$. To understand this phase, consider the limits $`z\ll 1`$ and $`q\gg 1`$, with the product $`\lambda =qz`$ of order unity. Then non-overlapping particles appear with an effective fugacity of $`\lambda `$. However, when two particles overlap, the cost is an additional factor of $`1/q`$ because the overlapping particles must be of the same species. Hence the limiting model is precisely the hard sphere gas, which we presume has a crystalline phase in $`d\ge 3`$. It is therefore reasonable to assume that such a phase occurs in the Widom-Rowlinson models for large $`q`$ and $`z`$ of order $`q^{-1}`$. Needless to say, for fixed $`q`$, when the fugacity is sufficiently large, the model will demix, and thus the crystalline phase is an intermediate phase. We also expect an intermediate crystalline phase for large $`q`$ soft-core Stillinger-Helfand models, based on a mapping to a single component fluid with a repulsive soft-core potential. For example, the Gaussian core model is known to crystallize in $`d=3`$. Although intermediate phases do not occur for the usual Potts models, they are not a consequence of the continuum; indeed such phases are known to occur on the lattice for the site-dilute (annealed) Potts models as well as for the lattice version of the Widom-Rowlinson model.
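The counting behind this limit can be made explicit for the hard-core (Widom-Rowlinson) case; the notation here is ours, not the paper’s. Summing over the species assignments of a fixed configuration of $`N`$ particles whose overlap graph has $`C(\{r\})`$ connected clusters gives a weight $`q^{C(\{r\})}`$, so that

$$\Xi =\sum _N\frac{z^N}{N!}\int d^{dN}r\,q^{C(\{r\})}=\sum _N\frac{\lambda ^N}{N!}\int d^{dN}r\,q^{C(\{r\})-N}.$$

Since $`C(\{r\})=N`$ only when no particles overlap, every overlap costs at least one factor of $`1/q`$, and as $`q\to \infty `$ at fixed $`\lambda =qz`$ only the non-overlapping (hard-sphere) configurations survive, with effective fugacity $`\lambda `$.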
## III Cluster Algorithm
The algorithms used here are, broadly speaking, examples of cluster algorithms of the type first introduced by Swendsen and Wang. Cluster algorithms have been found to be much more efficient than local algorithms such as the Metropolis algorithm for simulating spin systems and lattice gases near critical points. Cluster algorithms would be very useful for off-lattice systems, but no general cluster method has yet been developed; indeed, the only off-lattice models for which highly efficient cluster methods are known are models of the Stillinger-Helfand and Widom-Rowlinson type. The distinguishing features of this class of models are that particles of the same species have no self-interaction and that there is a purely repulsive interaction between particles of different species. In this case, graphical representations and clusters algorithms are available and have been implemented for the Widom-Rowlinson model.
Cluster algorithms for spin systems work by identifying clusters of spins and then randomly flipping these clusters. Clusters are defined by placing bonds between nearest-neighbor aligned spins with a probability that depends on the temperature. For fluid systems, bonds are placed between particles of the same type with a probability that depends on the temperature and the interaction potential. Instead of flipping spins, clusters of particles are removed from the system, and new particles are then added via a nonuniform Poisson process that depends on the fugacity and on the potential due to the remaining particles.
Cluster algorithms are typically used with fixed values of the external parameters such as temperature or fugacity. However, when the location of the phase transition is not known, much of the computational effort in studying the transition is spent simply locating it. To avoid this problem, invaded cluster methods can be used, which automatically adjust a thermodynamic parameter (for example, temperature or fugacity) to its value at the phase transition. This adjustment is accomplished by using the fact (proved for the $`q=2`$ case) that the clusters just percolate at the transition. In invaded cluster algorithms, clusters are grown until a signature of percolation is observed. The value of the thermodynamic parameter at the transition is an output of the simulation, obtained from the fraction of successful attempts to add particles or bonds to the system. The invaded cluster algorithm also may be used to distinguish first-order from continuous transitions, as discussed in Ref. for Potts models. This method for distinguishing the order of the transition is discussed below and will be used in Section IV C.
We first describe the cluster algorithm discussed in Section 3.5 of Ref. for Stillinger-Helfand models and then discuss how it can be modified to be an invaded cluster algorithm. We assume that we have a configuration consisting of particle positions and a set of bonds connecting some of the particles and describe how to obtain the next configuration:
1. Identify all clusters of particles defined by the bonds. A particle with no bonds is considered to be a singleton cluster. For each cluster, independently and with probability $`1/q`$, label it a black cluster and with probability $`11/q`$ label it white.
2. Remove all particles in black clusters. The remaining white particles are at a set of positions $`W`$.
3. Replenish the black particles via a Poisson process with local intensity $`y(x)`$ given by
$$y(x)=ze^{-\beta V(x)},$$
(5)
$$V(x)=\sum _{y\in W}U(|x-y|).$$
(6)
where $`z`$ is the fugacity and $`U(r)`$ the potential.
4. For each pair of black particles, place a new bond between them with probability $`p(r)`$ given by
$$p(r)=1-e^{-\beta U(r)},$$
(7)
where $`r`$ is the separation between the particles. Note that $`p(r)`$ is minus the Mayer $`f`$-function for the potential.
5. Eliminate the white and black labels for the clusters.
This procedure comprises one Monte Carlo step.
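The following is a minimal Python sketch of one such Monte Carlo step for the step potential in a periodic box. It is an illustration of steps 1-5 under stated simplifications (illustrative parameter values, brute-force distance evaluation, no optimization), not the production code used for this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, d, sigma, beta, U0, z, q = 10.0, 2, 1.0, 1.0, 1.0, 1.3, 2

def dist(a, b):
    """Minimum-image distances from point a to the rows of b (periodic box)."""
    dx = np.abs(a - b)
    dx = np.minimum(dx, L - dx)
    return np.sqrt((dx ** 2).sum(axis=-1))

def U(r):
    """Repulsive step potential of Eq. (2)."""
    return np.where(r < sigma, U0, 0.0)

def clusters(n, bonds):
    """Connected-component label for each of n particles (union-find)."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in bonds:
        parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

def mc_step(pos, bonds):
    n = len(pos)
    # steps 1-2: color each cluster black with probability 1/q and delete
    # the black particles; a bond never joins two clusters, so surviving
    # white-white bonds only need reindexing (they persist between steps)
    labels = clusters(n, bonds)
    is_black = {c: rng.random() < 1.0 / q for c in set(labels)}
    keep = np.array([not is_black[c] for c in labels], dtype=bool)
    new_idx = np.cumsum(keep) - 1
    white = pos[keep]
    kept = [(int(new_idx[i]), int(new_idx[j])) for i, j in bonds
            if keep[i] and keep[j]]
    # step 3: replenish blacks as a Poisson process of intensity
    # z*exp(-beta*V(x)), by thinning a homogeneous process of rate z
    trial = rng.uniform(0.0, L, size=(rng.poisson(z * L ** d), d))
    V = np.array([U(dist(x, white)).sum() for x in trial])
    black = trial[rng.random(len(trial)) < np.exp(-beta * V)]
    # step 4: bond each pair of blacks with probability 1 - exp(-beta*U(r))
    nw = len(white)
    for i in range(len(black)):
        r = dist(black[i], black[i + 1:])
        hit = rng.random(len(r)) < 1.0 - np.exp(-beta * U(r))
        kept += [(nw + i, nw + i + 1 + int(k)) for k in np.flatnonzero(hit)]
    # step 5: dropping the labels amounts to returning the merged arrays
    return np.vstack([white, black]), kept

pos, bonds = np.empty((0, d)), []
for _ in range(200):
    pos, bonds = mc_step(pos, bonds)
print(f"{len(pos)} particles, {len(bonds)} bonds after 200 steps")
```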
Given a configuration of particle positions and bonds without species labels, it is possible to obtain a full multicomponent configuration where each particle has a species label. This assignment is accomplished by identifying clusters and then randomly and independently assigning one of the $`q`$ species labels to each cluster. The species label of each particle is taken to be the species label of its cluster. This labeling of particles is only possible if $`q`$ is a positive integer. However, the algorithm makes sense for all $`q\ge 1`$, in analogy to the relation between Potts models, which are defined for positive integer $`q`$, and random cluster models, which interpolate between them and are defined for all $`q>1`$.
It is instructive to consider the nature of the cluster configurations generated by the algorithm as a function of the fugacity for a fixed temperature. Suppose that the fugacity and, hence, the density is very small. Then $`p(r)`$ is typically small because the particles are far apart, and most clusters are singletons. In step 2, a fraction $`1/q`$ of the particles is removed. In step 3, particles are replenished as a nearly ideal gas because the exponential factor in Eq. (5) is a small perturbation except in the vicinity of the remaining particles. The end result is a nearly ideal multi-component gas. In the limit of large fugacity and density, we expect a phase in which a single species is predominant with a small admixture of the other species. The bonds connecting particles of the dominant species are sufficiently dense that almost all members of this species are in a single large cluster. The minority species are almost all in widely scattered singleton clusters. When the majority cluster is labeled black, as occurs in about $`1/q`$ of the Monte Carlo steps, the large cluster is removed and then replaced as a nearly ideal gas in a slightly perturbed background potential generated by the minority species. An important feature of this picture is that the clusters do not percolate at small fugacity and do percolate at large fugacity. At some intermediate value of the fugacity, there must be a percolation transition. As discussed in Refs., the percolation transition of the clusters coincides with the demixing transition of the fluid.
The coincidence of the percolation transition and the demixing transition justifies an invaded cluster version of the above cluster algorithm. The invaded cluster algorithm is very similar to the fixed-$`z`$ cluster algorithm described above except that steps 3 and 4 are modified as follows. Instead of putting down new black particles as a Poisson process at a fixed intensity, black particles are added to the system one at a time according to the potential $`V(x)`$ (see below). After each black particle is added, bonds between the new particle and all previously placed black particles are put down with probability $`p(r)=1-\mathrm{exp}(-\beta U(r))`$. The black clusters defined by these bonds are monitored after each particle is added, and the process of adding particles is stopped when a stopping condition is satisfied. For simulating the phase transition, the stopping condition is that one cluster spans the system. For periodic boundary conditions, spanning is taken to mean that a cluster wraps around the system in at least one of the $`d`$ directions. The spanning condition insures that the algorithm simulates the phase transition.
In practice, a particle is added to the system according to the potential $`V(x)`$ by the following procedure. A particle is tentatively placed at a random position $`x`$. A random number $`r`$ is chosen in the interval $`[0,1)`$, and the particle placement is accepted if
$$r<e^{-\beta V(x)};$$
(8)
otherwise the particle is rejected and another attempt is made to place a particle. Let $`\stackrel{~}{z}=N_{\mathrm{tot}}/L^d`$, where $`N_{\mathrm{tot}}`$ is the total number of attempted particle placements in a Monte Carlo step, including both accepted and rejected placements, $`L^d`$ is the system volume, and the brackets $`\mathrm{}`$ indicate an average over the simulation. Because the intensity $`y(x)`$, defined in Eq. (5), and the Boltzmann factor $`e^{\beta V(x)}`$ governing particle placements differ by a factor of the fugacity, we conclude that $`\stackrel{~}{z}`$ is an estimator of $`z_c`$, the value of the fugacity at the transition. Note that if the fluctuations $`\sigma _{\stackrel{~}{z}}`$ in $`N_{\mathrm{tot}}/L^d`$ are small, then the invaded cluster algorithm is essentially identical to the fixed fugacity algorithm operating at $`z=z_c`$. This identification justifies the use of the invaded cluster method. A more complete discussion of the invaded cluster method and the use of $`\stackrel{~}{z}`$ as an estimator of a critical parameter is given in Ref..
Whenever the invaded cluster method simulates a system at its critical point, scaling methods can be used to obtain critical exponents from the size dependence of divergent thermodynamic quantities such as the compressibility or the susceptibility. To study the latter, we consider the quantity
$$\chi \equiv \frac{1}{L^d}\sum _is_i^2$$
(9)
where $`s_i`$ is the number of particles in the $`i`$th cluster. We now show that $`\chi `$ is related to the usual susceptibility. Consider, for simplicity, the discretized version of the Stillinger-Helfand model on a lattice of linear dimension $`L`$ with spacing $`ϵ`$ so that the total number of sites is $`[L/ϵ]^d`$. The demixing order parameter at site $`x`$ is given by $`\delta \rho _1(x)\equiv n_1(x)-n(x)/q`$, where $`n_1(x)=1`$ if there is a particle of type 1 at site $`x`$ and $`n_1(x)=0`$ otherwise; $`n(x)`$ counts the presence of a particle of any type. The relevant susceptibility $`\stackrel{~}{\chi }`$ is defined by the second derivative of the pressure with respect to the (ordering) chemical potential:
$$\stackrel{~}{\chi }=\frac{1}{L^d}\sum _{x,y}\langle \delta \rho _1(x)\delta \rho _1(y)\rangle .$$
(10)
(The reason that $`ϵ`$ does not enter explicitly into Eq. (10) is that the derivatives are with respect to the log of the activity and it is the activity that is scaled by $`ϵ`$.) For a given particle and bond configuration, averaging over assignments of species labels, it is clear that the average of $`\delta \rho _1(x)\delta \rho _1(y)`$ vanishes unless the sites $`x`$ and $`y`$ are both occupied and in the same cluster, in which case the result is $`q^{-2}(q-1)`$. Thus, for a fixed particle-bond configuration, we obtain the number of particles in the cluster at $`x`$ if we sum over $`y`$. Summing over $`x`$ yields the sum of the squares of the cluster sizes, so that $`\stackrel{~}{\chi }=q^{-2}(q-1)\chi `$, and hence we conclude that $`\chi `$ is related to the usual susceptibility. Finally, finite size scaling predicts that
$$\chi \sim L^{\gamma /\nu },$$
(11)
so that the scaling of $`\chi `$ with system size can be used to extract the exponent ratio $`\gamma /\nu `$.
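Extracting the ratio from Eq. (11) amounts to a straight-line fit in log-log coordinates; here is a minimal Python sketch with invented susceptibility values (the real fits below use the measured $`\chi (L)`$ of Tables I-III):

```python
import numpy as np

L_values = np.array([20.0, 40.0, 60.0, 100.0, 140.0])
chi_of_L = np.array([150.0, 510.0, 1030.0, 2500.0, 4500.0])  # illustrative

# slope of log(chi) versus log(L) estimates gamma/nu
slope, intercept = np.polyfit(np.log(L_values), np.log(chi_of_L), 1)
print(f"gamma/nu ~= {slope:.3f}")
```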
Cluster methods also may be used to distinguish first-order from continuous transitions. For this purpose, a fixed density stopping rule is used. Black particles are added to the system until the density $`\rho `$ reaches a fixed value, and then $`\stackrel{~}{z}`$ is measured. In this way the canonical ensemble is simulated rather than the grand canonical ensemble. This procedure is carried out for a range of densities near the transition. If the transition is continuous, the fugacity is a strictly increasing function of $`\rho `$. However, if the transition is first-order, then the fugacity does not increase monotonically with increasing $`\rho `$ in the coexistence region. Why does the nature of the $`\stackrel{~}{z}`$ versus $`\rho `$ curve signify whether a transition is continuous or first-order? Suppose that the demixing transition of a $`q`$-component system is first-order. At the transition, there is coexistence of $`q+1`$ phases: $`q`$ demixed phases and one mixed phase. Because the repulsive interaction is reduced for the demixed phases, these phases have a higher density than the mixed phase. Thus, in the thermodynamic limit, there is a range of $`\rho `$ for which the fugacity is constant. Let $`\rho _1`$ be the density of the mixed phase and $`\rho _2`$ the density of the demixed phase. Because $`\mathrm{ln}z=-\partial s/\partial \rho `$, where $`s`$ is the entropy density, we have that $`s`$ is a linear function of $`\rho `$ in the coexistence region. More specifically, $`s(\rho )`$ is a linear combination of $`s(\rho _1)`$ and $`s(\rho _2)`$, the entropy densities of the mixed and demixed phases. The linearity of $`s(\rho )`$ applies in the thermodynamic limit. However, for a finite system, the entropy density is not linear in the coexistence region. Consider a system with linear dimension $`L`$ and periodic boundary conditions at density $`\rho `$. This system also can be viewed as an infinite system with periodic constraints on the particles. Let $`s(\rho ,L)`$ be the entropy density of this periodically constrained system. Now suppose the constraints are removed and the system comes to equilibrium. If $`\rho _1\le \rho \le \rho _2`$, demixing will occur spontaneously, so that $`s(\rho ,L)\le s(\rho )`$ with the equality holding only at the endpoints of the coexistence range. Because $`\mathrm{ln}z=-\partial s/\partial \rho `$, we must have that $`z`$ is non-monotone in the coexistence region. This approach for distinguishing the order of a transition is very similar to the microcanonical Monte Carlo method used in Ref..
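The chord construction implicit in this argument can be written out explicitly (this is our rewriting, using $`\mathrm{ln}z=-\partial s/\partial \rho `$ as reconstructed above): in the coexistence window the infinite-volume entropy density is

$$s(\rho )=\frac{(\rho _2-\rho )s(\rho _1)+(\rho -\rho _1)s(\rho _2)}{\rho _2-\rho _1},\qquad \rho _1\le \rho \le \rho _2,$$

so $`\partial s/\partial \rho `$, and with it $`\mathrm{ln}z`$, is constant across the window. A finite system has $`s(\rho ,L)`$ strictly below this chord in the interior, with equality at the endpoints, so its slope must first fall below and then rise above the chord slope; through $`\mathrm{ln}z=-\partial s/\partial \rho `$ this forces $`\stackrel{~}{z}(\rho )`$ to overshoot and then undershoot its coexistence value, i.e., to be non-monotone.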
## IV Results
In Section IV A we present results for the 2D and 3D Stillinger-Helfand Gaussian molecule models. The two-component step potential model is discussed in Section IV B and the $`q`$-component step potential is discussed in Section IV C.
### A Gaussian molecule model in two and three dimensions
We simulated the Gaussian molecule model (with the potential $`U_{\mathrm{gm}}`$ defined in Eq. (3)) using the invaded cluster method and the spanning rule described in Section III for a range of linear dimensions $`L`$ up to 140 in $`d=2`$ and 40 in $`d=3`$. We choose units such that distances are measured in units of $`\sigma `$. We collected statistics for the number of particles in the spanning cluster $`M`$, the critical density $`\rho `$, the susceptibility $`\chi `$, the estimator of the critical fugacity $`\stackrel{~}{z}`$ and its standard deviation $`\sigma _{\stackrel{~}{z}}`$, and the normalized autocorrelation function for the spanning cluster size, $`\mathrm{\Gamma }_M`$. For each value of $`L`$ we averaged over $`10^5`$ Monte Carlo steps. The estimator of the critical density is the average number of particles (of any species) per unit area (volume) when the spanning condition is fulfilled. Although $`U_{\mathrm{gm}}(r)`$ does not go to zero at finite $`r`$, it becomes very small for large $`r`$, and to speed the calculation we set $`U_{\mathrm{gm}}(r)=0`$ for $`r\ge 3`$.
Tables I and II show the $`L`$ dependence of $`M`$, $`\rho `$, $`\chi `$, $`\stackrel{~}{z}`$, $`\sigma _{\stackrel{~}{z}}`$, and $`\tau _M`$ for the 2D and 3D Stillinger-Helfand models, respectively. The integrated autocorrelation time $`\tau _M`$ is defined by
$$\tau _M=\frac{1}{2}+\sum _{t=1}^{\infty }\mathrm{\Gamma }_M(t).$$
(12)
This time is approximately the number of Monte Carlo steps between statistically independent configurations and enters into the error estimate for $`M`$. In practice, $`\mathrm{\Gamma }_M(t)`$ becomes indistinguishable from the noise for $`t\gtrsim 10`$ Monte Carlo steps, and it is necessary to cut off the upper limit of the sum defining $`\tau _M`$ when the magnitude of $`\mathrm{\Gamma }_M`$ becomes comparable to its error.
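A Python sketch of this estimator with a self-consistent noise cutoff follows; the window factor $`c=5`$ is a common choice, not a value taken from the paper, and the synthetic AR(1) test series is used only because its exact $`\tau `$ is known.

```python
import numpy as np

def integrated_autocorr_time(series, c=5.0):
    """Eq. (12), truncating the sum once the window t exceeds c*tau."""
    x = np.asarray(series, dtype=float) - np.mean(series)
    var = np.mean(x * x)
    tau, t = 0.5, 1
    while t < len(x):
        gamma = np.mean(x[:-t] * x[t:]) / var   # normalized Gamma_M(t)
        tau += gamma
        if t >= c * tau:                        # cutoff: estimate is in noise
            break
        t += 1
    return tau

# test on an AR(1) series, for which tau = (1+phi)/(2(1-phi)) = 4.5 exactly
rng = np.random.default_rng(1)
m, phi = np.empty(100_000), 0.8
m[0] = 0.0
for i in range(1, len(m)):
    m[i] = phi * m[i - 1] + rng.normal()
print(integrated_autocorr_time(m))
```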
Note that the fluctuations $`\sigma _{\stackrel{~}{z}}`$ in $`\stackrel{~}{z}`$ decrease with increasing $`L`$ and that $`\tau _M`$ is small and hardly increases with $`L`$. These results demonstrate the validity and efficiency of the invaded cluster algorithm. The decrease in $`\sigma _{\stackrel{~}{z}}`$ shows that as $`L`$ increases, the invaded cluster becomes essentially equivalent to a fixed parameter cluster algorithm for which detailed balance can be proven.
The error estimates for all quantities in Tables I and II except $`\tau _M`$ were obtained by computing the standard deviation of the quantity of interest and dividing by the square root of the number of measurements. This error estimate does not take into account correlations between successive Monte Carlo steps. To account for correlations, the error estimates in the tables for an observable $`O`$ must be multiplied by $`\sqrt{2\tau _O}`$, where $`\tau _O`$ is the integrated autocorrelation time for $`O`$. The statistical errors for quantities derived from fits such as $`\rho _c`$ and $`\gamma /\nu `$ include the factor $`\sqrt{2\tau }`$ except that $`\tau _O`$ is replaced by $`\tau _M`$.
Figure 1 shows the results for $`\rho (L)`$ versus $`1/L`$ for the 2D Stillinger-Helfand model. The value of $`\rho `$ in the limit of $`L\mathrm{}`$ is extrapolated from the finite size data by doing a linear least squares fit omitting the values for $`L=20`$ and $`40`$ yielding the result, $`\rho _c(2)=1.1644\pm 0.0004`$. A similar extrapolation for the critical fugacity yields $`z_c(2)=1.3536\pm 0.0008`$. Similarly, extrapolating the result for the 3D Stillinger-Helfand model using the data for all available $`L`$ yields $`\rho _c(3)=0.440\pm 0.001`$ and $`z_c(3)=0.5826\pm 0.0013`$. Our error values for these critical parameters are one standard deviation from the linear least squares fit of the fugacity or density versus $`1/L`$; no effort has been made to estimate systematic errors. All of the fits have acceptable goodness-of-fit probability values $`Q`$.
Our 3D value for the critical density is consistent with the series result of Lai and Fisher, $`\rho _c(3)=0.441\pm 0.001`$ (Eq. (36) of Ref.) but our critical fugacity is somewhat larger than their value, $`z_c(3)=0.5785\pm 0.0002`$ (Eq. (44) of Ref.). Note that Lai and Fisher report results using a different convention so that their values of $`\rho _c`$ and $`z_c`$ must be divided by $`\pi ^{d/2}`$ to compare with our values.
The exponent ratio $`\gamma /\nu `$ can be obtained from the scaling of the susceptibility $`\chi `$ with $`L`$ according to Eq. (11). Figure 2 shows a log-log plot of $`\chi `$ versus $`L`$ for the 2D Gaussian molecule model. A least squares fit of all the data to a simple power law does not yield an acceptable goodness of fit value $`Q`$. If the smallest value of $`L`$ is omitted, we obtain $`\gamma /\nu =1.745\pm 0.001`$ with $`\chi ^2=6.2`$, $`Q=0.19`$, and $`\mathrm{DF}=4`$ (degrees of freedom). The $`Q`$ value indicates a reasonable fit to a simple power law, but the fitted value of $`\gamma /\nu `$ is $`5\sigma `$ from the 2D Ising value of $`\gamma /\nu =7/4`$. A reasonable explanation of this result is that the 2D Gaussian molecule is, indeed, in the 2D Ising universality class, but that there are relatively slowly varying corrections to scaling.
A least squares fit to all the data for the susceptibility $`\chi `$ for the 3D Gaussian molecule model yields $`\gamma /\nu =1.9626\pm 0.0044`$ with $`\chi ^2=0.11`$, $`\mathrm{DF}=2`$, and $`Q=0.95`$. The $`Q`$ value near unity suggests that the data is well fit to a pure power law. Recent high precision Monte Carlo studies of the 3D Ising model yield $`\gamma /\nu =1.9630(30)`$ which is consistent with our results. Our results add weight to the hypothesis that the Stillinger-Helfand model is in the Ising universality class for both 2D and 3D. The relatively high precision results for $`\gamma /\nu `$ from the Gaussian molecule model suggests that models of the Stillinger-Helfand type may be useful for high precision studies of the 3D Ising universality class. The isotropy of the interaction and absence of an underlying lattice might make for smaller corrections to scaling in Stillinger-Helfand models compared to lattice spin models.
### B Step potential in 2D
Table III summarizes our results for the 2D Stillinger-Helfand model with the step potential given by Eq. (2) and temperature $`T=1`$ (measured in units of $`U_0`$). For each value of $`L`$ we averaged over $`10^6`$ Monte Carlo steps. The results are qualitatively similar to the Gaussian molecule model. Table IV shows the temperature dependence of the measured quantities for $`L`$ fixed at $`L=20`$. The values of $`\rho `$ and $`\stackrel{~}{z}`$ at low temperatures should reduce to the Widom-Rowlinson model. In Ref. we measured the critical parameters of the Widom-Rowlinson model using the invaded cluster method. For $`L=40`$ (the smallest size measured) we obtained $`\rho =1.525`$ and $`\stackrel{~}{z}=1.720`$, values that are close to the values of $`\rho `$ and $`\stackrel{~}{z}`$ for the two lowest temperatures in Table IV. This agreement confirms that the step potential is continuously connected to the hard core potential.
If the $`L=20`$ data point is omitted, we obtain from a least squares fit to the data for the susceptibility $`\chi `$, $`\gamma /\nu =1.7434\pm 0.0009`$ with $`\chi ^2=0.52`$, $`Q=0.81`$, and $`\mathrm{DF}=3`$.
### C Dependence of the order of the transition on $`q`$ and on impurities
The critical properties of the $`q`$-component Stillinger-Helfand model are expected to be closely related to the $`q`$-state Potts model. One of the features of the $`q`$-state Potts model is that the transition is continuous for small $`q`$ and is first-order for $`q>q_c(d)`$, where $`q_c(2)=4`$ and $`2<q_c(3)<3`$. We have used the method described in Section III to determine the order of the transition as a function of $`q`$ for the $`q`$-component Stillinger-Helfand step potential model. Figure 3 shows the fugacity $`\stackrel{~}{z}`$ as a function of $`\rho `$ for $`d=2`$ for $`L=40`$ and $`T=1`$. Note that for $`q=3`$ the curve is clearly monotonically increasing, which implies a continuous transition. For $`q\ge 5`$ the curves are clearly non-monotonic, which implies a first-order transition. For $`q=4`$ the curve is essentially flat within the error bars (whose size is approximately that of the symbols). Although the effective value of $`q_c`$ is expected to vary with $`L`$, these results are consistent with the hypothesis that $`q_c(2)=4`$ for the 2D Stillinger-Helfand step potential model.
Figure 4 shows $`\stackrel{~}{z}`$ as a function of $`\rho `$ for the 3D Stillinger-Helfand step potential model for $`L=20`$ and $`T=1`$. The $`q=2`$ curve is clearly monotonically increasing while the $`q=3`$ curve is clearly not, implying that $`2<q_c(3)<3`$, as for the 3D Potts model.
Finally, we have studied the effect of quenched impurities on the nature of the transition for the $`q=3`$ Stillinger-Helfand step potential model. The impurities consist of randomly placed scatterers that interact with all the fluid particles via the same repulsive step potential that exists between different components. Figure 5 shows a plot of $`\stackrel{~}{z}`$ versus $`\rho `$ for the $`q=3`$ Stillinger-Helfand step potential model in 3D for four impurity densities ranging from 0.025 to 0.0625. For each of the 10 impurity configurations considered for a given density, data from $`10^3`$ Monte Carlo steps are collected. For the two lowest impurity concentrations, the $`\stackrel{~}{z}`$ versus $`\rho `$ curve is non-monotonic, as is the case for the pure system, while for the two highest impurity concentrations, the curve is monotonic, indicating a crossover to a continuous transition. This behavior is in accord with general arguments that the presence of quenched impurities should cause a first-order transition to become continuous. It is not clear from our data whether there is a critical value of the disorder below which the transition remains first-order, or whether the crossover at finite disorder strength is a finite size effect and any strength of disorder is sufficient to make the transition continuous in the thermodynamic limit.
## V Discussion and Conclusions
We have studied the Stillinger-Helfand model and several generalizations using the invaded cluster algorithm. Our results for $`q`$-component Stillinger-Helfand models with $`2q8`$ are consistent with the hypothesis that these models are in the same universality class as the corresponding Potts models. In addition, we have shown that the addition of quenched disorder causes the demixing transition to change from first-order to continuous for those values of $`q`$ for which the pure system transition is first-order. For the case $`q=2`$ and $`d=2`$, our results for the magnetic exponent are outside the statistical error bars of the exact Ising value. However, we believe that this difference is most likely the result of slowly varying corrections to scaling. It would be useful to consider larger systems to confirm Ising universality. It would also be interesting to consider larger values of $`q`$ to explore the possibility of an intermediate crystalline phase in Stillinger-Helfand models.
## VI Acknowledgements
This work was supported by NSF grants PHY-9801878 (RS), DMR-9633385 (HG), DMR-9978233 (JM), DMS-9971016 (LC), and NSA grant MDA904-98-1-0518 (LC). We thank Gregory Johnson for useful discussions.
Present address: Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, NY 10012.
### 1 Introduction
The purpose of this paper is to examine the question: “Exactly what consequences do the holographic principle, and the related entropy bounds, have for the construction of the quantum theory of gravity?”. We will take it as given that a quantum theory of gravity should be both background independent and cosmological. This is because both background dependence and boundaries are almost certainly artifacts of approximations which, while convenient for certain purposes, exclude significant aspects of the problem of constructing a theoretical framework which includes and extends the principles of both relativity and quantum theory.
The question is not easy because most of the results which are so far known to bear on it are concerned with semiclassical approximations or weak coupling limits. Many are also limited to situations with boundaries, either asymptotic or finite. It is, of course, always possible that the holographic principle is only a characterization of the semiclassical theory, perhaps because it is no more than a re-expression of the generalized second law of thermodynamics. On the other hand, given that the entropy bounds involve inverse powers of $`\hbar G`$ it is very possible that they are deep clues to the structure of the fundamental theory, and that some version of the holographic principle may even turn out to be a fundamental principle of the quantum theory of gravity. If so it will be the first principle that is genuinely quantum gravitational, rather than just being imported from general relativity or quantum theory.
But if this is to be the case the true, fundamental statement of the holographic principle must be made in the language of some background independent quantum theory of cosmology. This is likely to be phrased in a different language than its semiclassical formulation, for the same reason that the laws of thermodynamics are expressed in very different language when expressed fundamentally in quantum statistical mechanics than they are when one first meets them as characterizations of the thermodynamic limit. The problem is then to discover what features of the entropy bounds and holographic principles so far discussed might be artifacts of the semiclassical limit, and to separate these from the principle’s true content.
In this paper a line of reasoning is presented, which leads to the identification of a form of the holographic principle that can survive passage to a background independent quantum theory of cosmology. This is called the weak holographic principle; it is both logically weaker and conceptually more radical than the forms of the principle originally contemplated in the literature. It is logically weaker in that it makes no assertion as to a relationship between a bulk and a boundary theory. As has been found also by Fischler and Susskind and others, that idea already fails at the semiclassical level for cosmological theories. We found in our investigations further reasons why such a strong form of the principle cannot be fundamental. Instead, the weak holographic principle comes into a background independent quantum theory of cosmology as a framework for that theory’s interpretation and measurement theory. Its role is to constrain the quantum causal structure of a quantum spacetime in a way that connects the geometry of the surfaces on which measurements may be made with a measure of the information that those measurements may produce. In this context the entropy bound becomes a definition, by which the notion of geometry is reduced to more fundamental notions coming from the quantum theory of cosmology. To put it simply, the Bekenstein bound is turned on its head and the notion of area is reduced fundamentally to a measure of the flow of quantum information. This form of the principle was first suggested in ; the present paper can be taken as an argument that no form of the principle which is logically stronger, or conceptually less radical, can survive passage to a background independent quantum theory of gravity.
One difficulty of the subject is that different authors have proposed different ideas under the name of the holographic hypothesis or principle. It is necessary first to bring a bit of order to the situation by classifying the different proposals in a way that uses a common language and makes clear their logical relations to each other. To do this we use the language of screens. In this paper a screen will always mean an instantaneous, spacelike, two dimensional surface<sup>1</sup><sup>1</sup>1Or, in $`D+1`$ dimensions, a surface of dimension $`D-1`$. on which quantum mechanical measurements are made. These will always be measurements of fields on the surface, which then result in information concerning the causal past of the surface.
To make progress it is first of all necessary to distinguish between entropy bounds and holographic principles. The former are limitations on the degrees of freedom attributable to either the screens themselves, or spacelike or null surfaces bounded by the screens. In the literature entropy bounds are sometimes called holographic bounds, but we will stick to the former expression to avoid confusion. A holographic principle extends an entropy bound by postulating a form of dynamics in which the quantum evolution of the spacetime and matter fields is described in terms of observables measurable on the screens.
We find that the different entropy bounds and holographic principles that have been proposed each fall into three classes, which we call the “strong”, “null”, and “weak” forms. The different entropy bounds all postulate that some measure of information, or of a “number of degrees of freedom”, is bounded by the area of a screen. The strong forms are those that postulate that the bound applies to the degrees of freedom on a spacelike surface bounded by the screen. The null forms are those, suggested by Fischler and Susskind and put in a very elegant form by Bousso and Flanagan, Marolf and Wald, in which there is a bound on the number of degrees of freedom of certain null surfaces, bounded by the screens. The weak form, proposed with Markopoulou in , postulates only a relationship between the area of the screens and the dimension of the Hilbert spaces which provide representations of algebras of observables on them.
The main conclusion of this paper will be that the strong entropy bound cannot hold in a cosmological theory, and that the null form may only hold in a semiclassical theory in which quantized matter degrees of freedom evolve on a fixed spacetime manifold, but cannot survive the quantization of the gravitational field. It appears that only the weak form, which as the name suggests is logically weaker and therefore requires less, may survive in a full theory of quantum gravity.
The different forms of the entropy bound stem from different interpretations that may be given to the Bekenstein bound. These may be called the strong and weak Bekenstein bounds. They are presented in the next section. The distinction is that the weak form bounds only the information measurable by observers just outside the horizon of a black hole, while the strong form bounds the total number of degrees of freedom measurable in the interior of the horizon. We find that only the weak form is required by the usual arguments based on the laws of thermodynamics. The strong form of the Bekenstein bound follows only if we add an independent assumption, which is that the number of degrees of freedom measurable on the interior does not exceed those measurable on the exterior. We call this the strong entropy assumption. It must be postulated independently, as it does not follow from any argument which involves only measurements made exterior to the black hole horizon. This conclusion has been reached also by Jacobson. One of the conclusions of this paper is that the strong entropy assumption is false. Among other things it is inconsistent with both inflation and gravitational collapse.
The strong, null and weak forms of the holographic principle depend on the corresponding forms of the entropy bounds. They extend each of them by giving a framework for dynamics. As only the weak form of the entropy bound seems to be possible in a full quantum theory of gravity, only a weak form of the holographic principle may be true in such a theory.
The author is aware that this is not a completely welcome conclusion. In fact, it goes against his own proposal for a bulk to boundary isomorphism in quantum general relativity and supergravity. It unfortunately conflicts also with some of the hopes which have been held concerning the $`AdS/CFT`$ correspondence. It is then necessary to discover if there is any conflict between the conclusions reached here and the many results which have been found which support some version of the $`AdS/CFT`$ conjecture. We find that there is not. This is likely because most of the results so far found are consequences of much weaker assumptions, which involve only the transformation properties of observables under the super-symmetric extension of $`SO(D,2)`$. In fact, Rehren has shown rigorously that a correspondence will always exist between theories on an $`AdS_D`$ background and conformal field theories on $`Mink_{D1}`$, subject only to the condition that the latter exist. The results so far found concerning $`AdS/CFT`$ then may hold as a consequence of this theorem. To the extent that this is true they do not then provide any independent evidence for a strong version of the holographic principle that would go beyond this case.
Does this mean that there is something wrong with the idea that the holographic principle may play a role in string theory, as was suggested by the original arguments for the $`AdS/CFT`$ correspondence? Certainly not; what it means is that if it is to go beyond the level of a description in terms of the dynamics of strings and branes in fixed classical background spacetimes, string theory must be formulated in a background independent language. Forms of the holographic principle that may suffice in the context of physics on a single fixed background are likely to be of limited validity, but the result of our arguments is that there are forms of the bounds and principle that may hold in a background independent theory. In fact, as we argued in , the weak holographic principle may hold in background independent formulations of string theory.
We now give an outline of the paper, emphasizing the logical structure of its argument.
This paper is divided into two parts. The first concerns entropy bounds. In the next section we discuss the weak and strong versions of the Bekenstein bound and establish the claims made above.
In section 3 we turn our attention to cosmology and find that the weak and strong Bekenstein bounds each imply a cosmological bound, called respectively the weak and strong cosmological entropy bounds. As in the non-cosmological case, the strong form cannot be derived without making the strong entropy assumption.
In section 4 we then give five counterexamples to the strong cosmological holographic bound. These are
1. The gravitational collapse problem.
2. The inflation problem.
3. The wiggly surface problem.
4. The two-sided problem.
5. The throat problem.
The conclusion is that the strong cosmological entropy bound is false. Since this followed from known physics plus the strong entropy assumption, the likely conclusion is that the latter cannot hold in a gravitational theory.
We then describe, in section 6, the new cosmological entropy bound proposed by Bousso, which we call the null entropy bound. It seems to be correct at the classical and semiclassical level, as a bound on the matter entropy in a fixed spacetime. However, to play a role in a quantum theory of gravity, an entropy bound should extend to the case in which the gravitational degrees of freedom are dynamical. In section 7 we present two arguments why the null entropy bound cannot hold once the gravitational degrees of freedom are turned on, either classically or quantum mechanically.
The only form of an entropy bound that can survive at the level of a full quantum theory of cosmology is then the weak form. This is the conclusion of our discussion of entropy bounds.
The second part of the paper concerns the question of whether, given the conclusions reached in the first part, there is any form of a holographic principle that may hold in a quantum theory of gravity. Such a principle must give a framework within which to describe the dynamics of the degrees of freedom constrained by the entropy bounds. Most forms of the holographic principle which have been discussed assume the strong form of the entropy bound. Dynamics is then formulated in terms of a map between the bulk and boundary Hilbert spaces that preserves unitary evolution. Since the strong form of the entropy bound seems to disagree with things we believe to be true, such a strong form of the holographic principle is ruled out, at least for the case of gravitational theories.
In section 8 we consider this situation carefully, as it is not what many people’s intuition seems to suggest. We show that there is no contradiction with what we know, even taking into account all the results found concerning the $`AdS/CFT`$ correspondence. We also note that an elegant solution to the black hole information paradox is still available.
We then raise the question of whether there might be a weaker form of the holographic principle which may still hold. We consider first, in section 9, the question of whether some form of the holographic principle may be associated with the null entropy bound. We reach the conclusion that such a principle may exist, but it must be based on a modification of quantum theory in which there are many Hilbert spaces, one for each screen.
However, if the null entropy bound cannot survive the turning on of the gravitational degrees of freedom, neither can the null version of the holographic principle. We are then left with the question of whether there might be a weak version of the holographic principle, which would correspond with the weak entropy bound. We first, in section 10, discuss the question of which two surfaces may be screens in such a formulation. We come to the conclusion that none of the possible criteria for distinguishing screens from other two surfaces can survive passage to the full quantum theory. Therefore every spacelike two surface may be considered a screen. This opens up the possibility of defining geometry in terms of the properties of screens, rather than vice versa.
In section 11 we then list the conclusions of the argument reached till this point, which then may be considered to motivate and constrain the possible forms of a weak holographic principle. One possible form of the principle, given in is then reviewed in section 12. There we also describe briefly two independent arguments for a weak form of the holographic principle. These come from considerations of the role of quasi-local observables in general relativity and, in a form made originally by Crane, from relational formulations of quantum cosmology.
The paper then closes with a short summary of the main conclusions.
## Part I ENTROPY BOUNDS
### 2 The weak and strong forms of the Bekenstein bound
The different possible entropy bounds, as well as the different possible forms that the holographic principle might take, have their origin in the fact that different meanings may be given to the entropy of a black hole. To see this, let us distinguish
* The thermodynamic black hole entropy
$$S_{bh}=\frac{A}{4G\hbar }$$
(1)
which enters the laws of black hole mechanics. (Here $`A`$ is the area of the black hole horizon.)
* $`I_{bh}^{weak}`$, Weak black hole entropy: This is a measure of how much information an observer external to its horizon can gain about its interior, from measurements made outside the horizon. Besides the mass, angular momentum and charges, this includes measurements of the quanta emitted by the black hole.
* $`I_{bh}^{strong}`$, Strong black hole entropy: This is a measure of how much information is contained in the interior of the black hole. This can also be expressed as the “number of degrees of freedom” inside the black hole, or the number of distinct ways in which it might have been assembled.
We may note that the generalized second law requires that
$$I_{bh}^{weak}\le S_{bh}$$
(2)
This is because all of the arguments for it concern exchanges of matter and radiation between the black hole and observers situated outside its horizon. They do so because they assume that the semiclassical approximation is valid so that the only way that matter or information can cross from the interior to the exterior is in the form of thermalized Hawking radiation.
We may note that the number of quanta emitted by a black hole during Hawking evaporation is of the order of (1); this is consistent with eq. (2).
We are, of course, ignorant of what happens when a black hole evaporates to any state in which it has a mass of the order of the Planck scale. The only thing we know with any confidence is that the semiclassical approximation breaks down. To attack this problem many authors either implicitly or explicitly make the following assumption
* Strong entropy assumption:
$$I_{bh}^{strong}=I_{bh}^{weak}$$
(3)
This is an attractive assumption. For example, it suggests that there may be a single Hilbert space, which is a representation of operators at infinity, within which the full evolution of a system, from prior to black hole collapse to the aftermath of complete evaporation, may be represented as unitary evolution. However, we should note that the logic is not symmetric, as there are remnant scenarios under which unitary evolution does not imply (3). Nor can (3) be supported by any arguments for the generalized second law, as they concern only exchanges of material across the black hole horizon described in the semiclassical approximation. Thus, it is logically possible that (3) is false and that
$$I_{bh}^{strong}>I_{bh}^{weak}$$
(4)
It is also possible that $`I_{bh}^{strong}`$ is not even a well defined quantity.
The arguments which are usually taken as supporting some version of the holographic hypothesis depend strongly on whether or not (3) is assumed. To see this, let us run the standard Bekenstein argument<sup>2</sup><sup>2</sup>2This form of the argument is taken from , but it is due originally to Bekenstein. There is also some confusion because a different bound, $`S<RE`$, where $`E`$ is the energy of a system, was also postulated by Bekenstein to hold in ordinary quantum field theory, in the absence of gravity. The bound we need, (2), is logically weaker than that, and so arguments against this hypothesis do not necessarily contradict (2).
#### The Bekenstein argument
Consider a timelike three dimensional region $`\mathcal{R}`$ of an asymptotically flat spacetime $`\mathcal{M}`$, the quantum dynamics of which we wish to study. We will assume $`\mathcal{R}`$ has topology $`\mathcal{R}=\mathrm{\Sigma }\times R`$, where $`\mathrm{\Sigma }`$ is a spatial manifold. We will restrict attention to the physics within $`\mathcal{R}`$ by the imposition of boundary conditions on $`\partial \mathcal{R}=\partial \mathrm{\Sigma }\times R`$. We will denote $`𝒮=\partial \mathrm{\Sigma }`$. These will restrict the degrees of freedom of the gravitational field on the boundary; as a result a reduced set of observables will be able to vary at the boundary.
Let $`𝒜_𝒮`$ be the complete algebra of the unconstrained observables on a spatial slice of the boundary, $`𝒮`$. This will have a representation on a Hilbert space $`\mathcal{H}_𝒮`$. We will always assume that $`\mathcal{H}_𝒮`$ is the smallest non-trivial representation, i.e. it contains no operators that commute with the representatives of $`𝒜_𝒮`$. We will call these the boundary observables algebra and boundary Hilbert space. We may assume that among the elements of $`𝒜_𝒮`$ are the Hamiltonian, $`H_𝒮`$, and the areas of regions $`\mathcal{B}`$ of the boundary $`𝒮`$, which we will denote $`A[\mathcal{B}]`$. Recall that in general relativity the Hamiltonian is, up to terms proportional to constraints, defined as an integral on the boundary and is thus an element of $`𝒜_𝒮`$.
Since the system contains gravitation, we may assume that among the states in $`\mathcal{H}_𝒮`$ is a subspace which corresponds to the presence of black holes in $`\mathrm{\Sigma }`$. These are semiclassical statistical states, and we will assume that their statistical entropies, given by the dimensions of the corresponding subspaces of $`\mathcal{H}_𝒮`$, are given by the usual formula (1) in the semiclassical limit when their masses and areas are large in Planck units.
We will consider only systems in thermal equilibrium. This rules out examples from cosmology or astrophysics in which the thermalization time or light-crossing time is longer than the time in which the system will gravitationally collapse.
The argument is simplest in the case that we assume that the induced metric on $`𝒮`$ is spherical, up to small perturbations corresponding to weak gravitational waves passing through the boundary. The argument proceeds by contradiction. We assume that the region $`\mathrm{\Sigma }`$ can contain an object $`𝒪`$ whose complete specification in the boundary Hilbert space $`\mathcal{H}_𝒮`$ requires an amount of information $`I_𝒪`$ which is larger than
$$I_𝒮=\frac{A[𝒮]}{4l_{Pl}^2}$$
(5)
which is of course the entropy of a black hole whose horizon just fits inside of $`𝒮`$.
Let us assume that initially we know nothing about $`𝒪`$, so that $`I_𝒪`$ is a measure of the entropy of the system. However, with no other information we can conclude that $`𝒪`$ is not a black hole, as the largest entropy that could be contained in any black hole in $`\mathrm{\Sigma }`$ is $`I_𝒮`$. We may then argue, using the hoop theorem, that the energy contained within $`\mathrm{\Sigma }`$ (as measured either by a quasi-local energy on the surface or at infinity) must be less than that of a black hole whose horizon has area $`A[𝒮]`$. But this being the case, we can now add energy slowly to the system, bringing it up through an adiabatic transformation to the mass of that black hole. By the hoop theorem this will have the result of transforming $`𝒪`$ into the black hole whose horizon just fits inside the sphere $`𝒮`$.
This can be done by dropping quanta slowly into the black hole, in a way that does not raise the entropy of its exterior. As a result, once the black hole has formed we know the entropy of the system: it is $`I_𝒮`$. But we started with a system with entropy $`I_𝒪`$, which we assumed is larger. Thus, we have violated the generalized second law of thermodynamics. The only way to avoid this is if $`I_𝒪<I_𝒮`$.
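In compressed form, the contradiction is simply

$$S_{initial}=I_𝒪>I_𝒮=S_{final},$$

a decrease of the total entropy in the adiabatic transformation just described, which the generalized second law forbids.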
Since this is a bound on the total information that could be represented in $`\mathcal{H}_𝒮`$, we have
$$\mathrm{ln}Dim\left[\mathcal{H}_𝒮\right]=I_𝒮=\frac{A[𝒮]}{4l_{Pl}^2}$$
(6)
We may remark that this argument employs a mixture of classical, statistical and semiclassical reasoning. For example, it assumes that the hoop theorem from classical general relativity applies, in the case of black hole masses large in Planck units, to real, quantum black holes. One might attempt to make a detailed argument that this must be the case if the quantum theory is to have a good classical limit. However worthy a task that may be, it will not be pursued here, as it is unlikely that any such argument can be elevated to establish the necessity, rather than the plausibility, of the Bekenstein bound in the absence of a complete theory of quantum gravity.
#### What does the Bekenstein argument imply?
We may note that the above argument involves only the weak form of the black hole entropy, $`I_{bh}^{weak}`$. This is because what is under discussion is the description of the system as given by states in the boundary Hilbert space $`\mathcal{H}_𝒮`$. This is, as emphasized, a representation of the algebra of operators $`𝒜_𝒮`$ measurable on the boundary. This is sufficient to make the argument, as the crucial steps involve a) use of the hoop theorem and b) adiabatically feeding energy into the system, both of which concern only measurements or operations which may be made on the boundary. The conclusion of the argument then only concerns the dimension of $`\mathcal{H}_𝒮`$, and hence only external measurements. We may express this by saying that the argument demonstrates the
* Weak Bekenstein bound: Let a system $`\mathrm{\Sigma }`$ be defined by the identification of a fixed boundary $`\partial \mathrm{\Sigma }=𝒮`$, and a Hilbert space $`\mathcal{H}_𝒮`$ be defined as the smallest faithful representation of the algebra of observables $`𝒜_𝒮`$ measurable on the boundary only. Either the area $`A[𝒮]`$ is fixed, or it is in $`𝒜_𝒮`$. In the first case,
$$Dim\mathcal{H}_𝒮\le e^{\frac{A[𝒮]}{4G\hbar }}$$
(7)
where $`G`$ is the physical, macroscopic Newton’s constant. In the case that $`A[𝒮]\in 𝒜_𝒮`$, the Hilbert space $`\mathcal{H}_𝒮`$ must be decomposable into eigenspaces of $`A[𝒮]`$ such that (7) is true in each.
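To fix the scale of the bound, an illustrative aside: for a spherical screen of radius $`1`$ cm, with $`l_{Pl}\approx 1.6\times 10^{-35}`$ m, the bound allows

$$\mathrm{ln}Dim\mathcal{H}_𝒮\le \frac{4\pi \times (10^{-2}\mathrm{m})^2}{4\times (1.6\times 10^{-35}\mathrm{m})^2}\approx 1.2\times 10^{66},$$

an enormous, but finite, number.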
Without further assumptions this implies nothing for quantities that refer essentially to the “bulk” such as “the number of degrees of freedom contained in the region $`\mathrm{\Sigma }`$”. In order that the argument goes further we may add to it the independent assumption that (3) holds. This then does imply the
* Strong Bekenstein bound: Under the same assumptions, let $`\mathcal{H}_{bulk}`$ be the smallest faithful representation of the algebra of local observables measurable in the interior of $`\mathrm{\Sigma }`$. Then
$$Dim\mathcal{H}_{bulk}\le e^{\frac{A[𝒮]}{4G\hbar }}$$
(8)
To summarize, the important points are that the generalized second law implies only the weak Bekenstein bound, and that the strong entropy assumption is an independent hypothesis. The logic is then that
$$\text{Black hole thermodynamics}+\text{second law}+\text{hoop theorem}\Rightarrow \text{weak Bekenstein bound}.$$
(9)
and
$$\text{weak Bekenstein bound}+\text{strong entropy assumption}\Rightarrow \text{strong Bekenstein bound}$$
(10)
### 3 The weak and strong cosmological entropy bounds
We now turn to the question of whether some form of a holographic bound may apply to a cosmological theory in which no boundary conditions have been enforced. Let us consider any closed surface, $`𝒮`$, which bounds a region $`\mathcal{R}`$ in a compact spatial slice, $`\mathrm{\Sigma }`$, of a cosmological spacetime. No boundary conditions have been imposed on $`𝒮`$; thus its interior, $`\mathcal{R}`$, should contain more “degrees of freedom” than would be the case were boundary conditions imposed, because boundary conditions always act by suppressing degrees of freedom, and hence reducing the number of classical solutions, in the neighborhood of the boundary. This means that the above bounds have implications for the representation spaces of algebras of observables that describe regions without boundary conditions imposed.
To make this precise, let $`𝒜_𝒮^{free}`$ be the total algebra of observables measurable on $`𝒮`$, when no boundary conditions have been imposed, and let $`𝒜_𝒮^{bc}`$ be the algebra of observables which remain unconstrained when a particular set of boundary conditions have been imposed. Let $`\mathcal{H}_𝒮^{free}`$ and $`\mathcal{H}_𝒮^{bc}`$ be their corresponding representation spaces. Clearly $`𝒜_𝒮^{bc}\subset 𝒜_𝒮^{free}`$, which implies that
$$\mathcal{H}_𝒮^{bc}\subset \mathcal{H}_𝒮^{free}$$
(11)
This means that
$$dim(\mathcal{H}_𝒮^{bc})\le dim(\mathcal{H}_𝒮^{free})$$
(12)
We assume that the set of variables which are fixed by the boundary conditions makes up a commuting subalgebra of $`𝒜_𝒮^{free}`$; otherwise they could not all be imposed at once. It is also natural to assume that the amount of information concerning the state in $`\mathcal{H}_𝒮^{free}`$ which is necessary to fix the boundary conditions is proportional to the area $`A[𝒮]`$. It then follows that
$$\mathrm{ln}dim(\mathcal{H}_𝒮^{free})=\mathrm{ln}dim(\mathcal{H}_𝒮^{bc})+\alpha \frac{A[𝒮]}{G\hbar }$$
(13)
where $`\alpha `$ is some dimensionless constant. We call this the boundary condition area assumption.
By putting this together with the weak Bekenstein bound for the system with boundary conditions, (7), we find that
$$\mathrm{ln}dim(\mathcal{H}_𝒮^{free})\le \left(\frac{1}{4}+\alpha \right)\frac{A[𝒮]}{G\hbar }$$
(14)
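Explicitly, the step is the substitution of the bound (7) into (13):

$$\mathrm{ln}dim(\mathcal{H}_𝒮^{free})=\mathrm{ln}dim(\mathcal{H}_𝒮^{bc})+\alpha \frac{A[𝒮]}{G\hbar }\le \frac{A[𝒮]}{4G\hbar }+\alpha \frac{A[𝒮]}{G\hbar }=\left(\frac{1}{4}+\alpha \right)\frac{A[𝒮]}{G\hbar }.$$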
Note that this holds even though no boundary conditions have been applied at $`𝒮`$. Thus, we have a bound that applies to surfaces inside cosmological spacetimes.
* Weak cosmological entropy bound: Let $`𝒮`$ be a spacelike surface of spacetime codimension $`2`$ that splits a complete spacelike hypersurface into two regions, let $`𝒜_𝒮^{free}`$ be the complete algebra of observables measurable on $`𝒮`$ and let $`\mathcal{H}_𝒮^{free}`$ be its smallest representation space. Then,
$$\mathrm{ln}dim(\mathcal{H}_𝒮^{free})\le C\frac{A[𝒮]}{G\hbar }$$
(15)
for some $`𝒮`$ independent constant $`C`$.
There is also a strong form of this argument. If we assume the strong entropy assumption, (3), then the same argument leads to
* Strong cosmological entropy bound: Let $`𝒮`$ be a spacelike surface of spacetime codimension $`2`$ that splits a complete spacelike hypersurface into two regions, let $`𝒜_𝒮^{strong}`$ be the complete algebra of observables measurable on the interior of $`𝒮`$ and let $`\mathcal{H}_𝒮^{strong}`$ be its smallest representation space. Then,
$$\mathrm{ln}dim(\mathcal{H}_𝒮^{strong})\le C\frac{A[𝒮]}{G\hbar }.$$
(16)
We again summarize the logic,
$$\text{weak Bekenstein bound}+\text{b.c. area assumption}\Rightarrow \text{weak cosmological entropy bound}$$
(17)
$$\text{weak cosmological entropy bound}+\text{strong entropy assumption}\Rightarrow \text{strong cosmological entropy bound}$$
(18)
### 4 Counterexamples to the strong cosmological entropy bound
Unfortunately, the strong cosmological entropy bound contradicts known physics. This is shown by the following five counterexamples.
#### The gravitational collapse problem
Consider a co-moving region $`R(\tau )`$ in a closed Friedmann-Robertson-Walker cosmology, where $`\tau `$ is the standard $`FRW`$ time coordinate. Let us assume that at the time of maximum expansion, $`\tau _0`$, $`R(\tau _0)`$ contains a uniform gas with entropy $`S(\tau _0)`$, while its boundary has area $`A(\tau _0)`$. If we assume the strong cosmological holographic bound (16) then $`S(\tau _0)<A(\tau _0)/4G\hbar `$. However, as the volume of the universe decreases after $`\tau _0`$, so will $`A(\tau )`$. But, by the second law, $`S(\tau )`$ will increase. There will then be a time $`\tau _1`$ such that $`S(\tau _1)=A(\tau _1)/4G\hbar `$. After that the strong cosmological bound will be violated. Since the spacetime geometry, and the distribution of gas, are uniform, the bound cannot be saved by the formation of a black hole. A similar problem occurs for boxes of radiation dropped into black holes.
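The scaling behind this counterexample can be made explicit. Since the region is comoving, its boundary area scales with the scale factor as $`A(\tau )\propto a(\tau )^2`$, while the second law gives $`S(\tau )\ge S(\tau _0)`$; hence

$$\frac{S(\tau )}{A(\tau )/4G\hbar }\ge \frac{S(\tau _0)}{A(\tau _0)/4G\hbar }\left(\frac{a(\tau _0)}{a(\tau )}\right)^2\to \mathrm{\infty }\quad \text{as}a(\tau )\to 0,$$

so the bound (16) must fail at some finite time $`\tau _1`$ during the recollapse.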
Note that this example escapes the conditions of the Bekenstein argument because the universe is not asymptotically flat.
#### The inflation problem
It is not hard to see that inflation provides counterexamples to the strong cosmological holographic bound, arising from the fact that in the aftermath of inflation a universe will have approximately uniform regions exponentially larger than the Hubble scale<sup>3</sup><sup>3</sup>3This argument has been raised independently in . The real horizon size $`R_H`$ at any given time can then be arbitrarily large compared to the Hubble scale $`H^{-1}`$, and still contain entropy created in a single causally connected region since the initial singularity.
To see this we follow the exposition of Kolb and Turner, . We follow a causally connected region which begins as a patch of the size of $`H^{-1}`$ at the time inflation starts, which is equal to
$$H^{-1}=\frac{m_P}{M^2}=R_{initial}$$
(19)
where $`M`$ is a mass scale associated with the inflaton potential, which is between the Planck scale and the weak scale. The past lightcone of this patch will just touch<sup>4</sup><sup>4</sup>4so that it corresponds to the case Fischler and Susskind considered. the initial surface $`t=0`$.
There is then a period of inflation, in which the patch expands to a size $`e^NR_{initial}`$ which is followed by a period of reheating, during which it expands by a further factor of
$$\left(\frac{M^4}{T^4}\right)^{1/3}$$
(20)
where $`T`$ is the reheating temperature. During reheating a bath of black body radiation is created, with temperature $`T`$, from the dissipation of the inflaton field, after which the inflaton sits in the bottom of its potential and the universe is, to a very good approximation, spatially flat.
$$N>60$$
(21)
Just after reheating we may try to apply the Bekenstein bound to the huge bubble that the patch has grown into, which is of radius
$$R_r=l_Pe^N(\frac{m_P}{T})^{4/3}(\frac{m_P}{M})^{2/3}$$
(22)
as space is flat it encloses a volume $`(8\pi /3)R_r^3`$, which contains an entropy
$$S_r=\frac{8\pi \nu }{3}T^3R_r^3=\frac{8\pi \nu }{3}e^{3N}\frac{m_P^3}{M^2T}$$
(23)
where
$$\nu =\frac{\pi ^2}{40}g^{*}$$
(24)
is of order 20 as $`g^{}`$ is of order 100. If we ask that this entropy be bounded by $`1/4`$ the horizon area in Planck units we have
$$S_r\le 4\pi ^2R_r^2$$
(25)
This seems to put a bound on $`N`$ which rules out inflation. This happens because the entropy contained in the horizon grows as $`e^{3N}`$ while its area only grows as $`e^{2N}`$. The result is that (6) implies a strict bound on $`R_r`$,
$$R_r\le \frac{3}{2\nu }(\frac{m_P}{T})^3$$
(26)
which means that the number of e-foldings is bounded by
$$N\le \mathrm{ln}\frac{3}{2\nu }+\frac{5}{3}\mathrm{ln}\frac{m_P}{T}-\frac{2}{3}\mathrm{ln}\frac{m_P}{M}.$$
(27)
If we use physically reasonable values for $`M`$ and $`T`$ it is impossible that there were as many as $`60`$ e-foldings. Thus, the strong cosmological holographic bound (16) is in conflict with the standard inflationary scenario.
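To make the conflict quantitative, one can evaluate the bound (27) numerically. The following sketch does so for a few representative (not canonical) choices of the scales $`M`$ and $`T`$, in GeV; in each case the maximal number of e-foldings falls well short of $`60`$.

```python
import numpy as np

m_P = 1.22e19   # Planck mass in GeV
nu  = 20.0      # pi^2 g*/40 with g* of order 100

def N_max(M, T):
    """Upper bound on the number of e-foldings from eq. (27)."""
    return (np.log(3.0 / (2.0 * nu))
            + (5.0 / 3.0) * np.log(m_P / T)
            - (2.0 / 3.0) * np.log(m_P / M))

for M, T in [(1e16, 1e9), (1e16, 1e15), (1e13, 1e9)]:
    print(f"M = {M:.0e} GeV, T = {T:.0e} GeV  ->  N_max ~ {N_max(M, T):.1f}")
# Representative outputs: N_max ~ 31, 8, and 27 respectively.
```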
What went wrong? To see that the problem is inflation we may note that if we were ignorant of inflation having taken place, and took the inverse Hubble scale $`H^{-1}`$ for the horizon size just after reheating instead of the much larger $`R_r`$, the strong cosmological holographic bound (16) yields the reasonable statement that
$$T\le \sqrt{\frac{2}{3\pi \nu }}m_P.$$
(28)
So, we may wonder, why isn’t the huge region of radius $`R_r`$ unstable to gravitational collapse? It clearly is, for it has a Schwarzschild radius
$$R_{Sch}=l_P4\pi \nu e^{3N}(\frac{m_P}{M})^2$$
(29)
Requiring that $`R_r>R_{Sch}`$ yields an even stricter bound on the number of e-foldings,
$$N<\frac{2}{3}\mathrm{ln}(\frac{M}{T})-\frac{1}{2}\mathrm{ln}(4\pi \nu )$$
(30)
So the region created by inflation is unstable to gravitational collapse. Given any inhomogeneities these will grow and, if they are large enough, form black holes. But this is nothing new, it is just the process of galaxy and structure formation. Indeed, because $`\mathrm{\Omega }`$ is now very close to one, large regions of the bubble must be unstable to the gravitational collapse that must eventually occur in any region in which, locally, $`\mathrm{\Omega }>1`$.
All of the entropy contained in the region blown up by inflation corresponds to ordinary thermal fluctuations in the radiation produced by reheating. As the process of reheating is an ordinary physical process, and as the inflaton field may be assumed to have been in a coherent state before inflation began, we must believe that the region contains all the degrees of freedom given naively by the entropy we have computed. That entropy is, indeed, a measure of how much information would be needed to determine the precise quantum state which resulted from the process of reheating.
However, because the causal horizon has blown up to such a big size from inflation, the information required is much greater, for standard inflation models, than the area of the horizon just after reheating in Planck units. (Recall indeed that in most standard inflationary scenarios, $`N`$ is much greater than its minimal value of $`60`$ and may easily be $`>10^4`$.)
The inflation problem shows that there can be a surface in the universe, the causal horizon, $`S_{ch}`$, whose information content, proportional to its area, is too small to reconstruct the state of all the thermal photons in its interior. Is this a problem for a consistent cosmological formulation of the holographic principle? To answer this we have to ask what information about the interior may arrive at a surface at the causal horizon. The key point is that because of the exponential expansion, an observer there is not able to observe that thermalization has taken place over all but a small shell of the interior, in the neighborhood of $`S_{ch}`$. For the rest the observer can see causal effects only from the region prior to inflation and reheating, when a description in terms of a pure state is completely adequate. This is because, prior to reheating, the state during that era is very close to the vacuum, and hence can be described with very little information.
We may contrast this with the information available for a surface $`S_{oh}`$ within the conventional ordinary horizon, with $`r<H^{-1}`$, the Hubble scale. An observer at such a surface sees the region in the interior after reheating and thus sees a thermal distribution of photons. But the region is small enough that enough information is available on $`S_{oh}`$ to reconstruct the states of those thermal photons.
Is there a conflict between these two descriptions? No, not if one takes into account two facts. First, the observer at the smaller surface must see a mixed state, because the photons in the interior of $`S_{oh}`$ will be correlated with photons in their exterior. Only from the much larger surface $`S_{ch}`$ can an observer reconstruct a pure state, because they see all the correlations between the thermal photons created by the inflation and subsequent reheating. However, it takes much less information to describe the pure state than to describe the thermal state, in the whole of the interior of $`S_{ch}`$ because once the quantum correlations are neglected one must account for all the individual states of all the individual photons.
Second, because of causality, the observer at the larger surface $`S_{ch}`$ is only able to observe the state in the interior at a much earlier time, before inflation and reheating, when a pure state description, requiring much less information, is appropriate.
This examples teaches us that the holographic bound concerns only the information available on a surface $`S_{ch}`$ by virtue of quanta which reach it from the interior. This is not the same information as would be required to reconstruct the state of the system on a spacelike surface spanning $`S_{oh}`$. They are different because by causality, the information available on a surface is that information that can reach the surface by causal propagation of information from the interior<sup>5</sup><sup>5</sup>5One may ask why this example does not provide a counterexample to the Bekenstein bound. The reason is that a large region of an inflationary universe is excluded because the light crossing, and hence thermalization time is long compared to the time scale for subregions to gravitationally collapse. These cases were excluded explicitly in the argument, because the step in which one evolves equilibrium states adiabatically by slowly dripping in energy cannot be realized..
#### The wiggly surface problem
Next, we consider three two-dimensional surfaces in a compact spatial slice $`\mathrm{\Sigma }`$. The first two are $`𝒮_1`$ and $`𝒮_2`$, with $`𝒮_1\subset Int(𝒮_2)`$, where $`Int(𝒮_2)`$ is the region in $`\mathrm{\Sigma }`$ to the interior of $`𝒮_2`$. For example, these could be constant $`r`$ surfaces in a constant $`t`$ slice of Schwarzschild-de Sitter (using standard coordinates) with $`r_1<r_2`$. In such a case we can choose $`A_1<A_2`$, which implies that less information could be represented on $`𝒮_1`$ than on $`𝒮_2`$. This makes sense because there is a region between the two surfaces that contains physics that may be observed by the observer at $`S_2`$ that is not observed by the observer at $`S_1`$.
Now consider a surface $`S_1^{\prime }`$, just to the interior of $`S_1`$, which is gotten by displacing $`S_1`$ slightly into its interior and then wiggling it, for example by superposing on it some set of waves. The wiggled surface can easily have area $`A_1^{\prime }>A_2>A_1`$. What are we to make of the apparent fact that the surface $`S_1^{\prime }`$ can contain an amount of information greater than the other two? If $`𝒮_1`$ contains all the information about its interior, then the information coded on $`S_1^{\prime }`$ cannot be greater than that coded on $`S_1`$. But as it has a greater area, it seems to have a greater information capacity.
The wiggly surface problem tells us that the area that is relevant for the measure of information is not the actual area of the surface $`𝒮`$. Rather it must correspond to the information reaching $`𝒮`$ from its interior. This can be achieved if we identify the surface $`𝒮`$ with a cross-section $`\sigma (𝒮)`$ of a congruence of light rays which intersect $`𝒮`$. We may note that the original arguments of Susskind and others were phrased in terms of such congruences of light rays.
This has an important implication. The bounds on the information on a screen, $`𝒮`$, cannot refer just to that surface. It must refer instead to the minimal area of cross-sections through a congruence of light rays that arrive at $`S`$ from the past.
#### The two-sided problem
Consider now a surface $`𝒮`$ of area $`A`$ and topology $`S^2`$ embedded in a compact spatial manifold $`\mathrm{\Sigma }`$, which we take to be an $`S^3`$. Then $`𝒮`$ splits the universe into two three-balls $`B^\pm `$, such that $`\mathrm{\Sigma }=B^+\cup B^{}`$, each bounded by a side of $`𝒮`$, which we will call $`𝒮^\pm `$. The problem is that if $`𝒮`$ is a screen there are actually two possible holographic descriptions, associated with $`𝒮^\pm `$. One should code a description of $`B^+`$, the other a description of $`B^{}`$.
Assume that the universe is in a semiclassical state, so that we may to a reasonable approximation describe the geometry of $`\mathrm{\Sigma }`$ classically. Then consider taking $`A`$ in Planck units smaller and smaller. The holographic principle must associate to each screen $`𝒮^\pm `$ a state space $`\mathcal{H}^\pm `$. These must have the same dimension, which is shrinking as $`e^{A/4l_{pl}^2}`$. But one of the two balls, say $`B^+`$, contains almost the whole universe, while the other, $`B^{}`$, contains only a small region. Since the universe is classical, it is large in Planck units. We have no problem imagining that the physics in $`B^{}`$ is coded in a state space of dimension bounded by $`e^{A/4l_{pl}^2}`$, as that is a very small region. But it seems the physics in $`B^+`$, which is almost the entire universe, must also be describable in terms of a state space of this small dimension.
This seems at first paradoxical. It seems that an arbitrarily small screen may be required to code information about an arbitrarily large region. Is it possible to resolve this paradox?
It is, if we apply the conclusion of the wiggly surface problem. We see that what is relevant is not the information that may reside on a spacelike surface spanning $`𝒮^+`$ and $`𝒮^{}`$, but the information reaching those surfaces transmitted by a congruence of light rays from their pasts. This is not necessarily the same thing, because to transmit information the light must be focused on the surface. We must then consider the cost in entropy of focusing light from a large universe onto a small surface. The apparent loss of information in recording the holographic image of the large universe on a small surface may be explained if the entropy generated (or information required) by the process of focusing the light on the surface is large.
Consider a small surface, $`S`$, with area of $`100l_{pl}^2`$ in a large universe with volume $`10^{180}l_{Pl}^3`$. In order for information about the state of the whole universe to arrive at $`S`$, a congruence of light rays originating all over the universe must be focused very precisely so that its focal plane is the surface $`S`$ at a fixed time $`t`$ (measured by a clock at $`S`$). In a universe in thermal equilibrium, the operation of focusing a congruence of light rays so precisely, in a manner that compensates for all the structure in the gravitational field due to the presence and motion of matter, will generate a huge amount of entropy. The result<sup>6</sup><sup>6</sup>6This can be made more quantitative, and will be, elsewhere. Note that as we are discussing the properties of a full quantum theory of gravity, counter-examples based on classical solutions with isometries are irrelevant. It is always possible to find counterexamples to statistical theorems from examples with non-generic symmetries; consider for example the ellipsoid with a point source of light at one focal point. It apparently will not stay in equilibrium. is that a great deal of information about the universe is then stored, not on $`S`$, but in the configuration of matter or lenses that had to be organized in order to get the light to focus on $`S`$.
This will only be unnecessary if the universe is completely symmetric. However such a universe will, by virtue of its symmetry, contain only a few bits of information<sup>7</sup><sup>7</sup>7It might be objected that there is a limit in which the area, and hence the information capacity, of the surface is strictly zero. But this is not true; under quite generic assumptions in quantum gravity and supergravity there is a minimal unit of area, which is greater than zero..
#### The throat problem
There are spacetimes, $`(M,g)`$, in which the following situation occurs. $`(M,g)`$ contains a spacelike slice $`\mathrm{\Sigma }`$, with three embedded two dimensional surfaces $`𝒮_1,𝒮_2,𝒮_3`$, with $`𝒮_3\subset Int(𝒮_2)\subset Int(𝒮_1)`$, but in which their areas satisfy $`A_1>A_2<A_3`$. This can happen if, for example, $`(M,g)`$ is a Kruskal completion of a black hole solution, $`𝒮_1`$ is a surface outside the horizon, $`𝒮_2`$ is the throat of the black hole, and $`𝒮_3`$ is a two surface which is at smaller $`r`$ than the throat, and is topologically contained in it, but yet has larger area<sup>8</sup><sup>8</sup>8This example is not cosmological, but can easily be made so by inserting the black hole into a cosmological solution.
There is a large class of such examples in which the Kruskal spacetime is truncated inside the throat, and a compact region is glued on containing matter, describing what is sometimes called a “baby universe”. Such universes are conjectured to arise in a large class of scenarios in which quantum effects lead to an avoidance of the formation of the singularity. The problem is that the baby universe is topologically inside $`𝒮_2`$, but contains $`2`$-surfaces which have a larger area.
To make the problem more worrying we can also imagine that $`A_3>A_1`$. In such situations it seems that an observer inside the baby universe at $`𝒮_3`$ can have more information about the contents of the baby universe than can the observer at $`𝒮_1`$, who is outside the horizon of the black hole. This means that the observer at $`𝒮_1`$ may not have enough information to reconstruct the whole state in the interior of the black hole.
### 5 Identifying the wrong assumption
It is difficult to see how to escape the conclusion that the strong cosmological entropy bound is false. One might hope for escapes from one or two of the counterexamples, but it is difficult to see how to escape all of them. The first two are particularly difficult, as it would be hard to accept a universe without either gravitational collapse or inflation. The fact that the strong cosmological entropy bound prohibits either, in principle, means that it must be in conflict with the basic principles of general relativity.
If the strong entropy bound is false, then so must be at least one of the assumptions that went into its derivation. This is why we have been careful to summarize the logic at each step. The assumptions that went into it are:
* The first and (generalized) second law of thermodynamics
* Classical and semiclassical black hole thermodynamics.
* The hoop theorem.
* The boundary condition area assumption, eq. (13).
* The strong entropy assumption.
Of these, the boundary condition area assumption is a technical assumption that helps make the argument cleanly, but if we had to drop it we could still construct the counterexamples. They would just take place within a box, rather than in a cosmological spacetime, and so they would then contradict the strong form of the Bekenstein bound. But they would bite no less in that context.
There is a great deal of evidence for classical and semiclassical black hole thermodynamics and we have no independent evidence that the basic principles of thermodynamics are not to be trusted in this regime. The hoop theorem is also well understood and established. The only assumption on this list without independent support is the strong entropy assumption. It must then be wrong.
In fact, as we have emphasized, none of the arguments in the subject provide any independent support for the strong entropy assumption. There is then no argument for the validity of anything stronger than the weak Bekenstein bound and weak cosmological entropy bound.
### 6 The null entropy bound
We now turn to a different kind of entropy bound, proposed by Bousso, following a suggestion of Fischler and Susskind. They proposed that the bound restricts the information, or number of degrees of freedom, on null, rather than spacelike, surfaces bounding the screens.
This principle may be stated as follows.
* Null entropy bound (Bousso). We fix a spacetime manifold $`(\mathcal{M},g)`$ on which a quantum field theory has been defined. A screen $`𝒮`$ will be an oriented two dimensional spacelike surface, possibly open. We then consider one of the four congruences of null geodesics which leave the screen orthogonally, either to the future or the past, and to the left or right of the screen. These may be labeled $`L_{l,r}^\pm `$. We call each a light surface associated to $`𝒮`$.
* Each of the four light surfaces $`L_{l,r}^\pm `$ may contain a subsurface, $`\mathcal{L}`$, which satisfies the following condition: The expansion of null rays $`\theta `$ (in the direction going away from $`𝒮`$) is non-positive at each point of $`\mathcal{L}`$. If the boundary of $`\mathcal{L}`$ contains $`𝒮`$ then we call $`\mathcal{L}`$ a light sheet of $`𝒮`$. The boundary of $`\mathcal{L}`$ will generally contain, besides $`𝒮`$, a set on which the condition $`\theta \le 0`$ fails to hold, either because of the existence of crossing points or caustics, or because the lightsheet intersects a singularity of the spacetime.
* Now, let $`\stackrel{~}{s}^a`$ be the entropy current density of matter, so that
$$S[\mathcal{L}]=\int _\mathcal{L}d^3x_a\stackrel{~}{s}^a$$
(31)
is the entropy crossing the light sheet. Then the null entropy bound is
$$S[\mathcal{L}]\le \frac{A[𝒮]}{4G\hbar }$$
(32)
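As a rough consistency check, paralleling the estimate that led to eq. (26): let $`𝒮`$ be a sphere of radius $`R`$ in a nearly flat region filled with thermal radiation at temperature $`T`$, and let $`\mathcal{L}`$ be the inward-directed future light cone. The entropy crossing $`\mathcal{L}`$ is at most that of the enclosed ball, so (32) requires roughly

$$\nu T^3R^3\lesssim \frac{\pi R^2}{l_{Pl}^2},\quad \text{i.e.}\quad R\lesssim \frac{\pi }{\nu }\left(\frac{m_P}{T}\right)^3l_{Pl},$$

which, for temperatures well below the Planck scale, is implied by the stronger requirement that the ball not lie within its own Schwarzschild radius, $`R\lesssim (m_P/T)^2l_{Pl}`$.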
Before turning to its implications, we should mention a possible counterexample, which was proposed by Lowe, as its refutation shows the subtlety of the null entropy bound. Consider a box containing a Schwarzschild black hole and thermal radiation, which are in equilibrium at a temperature $`T`$. If the box is small enough the ensemble including the black hole has positive specific heat, so the equilibrium is stable. Let us then consider a spherical spacelike two surface, $`𝒮`$, which is a slice of the horizon $`H`$ of the black hole. Lowe suggests that the horizon $`H^+`$ to the future of $`𝒮`$ should be considered a light sheet of $`𝒮`$. But as the geometry is static, $`H^+`$ has no boundary besides $`𝒮`$, so that $`\int _{H^+}d^3x_a\stackrel{~}{s}^a`$ will diverge, since $`\stackrel{~}{s}^a`$ is constant in equilibrium. Thus, (32) is apparently violated.
The problem, as pointed out by Bousso, is that in determining the actual light sheets of $`𝒮`$ we cannot use the static geometry, as that is just an averaged description of the actual spacetime geometry. As in any case in which the second law is invoked in a statistical system, we must be careful to take the thermal fluctuations around equilibrium into account. They cause small fluctuations in the spacetime geometry, the result of which is that the actual light sheets $`\mathcal{L}`$ of $`𝒮`$ will not coincide with $`H^+`$, when the latter is defined in terms of the averaged, static geometry. Instead, the small fluctuations will cause parts of the real light sheet to deviate either inside or outside of the averaged horizon. Those that fall inside will shortly hit the singularity (or else, if the singularity is avoided by a bounce, cross, causing caustics). Those that deviate away from the horizon no longer satisfy the condition that $`\theta \le 0`$. Thus the real light sheets will have outer boundaries. Bousso then argues in that the bound will be satisfied.
We may note also that the possible counterexample could be avoided if one only required that the light surface satisfy $`\theta <0`$, so as to rule out the marginal case $`\theta =0`$. Bousso chooses not to do this as that would eliminate the important example of static black hole horizons.
Finally, we may note that a proof of a closely related conjecture has been given in .
As a result, it appears, at least as of this writing, that Bousso’s null entropy bound agrees with everything we know. It then may be considered to be a useful, and surprising feature of general relativity at the classical and semiclassical level.
### 7 Could the null entropy bound extend to quantum gravity?
While the preceding is very satisfactory, we must note that the null entropy bound is formulated in terms of the behavior of the entropy of matter, on a fixed spacetime background. This is already interesting, but for the possible application to quantum gravity we should ask more. This is because general relativity is a dynamical theory, with its own degrees of freedom. We would then like to know if there is an extension of the bound, even at the classical level, which applies not just to a single spacetime, but to a family of spacetimes which differ by the amplitudes of gravitational waves which may be present in the region containing a screen or a light sheet. If this were the case then a corresponding result would be more likely to hold in quantum gravity, in which there is no fixed spacetime.
Another reason to demand this is that supersymmetry, which seems to be required for the perturbative consistency of quantum gravity, tells us that the distinction between the matter and gravitational degrees of freedom is gauge dependent, and hence not physically meaningful. Furthermore, in perturbative string theory, which seems necessary for perturbative quantum gravity, both the matter and gravitational degrees of freedom arise from excitations of more fundamental degrees of freedom. A bound which requires a strict separation of matter and gravitational degrees of freedom cannot then be formulated in a manner consistent with local supersymmetry and is hence unlikely to extend to supergravity or string theory.
We then investigate, in this section, whether there might hold an extension of the null entropy bound in any of the following cases: i) as a statement on the phase space of general relativity, which allows the fluctuations in the gravitational degrees of freedom to be turned on, ii) in a quantum theory of gravity, or iii) in a locally supersymmetric theory. While we do not decide the question, we find that there are two worrying issues, which we now describe.
#### First problem: Including the gravitational degrees of freedom
When the gravitational degrees of freedom are turned on we face a paradox, because the light sheets $`ℒ`$ on which the entropy is measured depend for their definition on the actual values of the gravitational degrees of freedom. This dependence is not weak or gradual, as the positions of the singularities of $`\sigma `$, and hence the locations of the boundaries that define $`ℒ`$, depend non-linearly on the values of the gravitational degrees of freedom. Thus, even in classical general relativity, it is difficult to know what would be meant by the null entropy bound once the gravitational degrees of freedom are turned on.
The point may be put the following way. Consider a fixed spacetime $`(ℳ,g_{ab})`$ with a Cauchy surface $`\mathrm{\Sigma }`$, in which is embedded a screen $`𝒮`$ with a future light sheet $`ℒ`$. $`ℒ`$ is then to the future of $`\mathrm{\Sigma }`$. Now consider a one parameter family of metrics $`g_{ab}^s`$, with $`g_{ab}^0=g_{ab}`$, which for $`s\ne 0`$ differ from $`g_{ab}`$ in a region $`𝒱`$ which is to the causal future of a region $`ℛ`$ of $`\mathrm{\Sigma }`$ not containing $`𝒮`$. For each $`s`$ one can identify the light surface formed by the future null congruence $`L(s)`$ from $`𝒮`$, such that $`L(0)`$ contains $`ℒ`$. In fact, for each $`s`$ there will be a light sheet $`ℒ(s)\subset L(s)`$. Using this one can identify for each $`s`$ a region $`U(s)=L(s)\cap 𝒱`$. Let us pick $`ℛ`$ such that at $`s=0`$, $`U(0)\subset ℒ(0)`$.
Now, the notion of a light sheet would be preserved under variations in the gravitational degrees of freedom were it the case that for all $`s`$, and all such one parameter families, $`U(s)\subset ℒ(s)`$. However, it is easy to see that this is not the case, for one can always find one parameter families $`g_{ab}^s`$, specified by initial data in $`ℛ`$, such that for a finite $`s`$ not all of $`U(s)`$ will be in the light sheet $`ℒ(s)`$. The reason is that the gravitational radiation will induce caustics to form in $`U(s)`$, causing $`\theta `$ to become positive on some part of $`U(s)`$.
This means that there is no definition of a light sheet which is independent of the initial data in a region $`ℛ`$ of a Cauchy surface containing a screen $`𝒮`$, even if $`ℛ`$ does not contain $`𝒮`$. The null entropy bound may hold for each light surface $`ℒ(s)`$ in each spacetime $`g_{ab}^s`$, but there is no extension of the result which holds on the space of solutions or initial data of a spacetime, even when the degrees of freedom are restricted to vary in regions that do not include the screen.
What this means is that one cannot extend Bousso’s bound to include the gravitational degrees of freedom in any way which involves defining the entropy of the gravitational degrees of freedom in terms of a statistical ensemble of states or histories.
But if this is the case, then it is hard to see how there could be an extension of the null entropy bound either to quantum gravity, in which there is no fixed classical spacetime, or to supergravity, in which there cannot be an invariant distinction between gravitational and matter degrees of freedom.
#### Second problem: measurability
Another aspect of the problem just discussed is that once the gravitational degrees of freedom are turned on, either classically or quantum mechanically, whether a particular null surface $`L`$ is a lightsheet of some screen or not depends on the values of the degrees of freedom. This means that there are three choices: I) find a formulation of a cosmological entropy bound that does not require the identification of a lightsheet on which $`\theta \le 0`$; II) try to formulate the condition as an operator equation; or III) try to formulate it in terms of expectation values.
The first possibility leads us to the weak entropy bounds, in which the lightsheet plays no role. In case II one must find a set of commuting operators in quantum gravity which are sufficient to define the notion of a light sheet and apply it to their eigenvalues. We must then ask whether the uncertainty principles which arise from the commutation relations of a quantum theory of gravity allow the simultaneous measurement of quantities that must be known to apply the bound. To investigate this we shall assume the standard equal time commutation relations
$$[A_a^i(x,t),\stackrel{~}{E}_j^b(y,t)]=\delta ^3(x,y)\delta _a^b\delta _j^i$$
(33)
where $`A_a^i`$ is the self-dual connection and $`\stackrel{~}{E}_j^b`$ is the dual of the pull back of the self-dual two-form $`\mathrm{\Sigma }`$ to any spacelike surface (and all other commutators vanish). We may note that these hold in a large class of theories, including all the extended supergravities, as they arise from the generic form
$$I=\int _ℳ\mathrm{\Sigma }^{AB}\wedge \dot{A}_{AB}+\mathrm{\cdots }$$
(34)
It is difficult to imagine that these do not hold in the effective field theory which is the low energy limit of string theory or whatever the true quantum theory of gravity is.
It is not hard to show that,
* If $`\theta ^\pm (s)`$ is the expansion of the future- and past-going null geodesics normal to a two surface $`𝒮`$ at a point $`s\in 𝒮`$, and $`A[𝒮]`$ is the area of $`𝒮`$, then
$$[\theta ^\pm (s),A[𝒮]]=0$$
(35)
* Let $`(s,u)`$ be coordinates on a null surface $`L`$ generated by the null geodesics leaving $`𝒮`$ orthogonally, where $`s`$ labels a congruence of null geodesics and $`u`$ is an affine parameter along each geodesic such that $`𝒮`$ is defined by $`u=0`$. If $`\theta ^\pm (s,u)`$ is the expansion of the congruence at $`(s,u)`$ then, for $`u\ne 0`$, we have
$$[\theta ^\pm (s,u),A[𝒮]]\ne 0$$
(36)
$$[\theta ^\pm (s,u),\theta ^\pm (s)]\ne 0$$
(37)
Thus, there does exist a basis which comes from simultaneously diagonalizing the expansion of the congruence of null rays at a screen, and its area. But in such a basis the operators $`\theta (s,u)`$ off the screen are not diagonal. Thus, we cannot identify the outer boundary of the lightsheet in any basis in which we can identify a screen and measure its area. This suggests that there cannot exist an operator form of the null entropy bound.
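A heuristic way to see (36) and (37), using only the standard null Raychaudhuri equation (this gloss is ours, not part of the original argument): integrating the focusing equation along the congruence from the screen gives

$$\theta ^\pm (s,u)=\theta ^\pm (s)-\int _0^udv\left[\frac{1}{2}(\theta ^\pm )^2+\sigma _{ab}\sigma ^{ab}+R_{ab}l^al^b\right](s,v)$$

for a hypersurface-orthogonal congruence with null tangent $`l^a`$. For $`u\ne 0`$ the right hand side involves the Ricci curvature, and hence the metric degrees of freedom, evaluated off the screen; these fail to commute with the operators $`A[𝒮]`$ and $`\theta ^\pm (s)`$, which are built purely from data at $`u=0`$.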
The remaining possibility is to formulate the condition for a screen in terms of expectation values. Presumably this should be possible in the semiclassical limit, as otherwise the null entropy bound could not be true in that limit. I am not aware of any proposal to implement it beyond that limit. One way to see why this is unlikely to work is to discuss how it would have to work in a path integral formulation of the theory.
Let us first note that to be relevant to the null entropy bound, a histories formulation will have to be formulated in terms of causal histories, as there are no analogues of light sheets in Euclidean metrics. Fortunately, there are now non-trivial proposals and results concerning formulations of quantum gravity in terms of causal, Lorentzian path integrals. In such a causal histories formulation, each history in the sum over histories comes with its own causal structure. The problems we are discussing can then be stated as follows: it may be possible to consistently pick out a set of histories in which the area of a preferred family of surfaces and the expansions of null geodesics at those surfaces are given and fixed. But in these histories the properties of the light surfaces generated by following null geodesics from the screens cannot be controlled, and will fluctuate as the sum over histories is taken. Since caustics and other singularities are generic for such surfaces, the observer at the screen will be unable to control or measure any variables at the screen to prevent the formation of singularities in the light surfaces of the histories.
Another way to say this is that the degrees of freedom of the light surface include components of the metric and connection on the light surface itself. Singularities and caustics will form for generic values of these parameters, but where they form varies as the degrees of freedom fluctuate. One cannot then consistently count the number of degrees of freedom on the non-singular part of the light surface; this involves a logical contradiction, since the presence and location of the singularities themselves depend on the values of the fluctuating degrees of freedom.
Before closing this discussion, we note that our argument does not exclude one radical possibility, which is that there is an entropy associated with single classical configurations of the gravitational field. This has been suggested by Penrose, and there are some old arguments for it, which rely on the impossibility of either building a container to constrain gravitational radiation, inducing a gravitational ultraviolet catastrophe, or measuring the pure state of gravitational radiation. In the present context this would suggest that $`\int _ℒ\sigma ^{ab}\sigma _{ab}`$, where $`\sigma `$ is the shear of the null congruence, might be taken as a measure of the gravitational entropy on a lightsheet $`ℒ`$ in a single classical spacetime.
This is an intriguing possibility, which deserves investigation. It does not affect the following considerations, as the difficulties we find with a null form of the holographic principle would not be lessened.
## Part II HOLOGRAPHIC PRINCIPLES
A holographic principle is meant to be a formulation of the dynamics of a quantum theory in terms of its screen, or boundary, Hilbert spaces. A holographic principle requires some form of an entropy bound, but it also requires that the dynamics of the theory can be formulated entirely in terms of the degrees of freedom measurable on the screen.
There are different kinds of holographic principles, corresponding to the different possible kinds of entropy bounds. We consider in turn, strong, null and weak forms of the holographic principle.
### 8 The strong holographic principle
The classic formulation of the strong holographic principle is meant to apply to the case we discussed in section 2. We have a quantum or classical spacetime, $`ℳ`$, with a boundary $`\partial ℳ=R\times 𝒮`$, where $`R`$ corresponds to the time coordinate. One then postulates boundary and bulk algebras of observables, $`𝒜_S`$ and $`𝒜_{bulk}`$, and Hilbert spaces $`ℋ_S`$ and $`ℋ_{bulk}`$, as in section 2. In addition one specifies on each Hilbert space a Hermitian Hamiltonian, $`h_𝒮`$ and $`h_{bulk}`$. In a gravitational theory this may require the specification of gauge conditions on the boundary; in this case there is a family of bulk and boundary Hamiltonians which depend on the gauge conditions. The principle is then formulated as follows
* Strong holographic principle There is an isomorphism
$$ℐ:ℋ_{bulk}\to ℋ_S$$
(38)
such that $`ℐh_{bulk}ℐ^{-1}=h_𝒮`$.
Can this principle be satisfied? There are two cases: gravitational theories and non-gravitational theories.
#### Failure of the strong holographic principle in gravitational theories
By a gravitational theory we mean here theories in which the gravitational degrees of freedom are allowed to fluctuate, so that any principle must hold for all the possible initial data that the theory allows. This will be specified classically or semiclassically by initial data on $`\mathrm{\Sigma }`$, or quantum mechanically by the specification of a state in $`ℋ_{bulk}`$. We do not mean quantum field theory on a particular fixed spacetime metric $`g_{ab}`$ on $`ℳ`$.
There is some evidence that the strong holographic principle may hold in a quantum theory of gravity. One piece of evidence comes from canonical quantum general relativity, with a non-zero cosmological constant, and certain boundary conditions, called the Chern-Simons boundary conditions. In this case the boundary Hilbert space is found to be of the form
$$ℋ_𝒮=\underset{a}{\oplus }ℋ_𝒮^a$$
(39)
where $`a`$ is an eigenvalue of the area operator, which is known to have a discrete spectrum. Each of the eigenspaces $`ℋ_𝒮^a`$ is a space of $`SU_q(2)`$ intertwiners on a punctured $`S^2`$, where the labels on the punctures, taken from the representations of $`SU_q(2)`$, are related to the area. The level $`k`$ is related to the cosmological constant by $`k=6\pi /(G^2\mathrm{\Lambda })`$. The dimension of the space of intertwiners does satisfy eq. (6), with a renormalization of Newton’s constant defined by $`G_{ren}=cG_{bare}`$, with $`c=\sqrt{3}/\mathrm{ln}(2)`$. Thus, it is clear that the weak Bekenstein bound is satisfied.
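As a minimal illustration of how the logarithm of the dimension of an intertwiner space scales with the number of punctures, and hence with the area, one can count classical $`SU(2)`$ intertwiners, ignoring the quantum group truncation at level $`k`$; this toy counting is our own gloss, not part of the cited construction. For $`N`$ spin-$`1/2`$ punctures the dimension of the invariant subspace is the Catalan number $`C_{N/2}`$, so $`\mathrm{ln}Dim`$ approaches $`N\mathrm{ln}2`$, one $`\mathrm{ln}2`$ per puncture, which is the kind of counting that fixes the renormalization constant $`c`$.

```python
from math import comb, log

def intertwiner_dim(n_punctures):
    """Dimension of the SU(2)-invariant subspace of n spin-1/2 punctures
    (n even): the Catalan number C_{n/2}."""
    m = n_punctures // 2
    return comb(2 * m, m) // (m + 1)

for n in (4, 16, 64, 256):
    d = intertwiner_dim(n)
    print(n, d, log(d) / (n * log(2)))  # ratio tends (slowly) to 1
```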
There are also some results concerning the bulk Hilbert space. Before the Hamiltonian constraint is imposed, an infinite dimensional space of bulk states can be identified for each $`a`$, which has an orthonormal basis given by the distinct embeddings (up to diffeomorphisms) of quantum spin networks in the bulk, whose edges meet the boundary at the punctures. One can then show that there is, for each set of punctures, a finite dimensional space of solutions to the Hamiltonian constraint which is isomorphic to the corresponding boundary Hilbert space. These are constructed by modding out the infinite dimensional kinematical Hilbert spaces by a set of equivalence relations which generate the recoupling identities of quantum spin networks. It is known that for a certain class of states these recoupling identities realize the action of the Hamiltonian constraint.
In this case the strong entropy assumption then comes down to the conjecture that these provide a complete set of solutions to the Hamiltonian constraint in the bulk. There is presently no evidence either way on the correctness of this conjecture. It is attractive to argue that, if it is true, we have quantum general relativity in this particular case expressed in closed form as a theory which satisfies the strong holographic principle. We may also note that these results all hold for both the Euclidean and Lorentzian cases, as well as for supergravity.
However, if we accept the conclusion of the previous arguments, then this conjecture must in fact be false. We have found instead that for the case of a gravitational theory the strong holographic principle cannot hold unless the boundary of the spacetime has either infinite or indeterminate area. The reason is that, as we have shown, the strong entropy assumption and the strong forms of the Bekenstein bound and the cosmological entropy bounds all fail. As a result we cannot assume that the bulk and boundary Hilbert spaces have the same dimension. However, we have also shown that the weak Bekenstein and weak cosmological entropy bounds hold, which means that $`ℋ_𝒮`$ is finite dimensional or, generically, is composed of finite dimensional subspaces, which are the diagonal sectors of the area $`A[𝒮]`$.
Since no bound has been found to hold which restricts the dimension of $`ℋ_{bulk}`$, there are two possibilities: either it is infinite dimensional, or it does not exist. In the latter case there is nothing for an isomorphism to map the boundary state space to. If it is infinite dimensional, it could only be mapped to the state space of a boundary which has either indeterminate or infinite area. Thus we conclude that a form of the strong holographic principle could only hold in those cases.
Let us now consider the case of indeterminate area more closely. Consider the Schroedinger picture operators, defined in terms of the time at the boundary. It is clear that by causality $`\widehat{A}[𝒮]`$, the operator that measures the area of the boundary, must commute with $`h_{bulk}`$, as the latter is a function only of degrees of freedom in the bulk of $`\mathrm{\Sigma }`$ which are causally unrelated to degrees of freedom on $`𝒮`$. Another way to say this is that we must be able to move the boundary locally, thus changing its area, without affecting the physics in regions of the bulk causally disconnected from the events of moving the boundary. Since $`[\widehat{A}[𝒮],h_{bulk}]=0`$, we must be able to construct projection operators in the bulk, $`\widehat{P}_a`$, corresponding to every eigenvalue $`a`$ in the spectrum of $`\widehat{A}[𝒮]`$, and define the restricted Hamiltonian $`h_{bulk}^a=\widehat{P}_ah_{bulk}\widehat{P}_a`$. By causality it must then be the case that the strong holographic principle works between the corresponding subspaces $`ℋ_𝒮^a`$ and $`ℋ_{bulk}^a`$ for each value of $`a`$.
But now we can apply the argument for the case of a finite area boundary. Since there is no bound on the dimension of $`ℋ_{bulk}^a`$, it must have infinite dimension, and thus it cannot be isomorphic to $`ℋ_𝒮^a`$.
This leaves only the case that the boundary has infinite area. But in this case there cannot be a cosmological version of the principle, as generic spatial regions in generic cosmological solutions have finite area boundaries. Thus, at best, the strong holographic principle could only apply to the case of non-compact spacetimes with boundary.
This may be satisfactory, but it comes with a price: we will not be able to apply the principle to any case in which the boundary is moved inside the non-compact spacetime to coincide with a finite area surface. This means that for any such surface, labeled again by its area $`a`$, the holographic correspondence (38) will map all but a finite dimensional subspace of $`ℋ_{bulk}^a`$ to degrees of freedom that are contained within $`ℋ_𝒮^{\mathrm{\infty }}`$, but are not representable within $`ℋ_𝒮^a`$. This must hold for any finite $`a`$. It means that for no finite $`a`$ can there be any correspondence between $`ℋ_{bulk}^a`$ and $`ℋ_𝒮^a`$, as almost all of the information in the former is not representable in the latter. This is counterintuitive: it means that no matter how far we move the boundary out, the representation space of the boundary observables does not capture most of the information about the bulk observables, so long as the area of the boundary is finite.
This would be very disappointing; what it really means is that going to an infinite boundary cannot save the situation, once it is realized that for any finite area boundary there can be no holographic isomorphism (38). Thus, we conclude that there cannot be an implementation of the strong holographic principle in a gravitational theory.
#### Realization of the strong holographic principle in non-gravitational theories
It is surprising, and striking, that in spite of its failure for gravitational theories, there are realizations of the strong holographic principle for non-gravitational theories. These occur in a special case, namely Anti-DeSitter backgrounds in $`D+1`$ dimensions. In these spacetimes the asymptotic boundary is timelike and is in fact conformally compactified Minkowski spacetime ($`CM`$) in $`D`$ dimensions. The existence of such a correspondence was conjectured first by Maldacena in a string related argument, but has since been shown to hold quite generally for non-gravitational theories on AdS backgrounds. There is in fact a rigorous theorem in axiomatic quantum field theory that gives such an isomorphism for generic field theories on $`AdS`$ spacetimes.
The reason behind this correspondence is clearly that $`SO(D,2)`$ acts as the symmetry group on $`AdS_{D+1}`$ and as the conformal symmetry group of $`CM_D`$. As a result one can establish an isomorphism (38) for general quantum field theories on $`AdS`$ backgrounds.
We can also see why the argument given just above is superseded in the $`AdS`$ case. A key fact is that $`AdS`$ spacetime has no Cauchy surface. The reason is that the evolution in the bulk requires the specification of data on the timelike asymptotic boundary of the spacetime. If the boundary fields are not specified, there is no deterministic evolution for the bulk degrees of freedom. As a result the boundary degrees of freedom are part of a complete specification of the dynamics of the bulk theory. This makes it less surprising that the dynamics can be reduced to a description of boundary degrees of freedom; in this case the bulk to boundary map is plausibly a reduction to the data necessary to determine a solution, and may play a role similar to that of the map that relates a solution to initial data in spacetimes with Cauchy surfaces. This is very different from what happens in asymptotically flat spacetimes, in which the only quantities measurable at spatial infinity are a finite set of conserved quantities.
The key question is then whether there are conformal quantum field theories for general $`D`$ on $`CM_D`$. There are certainly free field theories, for which the correspondence holds. It may then be expected to hold also for interacting theories in those special cases in which there is a conformal quantum field theory on $`CM_D`$. There is evidence that one such case is $`N=4`$ supersymmetric Yang-Mills theory for $`D=4`$. By the general arguments of , one would expect there to exist on $`AdS`$ a supersymmetric theory whose spectrum transforms under a supersymmetric extension of $`SO(4,2)`$ with 16 supercharges. There is a great deal of evidence that this is the case, at least in the limit $`N\to \mathrm{\infty }`$. In this case the theory appears to be the weak coupling limit of supergravity compactified on an $`AdS_5\times S^5`$ background.
#### The big question
We then have the following question. We have argued that the strong holographic principle cannot hold in a gravitational theory. It can hold in a quantum field theory on a fixed background, and indeed in the particular case of $`AdS`$ spacetimes it seems to be a generic feature. But it should be expected to break down as soon as the gravitational degrees of freedom are turned on<sup>9</sup><sup>9</sup>9The same questions can also be asked in the $`2+1`$ dimensional case, where the $`AdS/CFT`$ correspondence has also been worked out. However, given that quantum gravity and supergravity in $`2+1`$ dimensions are topological quantum field theories, there may be little to learn from this case that is generally useful. TQFT’s are by definition theories whose observables and states are defined on boundaries of the spacetime.. More precisely, we expect our first two counterexamples to arise as soon as either gravitational collapse or inflation could occur in regions of the bulk.
At the same time, in the particular case of $`AdS_5\times S^5`$ the isomorphism seems to exist and the bulk theory is then the weak field limit of a gravitational theory. What then happens when the gravitational constant is turned up, so that the gravitational degrees of freedom are excited? This is the big question. There seem to be three possibilities:
* 1 Something is wrong with the above reasoning, at least in the case of supersymmetric theories.
* 2 In supergravity or string theory on spacetimes which are asymptotically $`AdS_5\times S^5`$ the counterexamples cannot arise. This means that there cannot be small black holes that form from gravitational collapse and there cannot be any possibility of choosing the initial conditions in the interior so as to drive the theory into an inflating phase.
* 3 The correspondence holds at the level of the background dependent quantum field theory defined on $`AdS_5\times S^5`$ by the weak coupling limit of supergravity or string theory, but breaks down as soon as the gravitational constant or the initial data is large enough that strong gravitational fields can arise.
The first possibility is of course always there; this is why we have been very careful to keep track of the logic leading to the conclusion that the strong entropy assumption, strong Bekenstein bound and strong cosmological entropy bounds must all be false in gravitational theories. If there is an error, it must be either in the reasoning or in the unexpected failure of one of the other assumptions listed in section 5.
The second possibility seems unlikely, and in any case were it true it would mean that this particular case is non-generic in ways that suggest it is not a very good example of a quantum theory of gravity.
We must then ask if there are any results that contradict the third possibility. As of this writing, all of the results found which support the conjecture in the $`AdS_5\times S^5`$ case relate boundary observables of supergravity on that background to expectation values of the supersymmetric Yang-Mills theory. While the construction of representatives in the Yang-Mills theory of observables of the bulk theory has been discussed, there is so far no calculation which gives a non-trivial test of these correspondences. It is also the case that most, if not all, of the calculations of $`N`$-point functions which support the conjecture are in any case forced by the action of the supersymmetry group. It then seems that, even if it disagrees with some interpretation of the conjectured correspondence, there are no actual results which so far contradict possibility 3.
There is a final remark which is consistent with this third possibility, which is the following. Gravitationally bound systems, including black holes, generically have negative specific heat. However, the positivity of the specific heat for an equilibrium ensemble is guaranteed for any system defined by a partition function. In particular, the thermal quantum field theory obtained by raising the temperature in the $`N=4`$ supersymmetric Yang-Mills theory is defined by a partition function. Therefore all equilibrium configurations will have positive specific heat.
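The one-line argument behind this claim is standard statistical mechanics, recalled here for completeness: for any system with partition function $`Z(\beta )`$, the specific heat is an energy variance,

$$C=\frac{\partial \langle E\rangle }{\partial T}=\frac{\langle E^2\rangle -\langle E\rangle ^2}{k_BT^2}\ge 0,$$

and a variance can never be negative in any equilibrium state defined by $`Z`$.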
We can then ask how configurations such as a system of planets or small black holes in the bulk of the AdS spacetime are to be represented in terms of states of the $`N=4`$ supersymmetric Yang-Mills theory. There are two possibilities: either this can be done, but involves configurations that are sufficiently far from equilibrium in the Yang-Mills theory that they cannot be described by a partition function; or the correspondence breaks down as soon as gravitationally bound states of the bulk theory arise whose statistical ensembles have negative specific heat.
As a final remark, we note that $`S`$-duality is still a conjecture outside of the $`BPS`$ sector of either $`N=4`$ super-Yang-Mills theory or string theory. Thus, if the isomorphism (38) fails beyond the $`BPS`$ sector, there is nothing that constrains $`S`$-duality to hold on both sides of it.
#### What about the black hole information paradox?
One reason that the strong holographic principle has been advocated by some people is that it guarantees a solution to the black hole information paradox. Thus, one can wonder if there is an independent argument for the strong holographic principle which follows from the possibility that it is necessary to give a consistent resolution of the black hole information paradox.
The answer is negative, because a large part of the black hole information paradox depends on the strong entropy assumption, which we have found is false. Once it is realized that the strong entropy assumption is false, there is no reason to presume that the amount of information measurable by observers in the interior of the black hole horizon is constrained by the black hole's horizon area. One can then imagine that an arbitrarily large amount of information may be stored in the region to the future of the horizon, independent of its surface area.
One very plausible scenario, which is supported by several semiclassical calculations, is that there is a bounce as the collapsing star nears what would be the classical singularity, leading to the formation of a new expanding region of spacetime which could contain an arbitrarily large amount of information (measured from the point of view of internal observers). Given the possibility of making a transition back to an inflationary phase, this region could resemble our universe.
What then happens when the horizon evaporates? In such a case there is no real spacetime singularity, and there is correspondingly no need for an event horizon. This means that attempts to construct a paradox by making small perturbations to the usual black hole global structure, which do not eliminate the singularity, are likely of no relevance to the real physical problem. There will be an apparent horizon, and under evaporation it will shrink to a size at which quantum fluctuations of the gravitational field will be significant. At this point one will have a small wormhole linking our spacetime to the origin of a large inflating region. Most of the information that went into the black hole will be trapped in the new region, but there will be no local violation of any physical principle. This does not mean that there cannot be global unitary evolution in the whole spacetime, but only that not all measurements made in the interior of the bulk can be communicated to null infinity.
Is this kind of scenario plausible? This is one of the key questions as we investigate what form a holographic principle could take in a gravitational theory, in which only the weak and null cosmological entropy bounds survive.
### 9 The null holographic principle
If we give up on the possibility of a strong holographic principle that could hold in either a gravitational or cosmological theory, we are forced back to the next strongest possibility, which is to construct a form of the holographic principle which would extend the null form of the cosmological entropy bound proposed by Bousso. Since that bound is only known to hold at the semiclassical level in a fixed cosmological spacetime $`(ℳ,g_{ab})`$, let us ask what form such a null holographic principle would have to take in this case.
The problem is clearly to find a collection of light sheets that cover the spacetime so that the evolution of matter fields may be described in terms of them. What is needed can be defined as follows.
* A classical spacetime $`(ℳ,g_{ab})`$ has a single null holographic structure if there exists a one parameter (continuous or discrete) family of screens $`𝒮(t)`$ with a corresponding one parameter family of light sheets $`ℒ(t)`$ (each possibly made by joining two lightsheets of $`𝒮(t)`$), such that for any two times $`s`$ and $`t`$, the classical or quantum state of the matter on $`ℒ(s)`$ is completely determined by that on $`ℒ(t)`$. In the quantum mechanical case, this means there is a one parameter family of Hilbert spaces, $`ℋ(s)`$, which satisfies the bounds
$$dimℋ(t)\le e^{A[𝒮(t)]/4G\hbar }$$
(40)
such that there is for each $`s`$ and $`t`$ a unitary operator
$$U(s,t)ℋ(t)=ℋ(s)$$
(41)
This is the minimal requirement, if there is going to be a representation of the quantum dynamics of matter in the spacetime $`(,g_{ab})`$ that captures the basic principles of ordinary quantum mechanics.
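For orientation, the bound (40) is astronomically large for any macroscopic screen. A quick numerical check (a screen of one square centimeter is an arbitrary illustrative choice, and factors of $`c`$ are restored):

```python
# Upper bound on ln dim H from eq. (40): A / (4 G hbar / c^3) = A / (4 l_Planck^2)
G = 6.674e-11     # m^3 kg^-1 s^-2
hbar = 1.055e-34  # J s
c = 2.998e8       # m / s
l_p2 = G * hbar / c**3       # Planck length squared, ~2.6e-70 m^2
A = 1e-4                     # a 1 cm^2 screen, in m^2
print(f"ln dim H <= {A / (4 * l_p2):.2e}")  # ~ 9.6e64
```

So the constraint says nothing in laboratory situations; its content lies entirely in the Planck-scale counting of states.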
The problem is that such a structure does not exist for generic spacetimes $`(ℳ,g_{ab})`$. By (40) and (41) we see that all the screens in the family must have the same area, since otherwise their Hilbert spaces could not be unitarily equivalent. The problem is that in generic spacetimes the lightsheets of any single screen will not cover the complete future or past of any Cauchy surface, because the lightsheets are compact and of limited extent. This is in fact the whole point of Bousso’s bound. Consequently, given any two screens $`𝒮(s)`$ and $`𝒮(t)`$, it will almost never happen that the corresponding light sheets $`ℒ(s)`$ and $`ℒ(t)`$ form a complete pair. By a complete pair is meant a pair of non-timelike surfaces such that $`ℒ(s)`$ is within the causal future of $`ℒ(t)`$ and is complete, in that no event can be added to $`ℒ(s)`$ which is also in the causal future of $`ℒ(t)`$ and acausal to $`ℒ(s)`$; the same must be true reversing $`s`$ and $`t`$ and past and future.
It is only between complete pairs that one can expect to find deterministic evolution in either a classical or quantum theory on a fixed spacetime.
In a few very special cases involving highly symmetric spacetimes, one can find such a single null holographic structure. But these are special cases in which the symmetry allows the lightsheets to be complete futures of Cauchy surfaces. One can say that in highly symmetric spacetimes such as Minkowski or DeSitter spacetime, complete lightsheets can exist because, by the symmetry, there is so little information for an observer in the spacetime to measure. Once any inhomogeneity is turned on, we expect that the light surfaces will contract to finite regions, and any two will be very unlikely to make a complete pair. But what is required generically is not only that we have a family of light surfaces any two of which make a complete pair; in addition, the screens of those light surfaces must all have the same area. There is no reason to believe these conditions can be satisfied in a generic spacetime.
Can we weaken the condition? We can, if we give up the idea that there is a one parameter family of light surfaces, each of which has a Hilbert space, all of which are unitarily equivalent. This idea conserves the structure of ordinary quantum mechanics, in which there is a single Hilbert space on which evolution is unitarily implemented. However, it is clearly ruled out.
For a generic spacetime it is clear that the lightsheets of no screen will be complete in the future or past of any Cauchy surface. In this case, if we want a description in terms of screens, we must allow the possibility that a complete description of the system will generally require more than one screen, representing information available to different local observers in the spacetime. This means that a complete holographic description of a quantum field theory in a cosmological spacetime will generically involve multiple Hilbert spaces, each of which represents information available to observers at different screens. Time evolution must then be represented in terms of maps between density matrices in these Hilbert spaces. Unitary evolution will only be possible for pairs of such Hilbert spaces that describe causal domains that form complete pairs.
Is such a multiple Hilbert space description of a quantum theory in a cosmological spacetime possible? In fact, exactly such a structure was proposed in , under the name of quantum causal histories. It arose from an independent line of thought, coming from attempts to take seriously the limitations on the algebra of observables imposed by the causal structure of relativistic cosmological theories. As we showed there, this structure does admit a formulation of a weak holographic principle.
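To make the bookkeeping concrete, the following is a deliberately minimal sketch of such a multi-Hilbert-space structure: events carrying finite dimensional state spaces, and completely positive, trace preserving maps carrying density matrices between causally related events. All names here are our own invention, and the sketch only illustrates the data involved, not the actual definitions of the cited proposal.

```python
import numpy as np

class QuantumCausalHistory:
    """Toy model: events with finite-dimensional Hilbert spaces, and
    Kraus-form CPTP maps along the causal relations between events."""

    def __init__(self):
        self.dims = {}   # event label -> Hilbert-space dimension
        self.maps = {}   # (source, target) -> list of Kraus operators

    def add_event(self, label, dim):
        self.dims[label] = dim

    def add_causal_map(self, src, tgt, kraus_ops):
        # Trace preservation: sum_k K^dagger K = identity on the source space.
        total = sum(K.conj().T @ K for K in kraus_ops)
        assert np.allclose(total, np.eye(self.dims[src]))
        self.maps[(src, tgt)] = kraus_ops

    def evolve(self, src, tgt, rho):
        # Carry a density matrix at src to a density matrix at tgt.
        return sum(K @ rho @ K.conj().T for K in self.maps[(src, tgt)])

# Unitary evolution is the special case of a single unitary Kraus operator,
# available only between events that form a complete pair.
qch = QuantumCausalHistory()
qch.add_event("p", 2)
qch.add_event("q", 2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
qch.add_causal_map("p", "q", [H])
rho = np.array([[1, 0], [0, 0]], dtype=complex)
print(qch.evolve("p", "q", rho))   # a pure state carried unitarily to q
```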
### 10 Is every two surface a screen?
In some approaches to the cosmological holographic principle, screens are two-surfaces satisfying special conditions. Such conditions are also used to distinguish which side of a two-surface may be a screen; for example, for screens in normal regions in Bousso’s approach, only one side of a surface will in general be a screen.
It is then important to ask whether any such conditions may be imposed in the case of quantum cosmology. There seems to be a problem with each of the possible conditions that have been offered at the semiclassical level.
* As there are no asymptotic regions, and no boundaries to a cosmological spacetime, there are no global event horizons. Generic spacetimes do not contain any single null holographic structures, which means that the information measurable on any screen cannot be used to completely specify the state of a classical or quantum cosmology, and a holographic principle cannot be formulated in terms of any single one parameter family of screens.
* The operator that measures the convergence of null rays at a surface does commute with the operator that measures its area. This could be used in a quantum theory of gravity to distinguish the two sides of a screen. However, this does not have the same implications in the quantum theory, because the local positive energy conditions on the energy momentum tensor do not hold even at the semiclassical level. Because of this, null rays may diverge after beginning to converge, and trapped surfaces cannot be distinguished by any local conditions. Furthermore, the operator that measures the convergence of a null ray a finite distance from the surface does not commute with its convergence on the surface. This means that in a quantum theory one cannot apply the tests we use in the classical theory to pick out a lightsheet. Consequently there seems to be no reason in the quantum theory to choose one side of a screen over another.
* No condition can be imposed having to do with the volume of a spacelike region bounded by a screen, for the volume of a region is measured by a quantum mechanical operator. Generic states will be superpositions of eigenstates of the volume operator for any region. One thus cannot require that the side of a two-surface which encloses the smallest volume is a screen.
* As we see from several of the counterexamples, there is no paradox in considering both sides of a surface to be a screen, so long as one understands the entropy bound weakly, so that it applies only to information gained by making measurements of fields at the surface, which may or may not allow deductions to be made concerning the density matrix or state to the causal past of the surface.
If there is no criterion which can be applied in a quantum theory of cosmology to pick out which surfaces are screens, or to pick one of the two sides of a two-surface to serve as a screen, then we must conclude that every two-surface may be a screen, and the opposite side of any screen may also be a screen. In the quantum theory one may still make observations on a screen, but one will not in general be allowed to deduce anything about the extent to which those observations allow a complete description of the physics on a finite lightsheet. Since that was the reason to prefer one screen over another, the conclusion is that in the quantum theory, if a screen is a useful concept, then all two-surfaces may be screens.
This conclusion will play an important role in the weak holographic principle, because it means that in a quantum theory we may use the properties of a screen as a place where measurements may be made to constrain, or even define, its geometrical properties, rather than the reverse, which is what we do in the semiclassical theory.
### 11 Conclusions reached so far
To motivate the weak form of the holographic principle we summarize the results of the argument so far.
* The strong entropy conjecture is apparently false, which means that the weak, rather than the strong, version of the Bekenstein bound is true.
* The strong cosmological entropy bound is false.
* The null cosmological entropy bound cannot be formulated in a quantum theory of gravity once the gravitational degrees of freedom are turned on, at least in the conventional terms in which entropy is related to the lack of purity of density matrices.
* The weak cosmological entropy bound may be satisfied in a quantum theory of gravity. This is formulated as a relationship between the information capacity of a screen $`𝒮`$, as measured by the dimension of the Hilbert space $`ℋ_𝒮`$ which provides the smallest faithful representation of the algebra of observables $`𝒜_𝒮`$ on the screen, and its area $`A[𝒮]`$.
* From the wiggly surface problem we learn that the appropriate measure of the area of a screen is not the area of $`𝒮`$. Instead, the amount of information that can be stored on any screen, $`𝒮`$ is bounded by the minimal area of the cross sections of congruences of light rays that intersect $`𝒮`$. This means that a causal structure is required in order to make sense of a holographic bound in a quantum cosmological theory.
* The information coded on a screen $`𝒮`$ then concerns its causal past. But it then follows that in most histories there will be no single screen on which a complete description of the universe may be coded, for there will, in the classical limit, be generally no spacelike two-surface such that the past of its lightsheets contains a Cauchy surface. (We see this also from the throat and inflationary examples.) From this it follows that a holographic description in a quantum cosmology must involve many screens $`𝒮_i`$, and that the information available at any one screen will almost always be incomplete.
* One implication of this is that the most complete description of the quantum state available on any single $`𝒮_i`$ must be a density matrix $`\rho _i`$ on $`ℋ_i`$. This is because there will in general be quantum correlations that connect measurements made on $`𝒮_i`$ with degrees of freedom that are recorded on other surfaces $`𝒮_j`$.
### 12 The weak holographic principle
A weak form of the holographic principle must be consistent with these conclusions. One possible form is that given in . In somewhat less technical language than that given there, the principle holds that
1. A holographic cosmological theory must be based on a causal history; that is, the events in the quantum spacetime form a partially ordered set under their causal relations.
2. Among the elements of the quantum spacetime, a set of screens can be identified. A screen $`𝒮`$ is a 2-sided object, which means that it consists of a left and a right side, each of which has a distinct past and future, but such that the past right side is to the immediate past of the future left side, and vice versa.
3. Associated to each side of the screen, labeled $`L`$ and $`R`$, are algebras of observables, $`𝒜_𝒮^{L,R}`$, each of which is represented on a finite dimensional Hilbert space $`ℋ_𝒮^{L,R}`$. The observables in $`𝒜_𝒮^{L,R}`$ describe information that an observer at the screen may acquire about the causal past of one side of the screen, by measurements of fields in the immediate past of the left or right side of the screen.
4. $`ℋ_𝒮^L\cong ℋ_𝒮^R`$, which means they have the same dimension.
5. All observables in the theory are operators in the algebra of observables $`𝒜(𝒮)`$ for some screen $`𝒮`$.
6. The area of a screen $`𝒮`$ is defined to be
$$A[𝒮]\equiv 4G\hbar \mathrm{ln}Dim\left(ℋ_𝒮\right)$$
(42)
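As a quick numerical illustration of (42) (our own example, not from the original): a screen whose observable algebra is that of $`N`$ qubits has $`Dim(ℋ_𝒮)=2^N`$, so

$$A[𝒮]=4G\hbar N\mathrm{ln}2\approx 2.77Nl_{Pl}^2,$$

that is, the definition assigns roughly $`2.77`$ Planck areas of screen per qubit of information capacity.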
More discussion of this principle may be found in . Its message is that all observables in a quantum theory of cosmology are associated with two-surfaces, and represent information reaching a surface from its causal past. Besides the logic we have followed here, there are two sets of arguments that might be used to support this hypothesis.
##### Quasi-local quantities in classical general relativity
Even in classical general relativity, it is well understood that diffeomorphism invariance and the equivalence principle forbid the possibility of local definitions of the basic dynamical quantities such as energy, momentum and angular momentum. These kinds of quantities can only be defined in terms of integrals over two dimensional surfaces in the spacetime. When those surfaces are taken to the boundary, in non-cosmological spacetimes, these become the well known asymptotic definitions of energy, momentum and angular momentum. However, even in cosmological spacetimes where there are no boundaries one may define what are called quasi-local observables, in which the energy, momentum and angular momentum of an arbitrary region are defined in terms of certain integrals over its boundary. Since Penrose’s original suggestion many different proposals have been made for such quasi-local observables.
If there are to be non-trivial notions of energy, momentum and angular momentum in a quantum theory of cosmology, then these must be defined so that their classical limits are these quasi-local quantities. The simplest possibility is that the Hamiltonian in quantum gravity should itself be quasi-local, that is, defined on two dimensional surfaces, which in the classical limit become spacelike surfaces embedded in spacetime. This implies some form of the holographic principle, for if the Hamiltonian is associated with surfaces there must be many Hamiltonians, each associated with a different choice of surface, and the same must be true of the algebras of observables and the Hilbert spaces on which they are represented.
##### Relational approaches to quantum cosmology
Another kind of argument for the importance of surface observables in a quantum theory of cosmology was given by Crane, even before the holographic hypothesis of ‘t Hooft and Susskind was proposed. Crane noted the difficulties of defining a coherent measurement theory for a quantum state “of the whole universe” and proposed instead that the division of the universe into two parts (system and observer) that is basic to Bohr and Heisenberg’s measurement theory might be relativised, so that there would be not one quantum state of the universe, but a system of observable algebras and Hilbert spaces, one associated with every possible splitting of the universe into two parts.
To realize this idea, Crane proposed a categorical framework to describe the association of Hilbert spaces with boundaries. This was based on positing functorial relationships between the category of cobordisms of manifolds and the category of Hilbert spaces. These structures are closely related to topological quantum field theory, as those theories can be formulated in such categorical terms. As topological quantum field theories are the only class of field theories that naturally yield finite dimensional Hilbert spaces, one may try to use them to construct examples of holographic theories. Furthermore, as Crane pointed out, it may be possible to extend these structures to quantum theories of gravity because it is a fact that at both the classical and quantum mechanical level, and for any dimension, general relativity and supergravity can be understood as deformed or constrained topological quantum field theories.
Crane’s proposal has been an inspiration for the development of what have been called relational or pluralistic approaches to quantum cosmology. Using the fact that general relativity and supergravity are constrained topological field theories, it has been possible to realize this idea in the context of full formulations of quantum gravity and $`ℳ`$ theory .
An even stronger version of Crane’s argument was proposed recently by Markopoulou, who noted that even in classical general relativity the logic of propositions which can be given truth values by observers in a closed universe is non-Boolean, because each observer can only assert the truth or falsity of propositions about their past. Rather than being a Boolean algebra, the algebra of propositions relevant for a classical cosmological theory is a multivalued Heyting algebra. When quantized, the resulting algebra of projection-like operators cannot be represented on a single Hilbert space; instead, it requires a collection of Hilbert spaces, one for every possible event at which observations are made. As each observer receives information from a distinct past, the algebra of observables they can measure, and hence the Hilbert spaces on which they represent what they observe, must vary<sup>10</sup><sup>10</sup>10Related structures have been studied also by Isham and collaborators , who note that structures built of many Hilbert spaces can be used to formulate the consistent histories proposal precisely.. Given the conclusions reached in the preceding sections of this paper, this framework is then appropriate for a formulation of the weak holographic principle.
### 13 Conclusions
The conclusion of the arguments we have given here is that the holographic bound and holographic principle can only survive in a quantum theory of cosmology in their weak forms, proposed in . While logically weaker, this form is more radical than the strong forms in its implications for how a measurement theory of quantum cosmology must be constructed. First, the weak forms require that causal structure exist even at the Planck scale. This most likely cannot be realized in a conventional formulation of quantum cosmology in which the observables of the theory act on a single Hilbert space containing the physically allowed “wavefunctions of the universe.” Instead, such a description may have to be formulated along the lines proposed in , in which there is a network of Hilbert spaces, each providing a representation for an algebra of observables accessible to a single local observer at an event or a local region of a spacetime history. These will be related to each other by maps which reflect the quantum causal structure.
In such a spacetime, evolution becomes closely intertwined with the flow of quantum information, which also defines the causal structure at the Planck scale. Interactions have to do with the processing of the information at events; as noted in , a quantum spacetime then becomes very much like a quantum computer that can dynamically evolve its circuitry.
It is then difficult to escape the conclusion that the holographic principle, in its weak form, is telling us that nature is fundamentally discrete. The finiteness of the information available per unit area of a surface is to be taken simply as an indication that fundamentally, geometry must turn out to reduce to counting. Of course this conclusion has been reached independently through other arguments coming from quantum gravity and string theory. But, as can be seen most clearly from the argument of Jacobson, the entropy bounds and holographic principle tell us that the description of nature in terms of classical spacetime geometry is not only analogous to the laws of thermodynamics, it must be exactly the thermodynamics of the fundamental discrete theory of spacetime.
What we learn from the analysis of this paper is that in such a theory there is no room for the notion of a bulk theory, and hence no fundamental role for a bulk-boundary correspondence. There is instead a network of screen histories, which describe what possible observers might be able to observe from particular events in their spacetime. By averaging over histories, a bulk description may emerge at the semiclassical level, but only as an approximation in which the past of a particular observer can be described, to first order in a perturbation expansion, in terms of a particular fixed classical history. Thus the proper role of a bulk-to-boundary map may be to serve as a correspondence principle to constrain the classical limit of a background independent quantum theory of gravity.
To put it most simply: the holographic principle is not about a relationship between two sets of concepts, bulk and screen, or geometry and information flow. It is the statement that the former reduce entirely to the latter, in exactly the same sense that thermodynamic quantities reduce to atomic physics. The familiar picture of bulk spacetimes with fields and geometry must emerge in the semiclassical limit, but these concepts can play no role in the fundamental theory.
Can this picture be used to construct a realistic quantum theory of gravity which also addresses the other problems in the subject? As mentioned in , an example of such a theory is provided by a class of background independent membrane theories proposed in . These extend the formalism of loop quantum gravity in such a way as to provide a possible background independent form of string theory. So the answer is a very provisional yes. Much work remains to be done, but the moral is that the holographic principle, in at least its weak form, is likely to feature significantly in both the mathematical language and the measurement theory of the future background independent quantum theory of gravity.
### ACKNOWLEDGEMENTS
I would like to thank Tom Banks, John Barrett, Michael Douglas, Willy Fischler, David Gross, Sameer Gupta, Chris Isham, Jerzy Lewandowski, Yi Ling, Renate Loll, Amanda Peet, Joe Polchinski, Roger Penrose, Carlo Rovelli, Andrew Strominger, Leonard Susskind, Edward Witten and especially Ted Jacobson and Mike Reisenberger for discussions on these issues. I would also like to thank Raphael Bousso for very helpful correspondence and discussions. This paper owes a great deal to conversations over many years with Louis Crane and Fotini Markopoulou, who proposed several of the ideas which are developed here. Opportunities to present this argument to the topos discussion group at Imperial College and a seminar at the faculty for the philosophy of science at Oxford were very helpful in sorting out its fine points. Finally, I am grateful for the hospitality of ITP, Santa Barbara, where this work was begun and of the theoretical physics group at Imperial College, where it was finished. This work was supported by the NSF through grant PHY95-14240 and a gift from the Jesse Phillips Foundation.
# Incoherent optical switching of semiconductor resonator solitons
## Abstract
We demonstrate experimentally the bistable nature of the bright spatial solitons in a semiconductor microresonator and show that they can be created and destroyed by incoherent local optical injection.
Spatial optical solitons, i.e. light beams propagating without transverse spreading, arise when diffraction is balanced by a nonlinear process such as self-focussing in a nonlinear dispersive or reactive medium. Light propagating inside an optical resonator filled with a nonlinear medium can thus form stable filaments, or localized structures (spatial solitons). These are free to move in the resonator cross section (or move by themselves ), which implies their bistability and their ability to carry information. The mobility of spatial solitons, however, makes them different from arrangements of fixed binary elements, so that new types of information processing have been considered, making use of spatial resonator solitons.
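The bistability invoked here can be illustrated with the simplest mean-field description of a driven Kerr cavity. The following is a standard plane-wave toy model, not the semiconductor quantum-well resonator model relevant to the actual experiment, and the detuning value is an arbitrary illustrative choice:

```python
import numpy as np

# Mean-field Kerr-cavity steady state: I_in = I_c * (1 + (theta - I_c)**2),
# with I_c the scaled intracavity intensity and theta the cavity detuning.
# The input-output curve is S-shaped (bistable) once theta > sqrt(3).
theta = 2.5
I_c = np.linspace(0.0, 5.0, 2001)
I_in = I_c * (1.0 + (theta - I_c) ** 2)

# The branch with dI_in/dI_c < 0 is unstable; its I_in range is the interval
# of input intensities for which two stable states coexist.
unstable = np.gradient(I_in, I_c) < 0
print("bistable for I_in between", I_in[unstable].min(), "and", I_in[unstable].max())
```

Within that input range the cavity can rest on either branch, which is the kind of property exploited below to write and erase solitons with an address pulse.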
Early realisations of such resonator solitons in slow materials were given in . We have previously investigated spatial resonator solitons of phase type and of intensity type , including experiments demonstrating large simultaneous collections of solitons and their manipulation, as is required for practical applications. These experiments were conducted using slow nonlinear materials for the sake of easy observability of the complex 2D space-time dynamics. For practical purposes, however, speed is of prime importance and compatibility with semiconductor technology is desirable. Spatial solitons and their switching in semiconductor microresonators have therefore recently been predicted theoretically . With the aim of realizing spatial solitons in semiconductor resonators, experiments were conducted recently, addressing passive resonators and resonators with population inversion . We showed the spontaneous formation of bright and dark spatial solitons in . We confirm here the bistable nature of the bright spatial semiconductor resonator solitons by the results of local switching experiments, and demonstrate the incoherent writing and erasing of the bright solitons.
The experimental arrangement (FIG. 1) was essentially as described in and . Light of a Ti:Al<sub>2</sub>O<sub>3</sub>-laser around 855 nm wavelength illuminates the semiconductor resonator sample. This consists of two Bragg mirrors of about 99.5 $`\%`$ reflectivity and 18 pairs of GaAs/GaAlAs quantum wells between them. The band edge and the exciton line are at 849 nm. Observations are done in reflection because the substrate material (GaAs) is opaque at the working wavelength.
The laser light is modulated by a mechanical chopper to limit illumination to durations of a few $`\mu `$s, in order to avoid thermal nonlinear effects. The repetition rate of the illuminations is 1 kHz, permitting stroboscopic recordings of the dynamics or signal averaging. Part of the laser light is split away from the main beam, with orthogonal polarisation, for local injection into the illuminated sample area. The injection is applied in pulses of several 10 ns duration using an electro-optical modulator (EOM). The light reflected from the sample is imaged onto a CCD camera. For time-resolved observations the reflected light is passed through another EOM (50 ns aperture time), which is opened with a variable delay with respect to the start of the illumination. The 2D intensity can thus be recorded for arbitrary moments during the illumination. Further, the intensity at a particular point can be monitored by a small area photodiode PD. In the observations reported, the intensity of the main beam was chosen so that a bright soliton (dark in reflection) would appear only at the center of the main beam, which has a Gaussian intensity profile with a width of 30 $`\mu `$m. The small area photodiode PD measures the intensity at this point.
FIG. 2 shows the switch-on of a bright soliton. The illuminating intensity rises initially due to the opening of the mechanical chopper. The maximum intensity is below the switching intensity of the bistable resonator. At t $`\approx `$ 3.9 $`\mu `$s the injection beam (orthogonal polarisation with respect to the illumination, width 12 $`\mu `$m) is opened for 70 ns to switch the resonator. During the switch initiated by the injected pulse, a switching front travels radially outward and forms a switched area surrounded by the switching front (dark in the left inset in FIG. 2). The switched area then collapses into a spot of about 10 $`\mu `$m diameter (see the right inset in FIG. 2), the expected size of a soliton for this resonator (for details see ). This collapse takes place from 4 to 5 $`\mu `$s. After 5 $`\mu `$s a stationary soliton exists, recorded in the right inset of FIG. 2. When the incident illumination (dotted trace) is finally decreased (chopper closes), the soliton switches off.
Although we do not presently understand the mechanism by which the relatively slow collapse of the switching front occurs, FIG. 2 demonstrates that the bright soliton can be switched on by an external control beam, implying the bistability of the soliton. The switch-on of the soliton would presumably proceed more directly if the injection beam size, intensity, phase and polarization were matched to the final soliton dimensions.
FIG. 3 shows conversely the switching off of a bright soliton (dark in reflection). The illumination intensity in FIG. 3 is chosen above the switching intensity of the resonator, so that a soliton forms spontaneously as described in during the transient phase from about 2.4 to 3.5 $`\mu `$s. At about 3.5 $`\mu `$s a stationary soliton exists (see central inset in FIG. 3). At about 3.9 $`\mu `$s the injection beam (same properties and alignment as for FIG. 2) is opened. This switches the soliton off and returns the whole resonator to the unswitched state (see the right inset in FIG. 3). We mention that the state after the switch-back is stable here, although at this intensity it was unstable in the beginning (t $`\approx `$ 2.4 $`\mu `$s). Measurements showed that after the switch-off the threshold of instability was a few percent higher than initially. Raising the background intensity slightly led to renewed spontaneous appearance of the soliton, with the pronounced feature of critical slowing down. One might tentatively ascribe the small increase of the instability threshold to heating of the material during the time of formation and existence of the soliton, during which the intensity and dissipation in the resonator are high.
FIG. 4 shows that the switching off of the soliton requires a minimum intensity in the external beam. Here the external beam is opened at t $`\approx `$ 3.9 $`\mu `$s with an intensity 10 $`\%`$ smaller than in FIG. 3. The soliton in this case is transiently perturbed, but remains stably switched on.
In summary, we have shown that bright spatial solitons of a semiconductor resonator can be switched on and off by an external incoherent address beam. Thus we demonstrate that such solitons are controllable as required for applications. The bistable nature of the solitons is unambiguously demonstrated. The switch-on mechanism observed presently is too slow for fast processing applications. We attribute this to an insufficient match of the injection beam field with the soliton and suppose that a matched injection beam should directly switch on the bright solitons, without the long transient soliton formation phase. The details of the incoherent switching mechanism observed here will be clarified in the near future.
Acknowledgement
This work was supported by ESPRIT LTR project PIANOS. The quantum-well semiconductor sample was provided by R.Kuszelewicz, CNET, Bagneux, France.
# MATCHING CONTROL LAWS FOR A BALL AND BEAM SYSTEM
Abstract: This note describes a method for generating an infinite-dimensional family of nonlinear control laws for underactuated systems. For a ball and beam system, the entire family is found explicitly. Copyright $`\mathrm{\copyright }`$ 2000 IFAC
Keywords: Nonlinear control, mechanical systems
<sup>1</sup>This work was partially supported by NSF Grant No. CMS-9813182.
1. THE MATCHING CONDITION
This note presents an application of the method developed by Auckly, et al. (2000), to stabilization of a ball and beam system. The results are fully described in Andreev, et al. (2000), Auckly, Kapitanski (2000), and Auckly, et al. (2000). An experimental comparison of a linear control law versus the nonlinear control laws described here will be given in the full paper (Andreev, et al., 2000).
Let $`Q`$ denote a configuration space. Let $`g\in \mathrm{\Gamma }(T^{*}Q\otimes T^{*}Q)`$ be a metric. Let $`c,f:TQ\to TQ`$ be fiber-preserving maps. We assume that $`c(-X)=-c(X)`$. Let $`V:Q\to 𝑹`$. The differential equation that we consider is
$$\nabla _{\dot{\gamma }}\dot{\gamma }+c(\dot{\gamma })+\mathrm{grad}_\gamma V=f(\dot{\gamma }).$$
(1)
Let $`P\in \mathrm{\Gamma }(T^{*}Q\otimes TQ)`$ be a $`g`$-orthogonal projection. We consider the situation where a constraint $`P(f)=0`$ is imposed. A system is called underactuated if $`P\ne 0`$.
Several recent papers propose to find control inputs so that the closed-loop system (1) would have a natural candidate for a Lyapunov function (Bloch, et al. (1998), Hamberg (1999), and van der Schaft (1986)). Auckly, et al. (2000) introduced the following matching condition and a characterization of matching in terms of linear partial differential equations. A control input, $`f`$, satisfies the matching condition if there are functions $`\widehat{g}`$, $`\widehat{c}`$, and $`\widehat{V}`$ so that the closed loop equations take the form:
$$\widehat{\nabla }_{\dot{\gamma }}\dot{\gamma }+\widehat{c}(\dot{\gamma })+\widehat{\mathrm{grad}}_\gamma \widehat{V}=0.$$
(2)
The motivation for this method is that $`\widehat{H}=\frac{1}{2}\widehat{g}(\dot{\gamma },\dot{\gamma })+\widehat{V}(\gamma )`$ is a natural candidate for a Lyapunov function because $`d\widehat{H}/dt=-\widehat{g}(\widehat{c}(\dot{\gamma }),\dot{\gamma })`$. A straightforward computation shows that the matching condition is satisfied if and only if
$$P(\nabla _XX-\widehat{\nabla }_XX)=0,$$
(3)
$$\begin{array}{c}P(\mathrm{grad}_\gamma V-\widehat{\mathrm{grad}}_\gamma \widehat{V})=0,\hfill \\ P(c(X)-\widehat{c}(X))=0.\hfill \end{array}$$
(4)
Equation (3) is a system of non-linear first order PDE’s for $`\widehat{g}`$. It is perhaps surprising and pleasing that all of the solutions to (3), (4) may be obtained by first solving one first order linear system of PDE’s and then solving a second set of linear PDE’s. This is accomplished by introducing a new variable, $`\lambda `$, by $`g(X,Y)=\widehat{g}(\lambda X,Y)`$.
###### Theorem 1
The metric, $`\widehat{g}`$, satisfies (3) if and only if $`\lambda `$ and $`\widehat{g}`$ satisfy
$$(\nabla ^g\lambda )|_{\text{Im}\,P^{\otimes 2}}=0,\qquad L_{_{\lambda PX}}\widehat{g}=L_{_{PX}}g.$$
(5)
In the special case of a system with two degrees of freedom, it is possible to write out the general solution to this set of differential equations. Following Auckly, Kapitanski (2000), express the underactuated subspace as the span of a unit length vectorfield, $`PX`$. Choose coordinates $`x^1`$, $`x^2`$ so that $`PX=\frac{\partial }{\partial x^1}`$, and write $`\lambda PX=\sigma \frac{\partial }{\partial x^1}+\mu \frac{\partial }{\partial x^2}`$. For the $`\lambda `$-equation, (5), to be consistent the following compatibility condition must hold: $`\partial ([11,2]\mu )/\partial x^2=\partial ([12,2]\mu )/\partial x^1`$. Starting with this equation and working backwards, all of the equations may be solved via the method of characteristics.
2. THE BALL AND BEAM SYSTEM
Fig.1. Nonlinear mechanical system.
As an application of our method consider the stabilization problem for the ball and beam system described schematically in figure 1. One can express $`\alpha `$ as an explicit function of $`\theta `$. After rescaling, the kinetic energy of the system is given by:
$$T=\frac{1}{2}\dot{s}^2+\alpha ^{\prime }\dot{s}\dot{\theta }+\frac{1}{2}\left(a_4+\left(a_3+\frac{5}{2}s^2\right)\left(\alpha ^{\prime }\right)^2\right)\dot{\theta }^2$$
and $`V=a_5\mathrm{sin}(\theta )+(s+a_6)\mathrm{sin}(\alpha )`$, where the $`a_k`$ are dimensionless parameters. The projection is $`P=(ds+\alpha ^{\prime }d\theta )\otimes \frac{\partial }{\partial s}`$, so the control input $`u`$ is related to $`f`$ in (1) by $`f=(u\,d\theta )^{\mathrm{\#}}`$. The resulting equations of motion are
$$\ddot{s}+\alpha ^{\prime }\ddot{\theta }+\left(\alpha ^{\prime \prime }-\frac{5}{2}s(\alpha ^{\prime })^2\right)\dot{\theta }^2+\mathrm{sin}(\alpha )=0$$
$$\begin{array}{c}\alpha ^{\prime }\ddot{s}+[a_4+(a_3+\frac{5}{2}s^2)(\alpha ^{\prime })^2]\ddot{\theta }+5(\alpha ^{\prime })^2s\dot{s}\dot{\theta }\hfill \\ +(a_3+\frac{5}{2}s^2)\alpha ^{\prime }\alpha ^{\prime \prime }\dot{\theta }^2+a_5\mathrm{cos}\theta \hfill \\ +(a_6+s)\mathrm{cos}(\alpha )\alpha ^{\prime }+a_7\dot{\theta }=u,\hfill \end{array}$$
where $`a_7`$ corresponds to inherent dissipation.
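For readers who want to experiment with these equations, the following sketch integrates them numerically under a linear state feedback of the same form as the $`u_{lin}`$ quoted in the simulations below. The shape $`\alpha (\theta )=\theta `$, the parameter values $`a_k`$ and the gains are all placeholder assumptions of ours, not values from the paper, and the gains are not tuned.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters and gains (assumptions, not from the paper).
a3, a4, a5, a6, a7 = 0.5, 1.0, 0.2, 0.3, 0.1
Kbp, Kap, Kbd, Kad, s0 = 1.0, 8.0, 1.5, 3.0, 0.0
a8 = a5 + a6               # feedforward so (s, theta) = (0, 0) is an equilibrium
ap, app = 1.0, 0.0         # alpha' and alpha'' for the choice alpha(theta) = theta

def rhs(t, x):
    s, th, sd, thd = x
    u = a8 + Kbp*(s - s0) + Kap*th + Kbd*sd + Kad*thd
    # Mass matrix and forcing read off from the two equations of motion above.
    M = np.array([[1.0, ap],
                  [ap, a4 + (a3 + 2.5*s**2)*ap**2]])
    F = np.array([-(app - 2.5*s*ap**2)*thd**2 - np.sin(th),
                  u - 5*ap**2*s*sd*thd - (a3 + 2.5*s**2)*ap*app*thd**2
                    - a5*np.cos(th) - (a6 + s)*np.cos(th)*ap - a7*thd])
    sdd, thdd = np.linalg.solve(M, F)
    return [sd, thd, sdd, thdd]

sol = solve_ivp(rhs, (0.0, 20.0), [0.2, 0.1, 0.0, 0.0], max_step=0.01)
print("state at t=20:", np.round(sol.y[:, -1], 4))
```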
The general solution to the matching equations is
$$\widehat{g}_{11}(s,\theta )=\psi ^2(\alpha )\left(h(y(s,\theta ))+10\int _0^\alpha \frac{d\phi }{\mu _1^{\prime }(\phi )\psi ^2(\phi )}\right)$$
$$\widehat{g}_{12}=\frac{1}{\mu }(g_{11}-\sigma \widehat{g}_{11}),\qquad \widehat{g}_{22}=\frac{1}{\mu }(g_{12}-\sigma \widehat{g}_{12}),$$
$$\begin{array}{c}\widehat{V}(s,\theta )=w(y)+5(y+s_0)\int _0^\alpha \frac{\mathrm{sin}(\phi )}{\mu _1^{\prime }(\phi )\psi (\phi )}d\phi \hfill \\ \\ -5\int _0^\alpha \frac{\mathrm{sin}(\phi )}{\mu _1^{\prime }(\phi )\psi (\phi )}\int _0^\phi \psi (\tau )d\tau \,d\phi ,\hfill \end{array}$$
where $`y=\psi (\alpha )s-s_0+\int _0^\alpha \psi (\tau )d\tau `$, $`\psi (\alpha )=\mathrm{exp}\{5\int _0^\alpha \frac{\mu _1(\kappa )}{\mu _1^{\prime }(\kappa )}d\kappa \}`$, $`\mu (s,\theta )=\frac{\mu _1^{\prime }(\alpha )}{5s\alpha ^{\prime }}`$, $`\sigma (s,\theta )=\mu _1(\alpha )-\frac{1}{5s}\mu _1^{\prime }(\alpha )`$ and $`\mu _1`$, $`h`$, and $`w`$ are arbitrary functions. Also, $`\widehat{c}^1=-\alpha ^{\prime }\widehat{c}^2`$, where $`\widehat{c}^2(s,\theta ,\dot{s},\dot{\theta })`$ is an arbitrary function which is odd in $`\dot{s}`$ and $`\dot{\theta }`$. The final nonlinear control law is $`u=u_g+u_V+u_c`$, where $`u_g=g(\nabla _{\dot{\gamma }}\dot{\gamma }-\widehat{\nabla }_{\dot{\gamma }}\dot{\gamma },\frac{\partial }{\partial \theta })`$, $`u_V=\frac{\partial V}{\partial \theta }-g(\widehat{\mathrm{grad}}_\gamma \widehat{V},\frac{\partial }{\partial \theta })`$, and $`u_c=a_7\dot{\theta }-g(\widehat{c}(\dot{\gamma }),\frac{\partial }{\partial \theta })`$. Using $`\widehat{H}`$ as a Lyapunov function, we obtain the following conditions that guarantee local asymptotic stability of the equilibrium: $`det(\widehat{g}(0))>0`$, $`\text{tr}(\widehat{g}(0))>0`$, $`det(\widehat{g}\widehat{c}(0))>0`$, $`\text{tr}(\widehat{g}\widehat{c}(0))>0`$, $`det(D^2\widehat{V}(0))>0`$, and $`\text{tr}(D^2\widehat{V}(0))>0`$.
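Since for a $`2\times 2`$ matrix positivity of the determinant and the trace is exactly the condition that both eigenvalues lie in the open right half plane (positive definiteness, in the symmetric case), the six inequalities are easy to test numerically. The following sketch uses made-up placeholder matrices, not values derived from a particular choice of $`\mu _1`$, $`h`$ and $`w`$.

```python
import numpy as np

def det_tr_test(A):
    """The 2x2 stability test quoted above: det > 0 and tr > 0."""
    return np.linalg.det(A) > 0 and np.trace(A) > 0

examples = {
    "ghat(0)":       np.array([[2.0, 0.3], [0.3, 1.0]]),   # placeholder
    "ghat.chat(0)":  np.array([[0.5, 0.2], [0.1, 0.4]]),   # placeholder
    "D^2 Vhat(0)":   np.array([[1.2, -0.2], [-0.2, 0.8]]), # placeholder
}
for name, A in examples.items():
    ok = det_tr_test(A)
    # Cross-check: both eigenvalues in the open right half plane.
    assert ok == bool(np.all(np.linalg.eigvals(A).real > 0))
    print(name, "passes det/tr test:", ok)
```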
Another way to check local asymptotic stability is to find the poles of the linearized closed-loop system. It is a theorem (Andreev, et al. (2000), Auckly, Kapitanski (2000)) that any linear full state feedback control law can be obtained as a linearization of some control law in our family.
A good stabilizing control law will produce a large basin of attraction, send solutions to the equilibrium in a short period of time, and will require little control effort. It is, unfortunately, not clear how to quantify these goals.
We have done some numerical simulation of various control laws in our family. We always pick the arbitrary functions in our nonlinear control law in such a way that the linearization at the desired equilibrium, $`u_{lin}=a_8+K_{bp}(s-s_0)+K_{ap}\theta +K_{bd}\dot{s}+K_{ad}\dot{\theta }`$, is exactly the linear state feedback control law provided by the manufacturer of a commercially available system (Apkarian, 1994). The numerical and experimental response of the system to various initial conditions will be recorded in the full version of the paper.
3. CONCLUSION
We believe that nonlinear control laws have the potential to achieve better performance than linear control laws. There are, however, several subtle questions which must be resolved before nonlinear control laws may be fully exploited in practice. The first question is how to quantify performance. The second question is how to pick a control law which will come close to optimizing performance. One interesting idea is to restrict attention to a class of control laws which generate a closed loop system of a special form. The hope is then that it will be easier to quantify the performance of such systems. We have shown that, in many situations it is possible to find all control laws which will result in a closed loop system of the form (2).
REFERENCES
Andreev, F., D. Auckly, L. Kapitanski, A. Kelkar, and W. White (2000). Matching, linear systems, and the ball and beam. Preprint.
Apkarian, J. (1994). Control System Laboratory, Quanser Consulting, Hamilton, Ontario, Canada L8R 3K8.
Auckly, D., L. Kapitanski, and W. White (2000). Control of nonlinear underactuated systems. To appear in Commun. Pure Appl. Math.
Auckly, D., L. Kapitanski (2000). Mathematical Problems in the Control of Underactuated Systems. Preprint.
Bloch, A., N. Leonard and J. Marsden (1998). Matching and stabilization by the method of controlled Lagrangians. Proc. IEEE Conf. on Decision and Control, Tampa, FL, pp. 1446-1451.
Bloch, A., N. Leonard and J. Marsden (1999). Stabilization of the pendulum on a rotor arm by the method of controlled Lagrangians. Proc. IEEE Int. Conf. on Robotics and Automation, Detroit, MI, pp. 500-505.
Hamberg, J.(1999). General matching conditions in the theory of controlled Lagrangians. Proceedings of the 38th Conference on Decision and Control, Phoenix, AZ.
van der Schaft, A. J. (1986). Stabilization of Hamiltonian systems. Nonlinear Analysis, Theory, Methods & Applications, 10, 1021-1035.
# Large Scale Structure in the weakly non-linear regime
## 1. Introduction
Where does structure in the Universe come from? The current paradigm is that it comes from gravitational growth of some small initial fluctuations. The self-gravity of an initially overdense region increases its density contrast so that eventually the region collapses. For a flat Universe in the linear regime, the local density contrast $`\delta \rho /\overline{\rho }\ll 1`$ grows as the expansion factor, eg $`D=a`$, so that since decoupling linear gravitational growth has the potential of amplifying fluctuations by at least a factor of a thousand. But gravity is not linear, and when objects start collapsing the growth could be much larger. On galactic scales one also has to consider other processes such as hydrodynamics, heating and cooling by friction, dissipation, feedback mechanisms from stars, such as nova and supernova explosions, interaction with the CMB and so on.
To test if the above picture of gravitational growth is correct we need to deal with a classical initial condition problem. Because gravitational time scales are very slow, we have no way to measure the growth of individual large scale structures and we need to resort to the statistical study of mean quantities. One can imagine, for example, measuring the rms fluctuations (at a given scale) at different cosmic times to see if this agrees with the predicted amount of gravitational growth, $`D`$. Observationally this corresponds to finding the clustering properties of some tracer of structure (eg galaxies) at different redshifts. If the tracer is not perfect, we will have some statistical biasing. The problem with this approach is that by the time the rms fluctuations change significantly there typically has also been a substantial cosmic evolution of the corresponding tracers. Thus, it is difficult to disentangle the effects of the underlying cosmological model (which sets the rate $`D`$ of gravitational growth) from galaxy evolution.
It is therefore important to have a way of testing the gravitational growth paradigm at a single cosmic time or redshift. Higher order correlations and weakly non-linear clustering allow us to do just this. This is because one can construct ratios of higher order correlations to powers of the two point amplitude which are independent of cosmic time or cosmological parameters, but still contain information on the underlying dynamics.
## 2. Gravitational Growth in the weakly non-linear regime
Gravitational growth increases the density contrast of initially small fluctuations so that eventually the region collapses. The details of this collapse depend on the initial density profile. As an illustration we will focus on the spherically symmetric case. Thus we will study structure growth in the context of matter domination, the fluid limit and the shear-free or spherical collapse approximation. This turns out to be a very good approximation for the one-point cumulants of the density fluctuations. It is then easy to find (see Peebles 1980, Gaztañaga & Lobo 2000) the following second order differential equation for the density contrast $`\delta `$ in the Einstein-deSitter universe ($`\mathrm{\Omega }_k=\mathrm{\Omega }_\mathrm{\Lambda }=0`$), eg $`a(t)=(t/t_0)^{2/3}`$:
$$\frac{d^2\delta }{d\eta ^2}+\frac{1}{2}\frac{d\delta }{d\eta }-\frac{3}{2}\delta =\frac{4}{3}\frac{1}{1+\delta }\left(\frac{d\delta }{d\eta }\right)^2+\frac{3}{2}\delta ^2$$
where we have shifted to the rhs all non-linear terms, and used $`\eta \equiv \mathrm{ln}(a)`$ as our time variable. This equation reproduces the equation of the spherical collapse model (SC). As one would expect, this yields a local evolution, so that the evolved field at a point is just given by a (non-linear) transformation of the initial field at the same point, independently of the surroundings. The linear solution factorizes the spatial and temporal parts:
$$\delta _l(\text{x},t)=D(t-t_0)\delta (\text{x},t_0)=D(t)\delta _0(\text{x})$$
where $`D`$ is the linear growth factor, which follows from the above differential equation:
$$D(t)=C_1e^\eta +C_2e^{-\frac{3}{2}\eta }=C_1a(t)+C_2a(t)^{-3/2}$$
with the growing mode $`D\propto a`$ and the decaying mode $`D\propto a^{-3/2}`$. Thus, the initial fluctuations, no matter of what amplitude, grow by the same factor, $`D`$, and the statistical properties of the initial field are just linearly scaled. For example, the linear rms fluctuation $`\sigma _l`$ or its variance $`\sigma _l^2`$ gives:
$$\sigma _l^2\equiv \overline{\xi }_2\equiv \langle \delta ^2(t)\rangle =D(t-t_0)^2\langle \delta _0^2\rangle =D(t-t_0)^2\sigma _0^2$$
where $`\delta _0=\delta (t_0)`$ and $`\sigma _0`$ refer to some initial reference time $`t_0`$: $`\sigma _0^2\equiv \langle \delta _0^2\rangle `$.
We are interested in the perturbative regime ($`\delta \to 0`$), which is the relevant one for the description of structure formation on large scales. The non-linear solution for $`\delta `$ can then be expressed directly in terms of the linear one, $`\delta _l`$:
$$\delta =f(\delta _l)=\underset{n=1}{\overset{\mathrm{\infty }}{\sum }}\frac{\nu _n}{n!}[\delta _l]^n$$
Thus all non-linear information is encoded in the $`\nu _n`$ coefficients. We can now introduce this expansion in our non-linear differential equation, with $`\delta _l`$ given by the linear growth factor $`D=a=e^\eta `$, and compare order by order to find:
$$\nu _2=\frac{34}{21};\quad \nu _3=\frac{682}{189};\quad \nu _4=\frac{446440}{43659};\quad \nu _5=\frac{8546480}{243243};\quad \mathrm{\dots }$$
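These coefficients can be checked symbolically by inserting the expansion into the SC equation and collecting powers of the growing mode. The following sketch is our own verification aid (sympy standing in for the algebra done by hand); it sets $`\delta _l=a`$, so that $`d/d\eta =a\,d/da`$, and recovers the values quoted above order by order.

```python
import sympy as sp

a = sp.symbols('a', positive=True)
N = 5
c = [None, None] + [sp.Symbol(f'c{n}') for n in range(2, N + 1)]
delta = a + sum(c[n]*a**n for n in range(2, N + 1))      # c_n = nu_n / n!

D = lambda e: a*sp.diff(e, a)                            # d/d eta = a d/da
lhs = D(D(delta)) + sp.Rational(1, 2)*D(delta) - sp.Rational(3, 2)*delta
rhs = sp.Rational(4, 3)*D(delta)**2/(1 + delta) + sp.Rational(3, 2)*delta**2
res = sp.expand((1 + delta)*(lhs - rhs))                 # clears the 1/(1+delta)

sol = {}
for n in range(2, N + 1):                                # solve order by order
    sol[c[n]] = sp.solve(res.coeff(a, n).subs(sol), c[n])[0]
    print(f"nu_{n} =", sp.factorial(n)*sol[c[n]])        # 34/21, 682/189, ...
```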
These results are derived for the Einstein-de Sitter case, but are also a good approximation for other cosmologies (eg Bouchet et al. 1992, Bernardeau 1994a, Fosalba & Gaztañaga 1998b, Kamionkowski & Buchalter 1999). For non-standard cosmologies or a different equation of state see Gaztañaga & Lobo (2000).
One can now find the N-order cumulants $`\overline{\xi }_N\equiv \langle \delta ^N\rangle _c`$, where $`N=2`$ corresponds to the variance. Here the expectation values $`\langle \mathrm{\dots }\rangle `$ correspond to an average over realizations of the initial field. On comparing with observations we assume the fair sample hypothesis (§30 Peebles 1980), by which we can commute spatial integrals with expectation values. Thus, in practice $`\langle \mathrm{\dots }\rangle `$ is the average over positions in the survey area. It is useful to introduce the N-order hierarchical coefficients: $`S_N=\overline{\xi }_N/\overline{\xi }_2^{N-1}`$, eg skewness for $`S_3`$ and kurtosis for $`S_4`$. These can easily be estimated from the series expansion above by just taking expectation values of different powers of $`\delta `$ (eg see Fosalba & Gaztañaga 1998a). For leading order Gaussian initial conditions we have:
$$S_3=3\nu _2;S_4=4\nu _3+12\nu _2^2;S_5=5\nu _4+60\nu _3\nu _2+60\nu _2^3$$
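As a sketch of where these relations come from, one can verify the leading-order $`S_3`$ and $`S_4`$ directly with Wick's theorem for a Gaussian $`\delta _l`$, using $`\langle \delta _l^k\rangle =(k-1)!!\,\sigma ^k`$ for even $`k`$ and zero for odd $`k`$. The script below is our own check, not part of the original derivation.

```python
import sympy as sp

s, n2, n3 = sp.symbols('sigma nu2 nu3', positive=True)

def moment(k):                       # Gaussian raw moments of delta_l
    return sp.factorial2(k - 1)*s**k if k % 2 == 0 else 0

x = sp.Symbol('x')                   # stands for delta_l
delta = x + n2/2*x**2 + n3/6*x**3    # expansion truncated at third order

def mean(expr):                      # <expr>, term by term
    p = sp.Poly(sp.expand(expr), x)
    return sum(coef*moment(k) for (k,), coef in zip(p.monoms(), p.coeffs()))

m1, m2, m3, m4 = [mean(delta**k) for k in (1, 2, 3, 4)]
xi2 = m2 - m1**2                                       # connected cumulants
xi3 = m3 - 3*m2*m1 + 2*m1**3
xi4 = m4 - 3*m2**2 - 4*m3*m1 + 12*m2*m1**2 - 6*m1**4

print("S3 ->", sp.limit(sp.simplify(xi3/xi2**2), s, 0))   # expect 3*nu2
print("S4 ->", sp.limit(sp.simplify(xi4/xi2**3), s, 0))   # expect 4*nu3 + 12*nu2**2
```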
These results have also been extended to the non-Gaussian case, see Fry & Scherrer (1994), Chodorowski & Bouchet (1996), Gaztañaga & Mahonen (1996), Gaztañaga & Fosalba (1998). If we take for $`\nu _2`$ the non-linear solution above, eg $`\nu _2=34/21`$, the skewness yields $`S_3=3\nu _2=34/7`$, which reproduces the exact perturbation theory (PT) result by Peebles (1980). Thus the above (SC) model gives the exact leading order result for the skewness. This is also true for higher orders (see Bernardeau 1992 and Fosalba & Gaztañaga 1998a). These expressions have to be corrected for smoothing effects (see Juszkiewicz et al. 1993, Bernardeau 1994a, 1994b, and Fosalba & Gaztañaga 1998a) and possibly for redshift space distortions (eg Hivon et al 1995, Scoccimarro, Couchman and Frieman 1999). Next to leading order terms have been estimated by Scoccimarro & Frieman (1996), Fosalba & Gaztañaga (1998a,b).
The 1-point cumulants measured in galaxy catalogues have been compared with these PT predictions (eg Bouchet et al. 1993, Gaztañaga 1992, Gaztañaga & Frieman 1994, Baugh, Gaztañaga & Efstathiou 1995, Baugh & Gaztañaga 1996, Colombi et al 1997, Hui & Gaztañaga 1999). The left panel of Figure 1 shows a comparison of the $`S_N`$ measured in the APM Galaxy Survey (Maddox et al. 1990) with the predictions above (see Gaztañaga 1994, 1995 for more details). The agreement between predictions (lines) and measurements (points) on scales $`R>10h^{-1}\text{Mpc}`$ (where $`\overline{\xi }_2<1`$) is quite good, indicating that the APM galaxies follow the non-linear gravitational growth picture. For errors on statistics see Szapudi, Colombi and Bernardeau (1999) and references therein.
### 2.1. Biasing: tracing the mass
The expressions above apply to unbiased tracers of the density field; since galaxies of different morphologies are known to have different clustering properties, at least some galaxy species are biased. As an example, suppose the probability of forming a luminous galaxy depends only on the underlying mean density field in its immediate vicinity. The relation between the density field as traced by galaxies $`\delta _{gal}(\text{x})`$ and the mass density field $`\delta (\text{x})`$, can then be written as:
$$\delta _{gal}(\text{x})=f(\delta (\text{x}))=\underset{n}{\sum }\frac{b_n}{n!}\delta ^n(\text{x}),$$
where $`b_n`$ are the bias parameters. Thus, note how biasing and gravity could produce comparable non-linear effects. To leading order in $`\overline{\xi }_2`$, this local bias scheme implies $`\overline{\xi }_2^{gal}=b_1^2\overline{\xi }_2`$ and (see Fry & Gaztañaga 1993):
$$S_3^{gal}=\frac{S_3}{b_1}+3\frac{b_2}{b_1^2};\qquad S_4^{gal}=\frac{S_4}{b_1^2}+12\frac{b_2S_3}{b_1^3}+4\frac{b_3}{b_1^4}+12\frac{b_2^2}{b_1^4};\qquad \mathrm{\dots }$$
Gaztañaga & Frieman (1994) have used the comparison of $`S_3`$ and $`S_4`$ in PT with the corresponding measured APM values (as shown in Figure 1) to infer that $`b_1\approx 1`$, $`b_2\approx 0`$ and $`b_3\approx 0`$, but the results are degenerate due to the relative scale-independence of $`S_N`$ and the increasing number of biasing parameters. One could break this degeneracy by using the configuration-dependence of the projected 3-point function, $`q_3(\alpha )`$, as proposed by Frieman & Gaztañaga (1994), Fry (1994), Matarrese, Verde & Heavens (1997), Scoccimarro et al (1998). As shown in Frieman & Gaztañaga (1999), the configuration-dependence of $`q_3(\alpha )`$ on large scales in the APM catalog is quite close to that expected in perturbation theory, suggesting again that $`b_1`$ is of order unity (and $`b_2\approx 0`$) for these galaxies. This is illustrated in the right panel of Figure 1. The solid curves show the predictions of weakly non-linear gravitational growth. The APM galaxy measurements are shown as symbols; other curves show results for each of the zones. The agreement indicates that large-scale structure is driven by non-linear gravitational instability and that APM galaxies are relatively unbiased tracers of the mass on these large scales.
## 3. Conclusions
The values of $`S_N=\overline{\xi }_N/\overline{\xi }_2^{N-1}`$ can be measured as traced by the large scale galaxy distribution (eg Bouchet et al. 1993, Gaztañaga 1992, 1994, Szapudi et al 1995, Hui & Gaztañaga 1999 and references therein), and also by weak lensing (Bernardeau et al. 1997, Gaztañaga & Bernardeau 1998) or the Ly-alpha QSO absorptions (Gaztañaga & Croft 1999). These measurements of the skewness $`S_3`$, kurtosis $`S_4`$, and so on, can be compared with the predictions from weakly non-linear perturbation theory (see Figure 1) to place constraints on our assumptions about gravitational growth, initial conditions or biasing at a given redshift (see Mo, Jing & White 1997). Contrary to what happens with second order statistics (eg the variance), this test of gravitational instability is quite independent of the overall amplitude of fluctuations and other assumptions of our model for cosmological evolution, and does not require comparing the clustering at different redshifts. As shown in Gaztañaga & Lobo (2000), one can also use the $`S_N`$ measurements to constrain non-standard cosmologies.
Frieman & Gaztañaga (1999) have presented new results for the angular 3-point galaxy correlation function in the APM Galaxy Survey and its comparison with theoretical expectations (see also Fry 1984, Scoccimarro et al. 1998, Buchalter, Jaffe & Kamionkowski 2000). For the first time, these measurements extend to sufficiently large scales to probe the weakly non-linear regime (see previous work by Groth & Peebles 1977, Fry & Peebles 1978, Fry & Seldner 1982). On large scales, the results are in good agreement with the predictions of non-linear perturbation theory, for a model with initially Gaussian fluctuations (see Figure 1). This reinforces the conclusion that large-scale structure is driven by non-linear gravitational instability and that APM galaxies are relatively unbiased tracers of the mass on large scales; the measurements also provide stringent constraints upon models with non-Gaussian initial conditions (eg see Gaztañaga & Mahonen 1996; Peebles 1999a,b; White 1999; Scoccimarro 2000).
## References
Baugh, C.M., & Gaztañaga, E. 1996, MNRAS, 280, L37
Baugh, C.M., Gaztañaga, E., Efstathiou, G., 1995, MNRAS, 274, 1049
Bernardeau, F., 1992, ApJ 392, 1
Bernardeau, F., 1994a, A&A 291, 697
Bernardeau, F., 1994b, AJ433, 1
Bernardeau, F.,van Waerbeke, L., Mellier, Y. 1997, A&A 322, 1
Bouchet, F. R., Juszkiewicz, R., Colombi, S., & Pellat, R. 1992, ApJ,394,L5
Bouchet, F. R., Strauss, M. A., Davis, M., Fisher, K. B., Yahil, A., & Huchra, J. P., 1993, ApJ 417, 36
Buchalter, A., Jaffe, A., Kamionkowski, M., 2000, ApJ, 530, 36
Colombi, S., Bernardeau, F., Bouchet, F. R., Hernquist, L., 1997, MNRAS, 287, 241.
Chodorowski, M. & Bouchet, F. 1996, MNRAS, 279, 557
Fosalba, P. & Gaztañaga, E., 1998a, MNRAS 301, 503
Fosalba, P. & Gaztañaga, E., 1998b, MNRAS 301, 535
Frieman, J. A., & Gaztañaga, E. 1994, ApJ,425, 392
Frieman, J. A., & Gaztañaga, E. 1999, ApJ Lett, 521, L83
Fry, J. N. 1984, ApJ, 279, 499
Fry, J. N. 1994, Phy. Rev. Lett. 73, 215
Fry, J. N. & Gaztañaga, E. 1993, ApJ, 413, 447
Fry, J. N. & Peebles 1978, ApJ, 221, 19
Fry, J. N. & Seldner, M. 1982, ApJ, 259, 474
Fry, J. N. & Scherrer, R. 1994, ApJ, 429, 36
Gaztañaga, E. 1992, ApJ Lett, 398, L17
Gaztañaga, E. 1994, MNRAS, 268, 913
Gaztañaga, E. 1995, MNRAS, 454, 561
Gaztañaga, E. & Bernardeau, F. 1998, A & A, 331, 829
Gaztañaga, E. & Croft, R.A.C. 1999, MNRAS, 309, 885
Gaztañaga, E. & Fosalba, P., 1998, MNRAS, 301, 524
Gaztañaga, E. & Frieman, J. A., 1994, ApJ, 437, L13
Gaztañaga, E. & Lobo, A., 2000, astro-ph/0003129
Gaztañaga, E. & Mahonen, P. 1996, ApJ, 462, L1
Groth, E. J. & Peebles, P. J. E. 1977, ApJ, 217, 385
Hivon, E., Bouchet, F. R., Colombi, S. & Juszkiewicz, R., 1995, A&A, 298, 643
Hui, L. & Gaztañaga, E. 1999, ApJ, ApJ, 519, 1
Juszkiewicz, R., Bouchet, F., & Colombi, S. 1993, ApJ, 412, L9
Kamionkowski, M. & Buchalter, A. 1999, ApJ, 514, 7
Maddox, S.J., Efstathiou, G., Sutherland, W.J., Loveday, J. 1990, MNRAS, 242, 43P
Matarrese, S., Verde, L., Heavens, A. F., 1997, MNRAS 290, 651
Mo, H.J., Jing, Y.P., White, S.D.M., 1997, MNRAS 284, 189
Peebles, P. J. E. 1980, The Large Scale Structure of the Universe, Princeton: Princeton University Press
Peebles, P. J. E. 1999a, ApJ 510, 523
Peebles, P. J. E. 1999b, ApJ 510, 531
Scoccimarro, R, 2000, astro-ph/0002037
Scoccimarro, R. & Frieman, J., 1996, ApJ Supp, 105, 37
Scoccimarro, R., Couchman, H. M. P., & Frieman, J., 1999, ApJ, 517, 531
Scoccimarro, R., Colombi, S., Fry, J. N., Frieman, J., Hivon, E., & Melott, A. 1998, ApJ, 496, 586
Szapudi, I., Dalton, G.B., Efstathiou, G. & Szalay, A. S. 1995, ApJ, 444, 520
Szapudi, I., Colombi, S., Bernardeau, F., 1999, MNRAS, 310, 428
White, M., 1999, MNRAS 310, 511
# THE HAWKING-UNRUH TEMPERATURE AND QUANTUM FLUCTUATIONS IN PARTICLE ACCELERATORS
K. T. McDonald
Joseph Henry Laboratories, Princeton University, Princeton, New Jersey 08544
We wish to draw attention to a novel view of the effect of the quantum fluctuations during the radiation of accelerated particles, particularly those in storage rings. This view is inspired by the remarkable insight of Hawking<sup>1</sup> that the effect of the strong gravitational field of a black hole on the quantum fluctuations of the surrounding space is to cause the black hole to radiate with a temperature
$$T=\frac{\hbar g}{2\pi ck},$$
where $`g`$ is the acceleration due to gravity at the surface of the black hole, $`c`$ is the speed of light, and $`k`$ is Boltzmann’s constant. Shortly thereafter Unruh<sup>2</sup> argued that an accelerated observer should become excited by quantum fluctuations to a temperature
$$T=\frac{\hbar a^{*}}{2\pi ck},$$
where $`a^{*}`$ is the acceleration of the observer in its instantaneous rest frame. In a series of papers Bell and co-workers<sup>3-5</sup> have noted that electron storage rings provide a demonstration of the utility of the Hawking-Unruh temperature, with emphasis on the question of the incomplete polarization of the electrons due to quantum fluctuations of synchrotron radiation.
Here we expand slightly on the results of Bell et al., and encourage the reader to consult the literature for more detailed understanding.
## Applicability of the Idea
When an accelerated charge radiates, the discrete energy and momentum of the radiated photons induce fluctuations on the motion of the charge. The insight of Unruh is that for uniform linear acceleration (in the absence of the fluctuations), the fluctuations would excite any internal degrees of freedom of the charge to the temperature stated above. His argument is very general (i.e., thermodynamic) in that it does not depend on the details of the accelerating force, nor on the nature of the accelerated particle. The idea of an effective temperature is strictly applicable only for uniform linear acceleration, but should be approximately correct for other accelerations, such as that due to uniform circular motion.
A charged particle whose motion is confined by the focusing system of a particle accelerator exhibits transverse and longitudinal oscillations about its ideal path. These oscillations are excited by the quantum fluctuations of the particle’s radiation, and thus provide an excellent physical example of the viewpoint of Unruh.
Further, the particles take on a thermal distribution of energies when viewed in the average rest frame of a bunch, which transforms to the observed energy spread in the laboratory. While classical synchrotron radiation would eventually polarize the spin-$`\frac{1}{2}`$ particles completely, the thermal fluctuations oppose this, reducing the maximum beam polarization.
It is suggestive to compare the excitation energy $`U^{*}=kT`$, as would be observed in the particle’s rest frame, to the rest energy $`mc^2`$ when the acceleration is due to laboratory electromagnetic fields $`E`$ and $`B`$. Noting that $`a^{*}=eE^{*}/m`$ we find
$$\frac{U^{*}}{mc^2}=\frac{\hbar eE^{*}}{2\pi m^2c^3}=\frac{\left[E_{\parallel }+\gamma \left(E_{\perp }+\beta B_{\perp }\right)\right]}{2\pi E_{\mathrm{crit}}},$$
where the particle’s laboratory momentum is $`\gamma \beta mc`$, and
$$E_{\mathrm{crit}}\equiv \frac{m^2c^3}{e\hbar }.$$
For an electron,
$$E_{\mathrm{crit}}=1.3\times 10^{16}\mathrm{volts/cm}=4.4\times 10^{13}\mathrm{gauss}.$$
($`E_{\mathrm{crit}}`$ is the field strength at which spontaneous pair production becomes highly probable, i.e., the field whose voltage drop across a Compton wavelength is the particle’s rest energy.) We might expect that the fluctuations become noticeable when $`U^{*}\sim 0.1`$ eV, and hence comparable to any other thermal effects in the system, such as the particle-source temperature.
For linear accelerators $`E_{\parallel }\sim 10^6`$ volts/cm at best, so $`U^{*}<10^{-5}`$ eV. The effect of quantum fluctuations is of course negligible because the radiation itself is of little importance in a linear accelerator.
For an electron storage ring such as LEP, $`\gamma \sim 10^5`$ and $`B_{\perp }\sim 10^3`$ gauss, so that $`U^{*}\sim 0.2`$ eV. For the SSC proton storage ring, $`\gamma \sim 2\times 10^4`$, while $`B_{\perp }\sim 6\times 10^4`$ gauss, so that $`U^{*}\sim 2`$ eV. As is well known, in essentially all electron storage rings, and in future proton rings, the effect of quantum fluctuations is quite important.
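A quick numerical check of two of these estimates is straightforward from the formula above (our own arithmetic, using the quoted $`E_{\mathrm{crit}}`$ values and $`m_ec^2=0.511`$ MeV; the field strengths are the order-of-magnitude assumptions of the text):

```python
import numpy as np

mec2 = 0.511e6            # electron rest energy in eV
E_crit_Vcm = 1.3e16       # volts/cm, from the text
E_crit_G = 4.4e13         # gauss, from the text

def U_star(E_star_over_crit):
    # U*/mc^2 = E*/(2 pi E_crit), so U* = (E*/E_crit) mc^2 / (2 pi)
    return E_star_over_crit * mec2 / (2*np.pi)

print("linac (E ~ 1e6 V/cm):   U* ~ %.1e eV" % U_star(1e6 / E_crit_Vcm))
print("LEP (gamma*B ~ 1e8 G):  U* ~ %.2f eV" % U_star(1e5*1e3 / E_crit_G))
```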
The remaining discussion is restricted to beams in storage rings (= transverse particle accelerators).
## Beam-Energy Spread
An immediate application of the excitation energy $`U^{*}`$ is to the beam-energy spread. In the average rest frame of a bunch of particles, the distribution of energies is approximately thermal, with characteristic kinetic energy $`U^{*}`$, and momentum $`p^{*}=\sqrt{2mU^{*}}`$. The spread in laboratory energies is then given by
$$U_{\mathrm{lab}}\approx \gamma (mc^2+U^{*}\pm \beta p^{*}c)\approx U_0\left(1\pm \gamma \sqrt{\frac{\lambda _C}{\pi \rho }}\right),$$
where $`U_0=\gamma mc^2`$ is the nominal beam energy, $`\rho =U_0/eB_{\perp }`$ is the radius of curvature of the central orbit, and $`\lambda _C=\hbar /mc`$ is the Compton wavelength. Writing this as
$$\left(\frac{\delta U}{U_0}\right)^2\approx \frac{\gamma ^2\lambda _C}{\pi \rho },$$
we obtain the standard result, as given by equation (5.48) of the review by Sands.<sup>6</sup>
## Beam Height
The quantum fluctuations of synchrotron radiation drive the oscillations of particles about the bunch center, and set lower limits on the transverse and longitudinal beam size. If we associate a harmonic oscillator with each component of the motion about the bunch center, then each oscillator will be excited to amplitudes whose corresponding energy is $`U^{*}=kT^{*}`$.
For example, consider the vertical betatron oscillations which determine the beam height. The frequency of these oscillations is $`\omega =\nu _z\omega _0=\nu _zc/R`$, where $`\nu _z`$ is the vertical betatron number, and $`R=L/2\pi `$ is the mean radius of the storage ring. In the average rest frame of a bunch the oscillation frequency appears to be $`\omega ^{*}=\gamma \omega `$, and the spring constant in this frame is given by $`k^{*}=m\omega ^{*2}=\gamma ^2m\omega ^2`$. The typical amplitude of oscillation in this frame is then
$$\frac{1}{2}k^{*}\langle z^2\rangle \approx U^{*}=\frac{\hbar a^{*}}{2\pi c}=\frac{\hbar \gamma ^2a}{2\pi c}=\frac{\hbar \gamma ^2c}{2\pi \rho },$$
noting that in uniform circular motion the acceleration is transverse. For the vertical oscillation the lab frame amplitude $`z`$ is the same as $`z^{*}`$. Combining the above we find
$$\langle z^2\rangle =\frac{\lambda _CR^2}{\pi \nu _z^2\rho },$$
which reproduces the standard result, such as equation (5.107) of Sands.<sup>6</sup>
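The little algebra chain above is easily re-checked symbolically; the following sketch (our own verification aid, not Sands') reproduces $`\langle z^2\rangle `$ from $`\frac{1}{2}k^{*}\langle z^2\rangle =U^{*}`$:

```python
import sympy as sp

hbar, m, c, gamma, nu, R, rho = sp.symbols(
    'hbar m c gamma nu_z R rho', positive=True)

omega = nu*c/R                          # lab-frame betatron frequency
k_star = gamma**2*m*omega**2            # rest-frame spring constant
U_star = hbar*gamma**2*c/(2*sp.pi*rho)  # excitation energy from above

z2 = sp.simplify(2*U_star/k_star)       # <z^2> = 2 U* / k*
lam_C = hbar/(m*c)                      # Compton wavelength

print(z2)                                              # hbar R^2/(pi m c nu_z^2 rho)
print(sp.simplify(z2 - lam_C*R**2/(sp.pi*nu**2*rho)))  # 0, i.e. same expression
```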
An analogous argument is given in ref. 5 to derive the beam height in a weakly focused storage ring.
## Bunch Length and Beam Width
A similar analysis can be given for oscillations in the plane of the orbit. However, radial and longitudinal excursions are also directly coupled to energy excursions, which proves to be the stronger effect. As the present method finds the standard result for the beam-energy spread, the usual results for bunch length and beam width follow at once. \[In ref. 6, use equations (5.64) and (5.93) to yield expressions (5.65) and (5.95).\]
## Beam Polarization
Sokolov and Ternov<sup>7</sup> predicted that quantum fluctuations in synchrotron radiation limit the transverse polarization of the beam to 92%. In the absence of quantum fluctuations the polarization should reach 100% after long times. Bell and Leinaas<sup>3</sup> realized that the thermal character of the fluctuations provides an alternate view of the depolarizing mechanism. In ref. 5 they provide a detailed justification that the thermodynamic arguments are fully equivalent to the original QED calculation of Sokolov and Ternov. In the process they find that for circular motion in a weakly focused ring (betatron), the effective temperature due to quantum fluctuations is
$$kT=\frac{13}{96}\sqrt{3}\frac{\hbar a^{*}}{c},$$
which is about 1.5 times Unruh’s result for linear acceleration.
## Radiation Spectrum
Because of the quantum fluctuations the motion of the particles departs from the central orbit, and a classical calculation of the synchrotron-radiation spectrum is incorrect in principle. The deviations become significant only when the characteristic energy of the radiation approaches the beam energy, i.e., when $`\gamma B_{\perp }/E_{\mathrm{crit}}\sim 1`$, and the prominent effect is the cutoff at the high-energy end of the spectrum.
In the regime where the quantum corrections to the radiation spectrum are small the author has given an estimate of their size.<sup>8</sup> For this we imagine the accelerated charge is surrounded (in its rest frame) by a bath of photons with a Planck spectrum of temperature $`kT=\hbar a^{*}/2\pi c`$. The correction to the classical spectrum is considered to arise from the Thomson scattering of these virtual photons off the charged particle. In the lab frame the spectral correction is proportional to the Lorentz transform of the Planck spectrum, whose peak photon energy is then $`2\gamma kT=\hbar \gamma ^3c/\pi \rho `$, essentially the same as that of the classical spectrum. On integrating over energy, the total rate of the correction term is the classical (Larmor) rate times
$$\frac{\alpha }{60\pi }\left(\frac{\gamma B_{\perp }}{E_{\mathrm{crit}}}\right)^2,$$
which is indeed very small at present storage rings.
## Acknowledgements
I would like to thank Ian Affleck and Heinrich Mitter for several discussions on this topic. This work was supported in part by the U.S. Department of Energy under contract DOE-AC02-76ER-03072.
<sup>1</sup> S.W. Hawking, “Black-Hole Explosions”, Nature 248, 30-31 (1974).
<sup>2</sup> W.G. Unruh, “Notes on Black-Hole Evaporation”, Phys. Rev. D 14, 870-892 (1976).
<sup>3</sup> J.S. Bell and J.M. Leinaas, “Electrons as Accelerated Thermometers”, Nucl. Phys. B212, 131-150 (1983).
<sup>4</sup> J.S. Bell, R.J. Hughes and J.M. Leinaas, “The Unruh Effect in Extended Thermometers”, Z. Phys. C 28, 75-80 (1985).
<sup>5</sup> J.S. Bell and J.M. Leinaas, “The Unruh Effect and Quantum Fluctuations of Electrons in Storage Rings”, Nucl. Phys. B284, 488-508 (1987).
<sup>6</sup> M. Sands, “The Physics of Electron Storage Rings”, SLAC-121, (1970); also in Proc. 1969 Int. School of Physics, ‘Enrico Fermi,’ ed. by B. Touschek (Academic Press, 1971), p. 257.
<sup>7</sup> A.A. Sokolov and I.M. Ternov, “On Polarization and Spin Effects in the Theory of Synchrotron Radiation”, Sov. Phys. Dokl. 8, 1203 (1964).
<sup>8</sup> K.T. McDonald, “Fundamental Physics During Violent Acceleration”, in Laser Acceleration of Particles, AIP Conf. Proc. 130, 23-54 (1985).
# Prospects for observing pulsating red giants with the MONS Star Trackers<sup>1</sup>
<sup>1</sup>To appear in Proceedings of the Third MONS Workshop: Science Preparation and Target Selection, edited by T.C.V.S. Teixeira and T.R. Bedding (Aarhus: Aarhus Universitet).
## 1 Introduction
The Star Trackers on the MONS satellite (Bedding & Kjeldsen, these Proceedings) should produce exquisite light curves for many hundreds of red giant stars. These observations, made over about 30 days with high duty cycle, will allow a number of questions to be addressed. Classes of stars are discussed in order of decreasing effective temperature, starting with the Mira variables.
## 2 Mira variables
Miras have the largest amplitudes and longest periods of all pulsating stars. What can we then hope to learn by studying them at high precision over a month, which is only a small fraction of a pulsation cycle? \[Schaefer (1991)\] has collected 14 cases of flares reported on Miras, lasting minutes to hours and having amplitudes up to a magnitude. A systematic study by \[Maffei & Tosti (1995)\] found short-term variations in the photographic light curves of 18 long-period variables in M 16, with amplitudes $`\sim 0.5`$ magnitudes and durations 1–30 days. Most recently, \[de Laverny et al. (1998)\] detected variations from Hipparcos photometry of Miras, with amplitudes 0.2 to 1.1 magnitudes and durations from 2 hours up to 6 days. In some cases, repeat events were observed on the same star (see Fig. 1).
The origin of these short-term variations is not clear, but they are presumably due to rapid and probably localized temperature changes. One possible cause might be the arrival at the surface of an unusually large convection cell. Given the high precision of the MONS Star Trackers, we should expect to see a distribution of events down to much smaller amplitudes. Two-colour information would be especially useful for shedding light on this phenomenon.
## 3 Short-period M giants
\[Koen & Laney\] (2000) have studied what they describe as rapidly oscillating M giant stars. They present a few dozen M giants that were discovered by Hipparcos to have periods shorter than 10 days and amplitudes up to a few tenths of a magnitude. The only viable explanation seems to be pulsation in very high overtones, and some stars show signs of multiple periodicities. \[Koen & Laney\] list about 35 stars with periods less than 10 days, having $`V`$ magnitudes from 5.4 to 8.9 and amplitudes mostly below 0.1 mag. Several of these stars will be observed by the MONS Star Trackers, and the light curves should allow a proper frequency analysis for multiple modes.
## 4 Oscillations in K giants
It has been established from ground-based photometry that variability in red giants decreases in amplitude as one moves down the spectral sequence from M to K (\[Jorissen et al. 1997\], \[Fekel et al. 2000\]). With the kappa mechanism no longer functioning, excitation is presumably due to convection. Periods become shorter as stellar density increases, and variability becomes less regular, presumably due to the stochastic nature of the excitation process and/or to the presence of multiple modes.
A few bright K giants show radial-velocity (RV) variations that could be due to oscillations, but it has proved difficult to obtain time series that are long and continuous enough to resolve the frequency spectrum. Matters are complicated by the presence of long-term variations (hundreds of days), which could be due to pulsation, rotational modulation or low-mass companions (e.g., \[Hatzes & Cochran 1993\], 1999).
The best-studied example is Arcturus ($`\alpha `$ Boo), which has been found to have short-term RV variations with periods of a few days (\[Smith et al. 1987\], \[Belmonte et al. 1990\], \[Hatzes & Cochran 1994\], \[Merline 1999\]), as well as long-term variations with a period of a few hundred days (\[Hatzes & Cochran 1993\]). This star will definitely be observed by the MONS Star Trackers, since it lies close to $`\eta `$ Boo, a high-priority primary target. It is not clear whether 30-50 d will be long enough to produce a usable oscillation spectrum, but nearly-continuous coverage over this period will produce a time series far better than any RV observations so far obtained.
Figure 2 shows Hipparcos light curves for a sample of the brightest K giants (many of which are also known to be RV variables). Surprisingly, these light curves have not yet been discussed in the literature. We see clear evidence for photometric variability in several stars, and we can confirm that Arcturus is indeed a variable star, with peak-to-peak variations of about 0.04 mag.
Another interesting case is $`\pi `$ Her, for which \[Hatzes & Cochran (1999)\] obtained RV measurements over two years that showed variability with a period of about 600 d. They pointed out that if rotational modulation of surface structure were the cause, one would expect photometric variations of about 0.1 mag (peak-to-peak). The Hipparcos light curve was obtained at roughly the same time and shows some evidence for slow variability, but at a level about ten times smaller than this, allowing us to rule out spots as the cause of the RV variations in $`\pi `$ Her.
Photometric variability in K giants has previously been seen in globular clusters. \[Edmonds & Gilliland (1996)\] observed 47 Tuc with the Hubble Space Telescope over 38.5 hr and found variables with periods of 2–4 days and semi-amplitudes of 5–15 mmag. \[Kaluzny et al. (1998)\] detected 15 red variables in 47 Tuc from the Optical Gravitational Lensing Experiment (OGLE), which had poorer precision but much better temporal coverage. Their K-giant variables have periods of 2–36 d and semi-amplitudes of 40–90 mmag. Interestingly, both these observations are predated by a report by \[Yao (1990)\] of a red giant variable in the globular cluster M 15, with a period of 4.3 hr and an amplitude of about 20 mmag.
It seems clear that many K giants are variable on timescales of hours to days, and observations with the MONS Star Trackers should produce excellent light curves.
Finally, we mention the recent exciting results by \[Buzasi et al.\] (2000 and these Proceedings), who used the star camera on the failed WIRE satellite to perform high-precision photometry of the bright K giant $`\alpha `$ UMa. They produced evidence for multi-mode oscillations in this star with periods of 0.3–6 d and amplitudes of 0.1–0.4 mmag. Data of much higher quality are expected from the MONS Star Trackers and should produce rich oscillation spectra for a sample of bright K giants.
### Acknowledgments
This work was supported by the Australian Research Council.
# Explicit Formulae for Cocycles of Holomorphic Vector Fields with values in λ-Densities
## Introduction
The continuous cohomology of Lie algebras of $`𝒞^{\mathrm{\infty }}`$-vector fields has been studied by I. M. Gelfand, D. B. Fuks, R. Bott, A. Haefliger and G. Segal in some outstanding papers , , .
B. L. Feigin and N. Kawazumi , whose work is continued in , studied Gelfand-Fuks cohomology of Lie algebras of holomorphic vector fields $`Hol(\mathrm{\Sigma })`$ on an open Riemann surface. Kawazumi calculated the cohomology spaces $`H^{*}(Hol(\mathrm{\Sigma }),\mathcal{F}_\lambda (\mathrm{\Sigma }))`$ of $`Hol(\mathrm{\Sigma })`$ with values in the space of (holomorphic) $`\lambda `$-densities on $`\mathrm{\Sigma }`$, using a well known theorem of Goncharova, cf . He expressed the generators of the cohomology spaces in terms of the nowhere-vanishing holomorphic vector field which exists on an open Riemann surface, trivializing the holomorphic tangent bundle.
In this article, we give explicit formulae for the generators of $`H^2(Hol(\mathrm{\Sigma }),\mathcal{F}_\lambda (\mathrm{\Sigma }))`$ in terms of affine and projective connections. This is done using the cocycles which have been evidenced by V. Ovsienko and C. Roger in and globalizing them by their transformation property.
The main reason to look for explicit formulae is the search for a generalization of the Krichever-Novikov algebras to semi-direct products of $`Hol(\mathrm{\Sigma })`$ with $`\mathcal{F}_\lambda (\mathrm{\Sigma })`$, cf for the case of $`Vect(S^1)`$.
Acknowledgements:
The author thanks V. Ovsienko and C. Roger for the statement of the problem and useful discussions on the subject of this paper. He thanks also D. Millionshchikov for illuminating conversations and his formula for the Krichever-Novikov cocycle.
## 1 Preliminaries, statement of the result
In this section, we state the theorems of Kawazumi and of Ovsienko-Roger which are the starting point of our work.
Let $`Vect(S^1)`$ denote the Lie algebra of differentiable vector fields on the circle $`S^1`$, and $`\mathcal{F}_\lambda `$ the $`Vect(S^1)`$-module of $`\lambda `$-densities, using the action
$$L_fa=(fa^{\prime }+\lambda f^{\prime }a)(dx)^\lambda ,$$
(1)
where $`f\in Vect(S^1)`$ and $`a\in \mathcal{F}_\lambda `$ are both represented by their coefficient function.
###### Theorem 1 (Theorem 3, )
The cohomology groups $`H^2(Vect(S^1),\mathcal{F}_\lambda )`$ are non-zero only for $`\lambda =0,1,2,5,7`$. They are two-dimensional for $`\lambda =0,1,2`$ and one-dimensional for $`\lambda =5,7`$.
The generators read explicitly
$`\overline{c}_0(f,g)`$ $`=`$ $`\left|\begin{array}{cc}f& g\\ f^{\prime }& g^{\prime }\end{array}\right|`$
$`c_0(f,g)`$ $`=`$ $`c_{GF}(f,g)`$
$`c_1(f,g)`$ $`=`$ $`\left|\begin{array}{cc}f^{\prime }& g^{\prime }\\ f^{\prime \prime }& g^{\prime \prime }\end{array}\right|dx`$
$`\overline{c}_1(f,g)`$ $`=`$ $`\left|\begin{array}{cc}f& g\\ f^{\prime \prime }& g^{\prime \prime }\end{array}\right|dx`$
$`c_2(f,g)`$ $`=`$ $`\left|\begin{array}{cc}f^{\prime }& g^{\prime }\\ f^{\prime \prime \prime }& g^{\prime \prime \prime }\end{array}\right|(dx)^2`$
$`\overline{c}_2(f,g)`$ $`=`$ $`\left|\begin{array}{cc}f& g\\ f^{\prime \prime \prime }& g^{\prime \prime \prime }\end{array}\right|(dx)^2`$
$`c_5(f,g)`$ $`=`$ $`\left|\begin{array}{cc}f^{\prime \prime \prime }& g^{\prime \prime \prime }\\ f^{(IV)}& g^{(IV)}\end{array}\right|(dx)^5`$
$`c_7(f,g)`$ $`=`$ $`\left(2\left|\begin{array}{cc}f^{\prime \prime \prime }& g^{\prime \prime \prime }\\ f^{(VI)}& g^{(VI)}\end{array}\right|-9\left|\begin{array}{cc}f^{(IV)}& g^{(IV)}\\ f^{(V)}& g^{(V)}\end{array}\right|\right)(dx)^7`$
Here, $`c_{GF}`$ is the Gelfand-Fuks cocycle, cf , being a cocycle with values in the trivial module $`\mathcal{F}_0`$.
Now, let $`\mathrm{\Sigma }_r`$ - as in the rest of this article - denote an open Riemann surface, obtained from the compact Riemann surface $`\mathrm{\Sigma }`$ by extraction of $`r`$ points: $`\mathrm{\Sigma }_r:=\mathrm{\Sigma }\setminus \{p_1,\mathrm{\dots },p_r\}`$.
Let $`Hol(\mathrm{\Sigma }_r)`$ denote the (infinite dimensional) Lie algebra of holomorphic vector fields on $`\mathrm{\Sigma }_r`$. Let $`\mathcal{F}_\lambda (\mathrm{\Sigma }_r)`$ be the space of sections of the bundle of holomorphic $`\lambda `$-densities. As all bundles on $`\mathrm{\Sigma }_r`$ are trivial, elements of $`\mathcal{F}_\lambda (\mathrm{\Sigma }_r)`$ can be represented by holomorphic functions. $`Hol(\mathrm{\Sigma }_r)`$ still acts on $`\mathcal{F}_\lambda (\mathrm{\Sigma }_r)`$ according to
$$L_fa=(fa^{\prime }+\lambda f^{\prime }a)(dz)^\lambda ,$$
where $`f\in Hol(\mathrm{\Sigma }_r)`$ and $`a\in \mathcal{F}_\lambda (\mathrm{\Sigma }_r)`$ are both represented by their coefficient function, the $`(dz)^\lambda `$ being the global section trivialising the bundle of $`\lambda `$-densities.
Recall that
$$H^p(\mathrm{\Sigma }_r)=\{\begin{array}{ccc}ℂ& \mathrm{for}& p=0\\ ℂ^{2g+r-1}& \mathrm{for}& p=1\\ 0& \mathrm{for}& p\ge 2\end{array},$$
if $`g`$ denotes the genus of $`\mathrm{\Sigma }`$.
In , Kawazumi calculates the spaces $`H^2(Hol(\mathrm{\Sigma }_r),\mathcal{F}_\lambda (\mathrm{\Sigma }_r))`$ using the Res̆etnikov spectral sequence. This sequence has as $`E_2`$-term the sheaf cohomology of a sheaf whose stalk at $`x\in \mathrm{\Sigma }_r`$ is the cohomology of $`Hol(\mathrm{\Sigma }_r)`$ with values in $`\mathcal{F}_\lambda (\mathrm{\Sigma }_r)_x`$, the fibre of $`\mathcal{F}_\lambda (\mathrm{\Sigma }_r)`$ at $`x`$.
Furthermore, Kawazumi uses the main result of his article to express the stated $`E_2`$-term as a tensor product of some “covariant derivative” cocycles with the formal version of his cohomology, namely $`H^{*}(W_1,T_\lambda )`$. Here, $`W_1`$ is the Lie algebra of formal vector fields on the complex line and $`T_\lambda `$ is the corresponding module of formal $`\lambda `$-densities. $`H^{*}(W_1,T_\lambda )`$ is explicitly given, thanks to the theorem of Goncharova, cf . Thus, he obtains a (collapsing) spectral sequence for the desired cohomology.
Let us state his result just for the dimensions of $`H^2(Hol(\mathrm{\Sigma }_r),\mathcal{F}_\lambda (\mathrm{\Sigma }_r))`$ for the different $`\lambda `$.
###### Theorem 2 (consequence of (9.7), )
$`\mathrm{dim}H^2(Hol(\mathrm{\Sigma }_r),\mathcal{F}_0(\mathrm{\Sigma }_r))`$ $`=`$ $`2(2g+r-1)`$
$`\mathrm{dim}H^2(Hol(\mathrm{\Sigma }_r),\mathcal{F}_1(\mathrm{\Sigma }_r))`$ $`=`$ $`2g+r`$
$`\mathrm{dim}H^2(Hol(\mathrm{\Sigma }_r),\mathcal{F}_2(\mathrm{\Sigma }_r))`$ $`=`$ $`2g+r`$
$`\mathrm{dim}H^2(Hol(\mathrm{\Sigma }_r),\mathcal{F}_5(\mathrm{\Sigma }_r))`$ $`=`$ $`1`$
$`\mathrm{dim}H^2(Hol(\mathrm{\Sigma }_r),\mathcal{F}_7(\mathrm{\Sigma }_r))`$ $`=`$ $`1`$
For all other values of $`\lambda `$, $`H^2(Hol(\mathrm{\Sigma }_r),\mathcal{F}_\lambda (\mathrm{\Sigma }_r))`$ is zero.
To understand these dimensions, recall from (9.7) p.701 that for $`\lambda =0`$, $`H^2(Hol(\mathrm{\Sigma }_r),\mathcal{F}_\lambda (\mathrm{\Sigma }_r))`$ is generated by some classes, which we denote $`c_0^\omega `$ and $`\overline{c}_0^\omega `$, each depending on an element $`\omega \in H^1(\mathrm{\Sigma }_r)=ℂ^{2g+r-1}`$. In the same manner, $`H^2(Hol(\mathrm{\Sigma }_r),\mathcal{F}_1(\mathrm{\Sigma }_r))`$ is generated by one family, denoted $`\overline{c}_1^\omega `$, and a cocycle $`c_1`$; $`H^2(Hol(\mathrm{\Sigma }_r),\mathcal{F}_2(\mathrm{\Sigma }_r))`$ is generated by a family, denoted $`\overline{c}_2^\omega `$, and a cocycle $`c_2`$; and $`H^2(Hol(\mathrm{\Sigma }_r),\mathcal{F}_5(\mathrm{\Sigma }_r))`$ and $`H^2(Hol(\mathrm{\Sigma }_r),\mathcal{F}_7(\mathrm{\Sigma }_r))`$ are each generated by a cocycle, $`c_5`$ and $`c_7`$ respectively. We have chosen the same notation as in the above theorem of Ovsienko and Roger, but their cocycles would not give globally defined objects on a Riemann surface $`\mathrm{\Sigma }_r`$. The explicit construction in terms of connections of these cocycles, resp. families of cocycles, is not known and is the subject of this article.
Partial results are known: namely, the holomorphic version of the Gelfand-Fuks cocycle leading to a meromorphic version of the Virasoro algebra appeared in work of Krichever and Novikov, further developped by Schlichenmaier and Sheinman, cf . It reads (cf where we applied Poincaré duality to write it as an integral over $`\mathrm{\Sigma }_r`$):
$$c_0^\omega (f,g)=\frac{c}{24\pi 𝐢}\int _{\mathrm{\Sigma }_r}\left(\frac{1}{2}\left|\begin{array}{cc}f& g\\ f^{\prime \prime \prime }& g^{\prime \prime \prime }\end{array}\right|-R\left|\begin{array}{cc}f& g\\ f^{\prime }& g^{\prime }\end{array}\right|\right)dz\wedge \overline{\omega },$$
where $`\omega \in H^1(\mathrm{\Sigma }_r)`$ and $`R`$ is a projective connection. Recall that for a Stein manifold (in particular for an open Riemann surface) the subcomplex of holomorphic forms calculates all the de Rham cohomology, cf p.449.
In this form, we already see how such a cocycle is constructed: the Gelfand-Fuks cocycle serves as symbol, and then one adds terms to obtain a globally defined 1-form. In other words, the 1-form without the term involving $`R`$ is globally defined only with respect to an atlas of charts from $`PSl(2;ℂ)`$; to define it for a general holomorphic atlas, one has to use a projective connection.
Affine connections are more special than projective connections: a manifold supporting an affine connection also admits a projective connection. These connections come from the corresponding structures, an affine (projective) structure being a (holomorphic) atlas such that the chart transitions are in the subgroup of affine (resp. projective) transformations. One sees that an affine structure is in particular a projective structure. We will state some well known facts about these objects in the next section.
Let us state the main result of this article:
###### Theorem 3
Let $`\mathrm{\Sigma }_r`$ be an open Riemann surface, $`Hol(\mathrm{\Sigma }_r)`$ the Lie algebra of holomorphic vector fields on $`\mathrm{\Sigma }_r`$ and $`\mathcal{F}_\lambda (\mathrm{\Sigma }_r)`$ the space of holomorphic $`\lambda `$-densities.
The spaces $`H^2(Hol(\mathrm{\Sigma }_r),\mathcal{F}_\lambda (\mathrm{\Sigma }_r))`$ for $`\lambda =0,1,2,5,7`$ are generated by the classes $`\overline{c}_0,c_0^\omega ,c_1,\overline{c}_1^\omega ,c_2,\overline{c}_2^\omega ,c_5,c_7`$, where the subscript indicates the value of $`\lambda `$ and the superscript the dependence on (the class of) a holomorphic 1-form $`\omega `$ on $`\mathrm{\Sigma }_r`$.
The explicit formulae are given in section 2 (in terms of affine and projective connections) and in section 4 (in terms of the covariant derivative).
Note that the theorem asserts in particular that the formulae in section 2 and in section 4 for $`\overline{c}_0,c_0^\omega ,c_1,\overline{c}_1^\omega ,c_2,\overline{c}_2^\omega ,c_5,c_7`$ coincide.
## 2 Transformation behavior
In this section, we shall calculate the correction terms in order to make the cocycles of theorem 1 globally defined geometrical objects.
Let $`X,Y`$ denote holomorphic vector fields on $`\mathrm{\Sigma }_r`$. Let $`U_\alpha ,U_\beta \subset \mathrm{\Sigma }_r`$ be open subsets such that $`U_\alpha \cap U_\beta \ne \mathrm{\varnothing }`$. Let $`X`$ and $`Y`$ be given by local coefficient functions $`f_\alpha ,g_\alpha `$ in $`U_\alpha `$ and $`f_\beta ,g_\beta `$ in $`U_\beta `$. Denote by $`z_\alpha `$ and $`z_\beta `$ local coordinates in $`U_\alpha `$ and $`U_\beta `$, and by $`h(z_\alpha )=z_\beta `$ the holomorphic change of coordinates. We have
$$f_\beta =\frac{\partial h}{\partial z_\alpha }f_\alpha ,$$
and similarly for $`g_\beta `$. Denote $`\frac{\partial h}{\partial z_\alpha }`$ just by $`h^{\prime }`$.
Now, it is easy to transform derivatives on the coefficient functions:
$$f_\beta ^{\prime }=\frac{1}{h^{\prime }}(h^{\prime \prime }f_\alpha )+f_\alpha ^{\prime },$$
and
$$f_\beta ^{\prime \prime }=\frac{1}{h^{\prime }}\left\{\left(\frac{h^{\prime \prime \prime }}{h^{\prime }}-\frac{(h^{\prime \prime })^2}{(h^{\prime })^2}\right)f_\alpha +\frac{h^{\prime \prime }}{h^{\prime }}f_\alpha ^{\prime }+f_\alpha ^{\prime \prime }\right\}.$$
Remark that this kind of manipulation is particularly well suited to being treated by MAPLE.
Denote by $`S:=S(h)`$ the Schwarzian derivative of $`h`$, i.e. the expression
$$S=\frac{h^{\prime \prime \prime }}{h^{\prime }}-\frac{3}{2}\left(\frac{h^{\prime \prime }}{h^{\prime }}\right)^2.$$
It is easy to show and well known that we have:
$$f_\beta ^{\prime \prime \prime }=\frac{1}{(h^{\prime })^2}\left(f_\alpha ^{\prime \prime \prime }+S^{\prime }f_\alpha +2Sf_\alpha ^{\prime }\right).$$
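Such chart-transition computations are indeed mechanical; as a hedge against sign errors, here is a small verification in Python with sympy (our own sketch, in the spirit of the MAPLE remark above; all names are our choices):

```python
import sympy as sp

z = sp.symbols('z')
f = sp.Function('f')(z)        # coefficient f_alpha in the chart U_alpha
h = sp.Function('h')(z)        # chart transition z_beta = h(z_alpha)
hp = sp.diff(h, z)

# Schwarzian derivative of h
S = sp.diff(h, z, 3)/hp - sp.Rational(3, 2)*(sp.diff(h, z, 2)/hp)**2

D = lambda e: sp.diff(e, z)/hp          # d/dz_beta = (1/h') d/dz_alpha
f_beta = hp*f                           # vector-field coefficient in U_beta

third = D(D(D(f_beta)))                 # f_beta''' computed from scratch
claimed = (sp.diff(f, z, 3) + sp.diff(S, z)*f + 2*S*sp.diff(f, z))/hp**2
print(sp.simplify(third - claimed))     # prints 0
```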
Now recall some generalities on affine and projective structures resp. connections, cf §9 p. 164 and p. 137–138:
###### Definition 1
Let $`\{U_\alpha ,z_\alpha \}`$ be a covering of $`\mathrm{\Sigma }_r`$ by coordinate charts and $`z_\beta =h(z_\alpha )`$ the coordinate transitions for non-empty $`U_\alpha \cap U_\beta `$.
A (holomorphic) projective connection is a family of holomorphic functions $`R_\alpha `$ on $`U_\alpha `$ such that for non-empty $`U_\alpha \cap U_\beta `$, we have
$$R_\beta (h^{\prime })^2=R_\alpha +S.$$
In the same way, we have
###### Definition 2
A (holomorphic) affine connection is a family of holomorphic functions $`T_\alpha `$ on $`U_\alpha `$ such that for non-empty $`U_\alpha \cap U_\beta `$, we have
$$T_\beta h^{\prime }=T_\alpha +\frac{h^{\prime \prime }}{h^{\prime }}.$$
(Observe that $`h^{\prime }\ne 0`$.) There is a one-to-one correspondence between connections and the corresponding structures, see thm. 19, p. 170. See also , section 2, for a brief summary on these structures. Affine connections (thus affine structures, projective structures and projective connections) exist on any open Riemann surface, cf . This is in contrast to compact Riemann surfaces where affine connections exist only for genus 1, see p. 173.
We use these objects to compensate extra terms arising from the transition behaviour of the cocycles of theorem 1. One arrives at the following results (the first 2 are trivial; in the following, $`R`$ denotes a projective connection, and $`T`$ an affine connection):
* $`\overline{c}_0(f,g)`$ is a well-defined global vector field, so $`\overline{c}_0^\omega (f,g):=\overline{c}_0(f,g)\omega `$ is a well-defined global function
* $`c_0^\omega (f,g)`$ is a well-defined global (constant) function
* $`c_1(f,g):=\left|\begin{array}{cc}f^{\prime }& g^{\prime }\\ f^{\prime \prime }& g^{\prime \prime }\end{array}\right|-T\left|\begin{array}{cc}f& g\\ f^{\prime \prime }& g^{\prime \prime }\end{array}\right|+(R-\frac{1}{2}T^2)\left|\begin{array}{cc}f& g\\ f^{\prime }& g^{\prime }\end{array}\right|`$ is a well-defined global 1-form
* $`\overline{c}_1(f,g):=\left|\begin{array}{cc}f& g\\ f^{\prime \prime }& g^{\prime \prime }\end{array}\right|-T\left|\begin{array}{cc}f& g\\ f^{\prime }& g^{\prime }\end{array}\right|`$ is a well-defined global function, so $`\overline{c}_1^\omega (f,g):=\overline{c}_1(f,g)\omega `$ is a well-defined global 1-form
* $`c_2(f,g):=\left|\begin{array}{cc}f^{\prime }& g^{\prime }\\ f^{\prime \prime \prime }& g^{\prime \prime \prime }\end{array}\right|-T\left|\begin{array}{cc}f& g\\ f^{\prime \prime \prime }& g^{\prime \prime \prime }\end{array}\right|-(2TR-R^{\prime })\left|\begin{array}{cc}f& g\\ f^{\prime }& g^{\prime }\end{array}\right|`$ is a well-defined global 2-form
* $`\overline{c}_2(f,g):=\left|\begin{array}{cc}f& g\\ f^{\prime \prime \prime }& g^{\prime \prime \prime }\end{array}\right|-2R\left|\begin{array}{cc}f& g\\ f^{\prime }& g^{\prime }\end{array}\right|`$ is a well-defined global 1-form, so $`\overline{c}_2^\omega (f,g):=\overline{c}_2(f,g)\omega `$ is a well-defined global quadratic differential
* $`c_5(f,g):=\left|\begin{array}{cc}f^{\prime \prime \prime }& g^{\prime \prime \prime }\\ f^{(IV)}& g^{(IV)}\end{array}\right|+R^{\prime \prime }\left|\begin{array}{cc}f& g\\ f^{\prime \prime \prime }& g^{\prime \prime \prime }\end{array}\right|+3R^{\prime }\left|\begin{array}{cc}f^{\prime }& g^{\prime }\\ f^{\prime \prime \prime }& g^{\prime \prime \prime }\end{array}\right|+2R\left|\begin{array}{cc}f^{\prime \prime }& g^{\prime \prime }\\ f^{\prime \prime \prime }& g^{\prime \prime \prime }\end{array}\right|+(2RR^{\prime \prime }-3(R^{\prime })^2)\left|\begin{array}{cc}f& g\\ f^{\prime }& g^{\prime }\end{array}\right|-2RR^{\prime }\left|\begin{array}{cc}f& g\\ f^{\prime \prime }& g^{\prime \prime }\end{array}\right|-R^{\prime }\left|\begin{array}{cc}f& g\\ f^{(IV)}& g^{(IV)}\end{array}\right|-4R^2\left|\begin{array}{cc}f^{\prime }& g^{\prime }\\ f^{\prime \prime }& g^{\prime \prime }\end{array}\right|-2R\left|\begin{array}{cc}f^{\prime }& g^{\prime }\\ f^{(IV)}& g^{(IV)}\end{array}\right|`$ is a well-defined global 5-form
Note that the assignment of holomorphic 1-forms $`\omega `$ to certain cocycles gives exactly the number of generators which is needed to generate the cohomology spaces. We left out the formula for $`c_7`$ which is too long to be reproduced here.
## 3 Cocycle property
Now, we have globalized the cocycles to individual cochains or families of cochains. But it is not clear whether the terms that we added will disturb the cocycle property. This is what we check in this section.
By writing explicitly the cocycle identity for the different expressions which we considered in the preceding section to globalize the cocycles (the action depends on $`\lambda `$, cf equation (1) in the preliminaries), we get the following result. Note that the $`6`$th and the $`7`$th expressions below arise as terms in $`c_5(f,g)`$, and that in this formal calculation $`f,g`$ can be interpreted as vector fields on the circle or on the open Riemann surface; in the latter case, the expressions are not globally defined geometric objects.
* $`\left|\begin{array}{cc}f& g\\ f^{}& g^{}\end{array}\right|`$ is a cocycle for any value of $`\lambda `$
* $`\left|\begin{array}{cc}f& g\\ f^{\prime \prime }& g^{\prime \prime }\end{array}\right|`$ is a cocycle only for $`\lambda =1`$
* $`\left|\begin{array}{cc}f& g\\ f^{\prime \prime \prime }& g^{\prime \prime \prime }\end{array}\right|`$ is a cocycle only for $`\lambda =2`$
* $`\left|\begin{array}{cc}f^{}& g^{}\\ f^{\prime \prime }& g^{\prime \prime }\end{array}\right|`$ is a cocycle only when taking trivial action
* $`\left|\begin{array}{cc}f^{}& g^{}\\ f^{\prime \prime \prime }& g^{\prime \prime \prime }\end{array}\right|`$ is a cocycle for any value of $`\lambda `$
* $`\left|\begin{array}{cc}f& g\\ f^{(IV)}& g^{(IV)}\end{array}\right|`$ is never a cocycle
* $`\left|\begin{array}{cc}f^{}& g^{}\\ f^{(IV)}& g^{(IV)}\end{array}\right|`$ is never a cocycle
* $`\left|\begin{array}{cc}f^{\prime \prime }& g^{\prime \prime }\\ f^{\prime \prime \prime }& g^{\prime \prime \prime }\end{array}\right|`$ is a cocycle only for $`\lambda =3`$
* $`\left|\begin{array}{cc}f^{\prime \prime \prime }& g^{\prime \prime \prime }\\ f^{(IV)}& g^{(IV)}\end{array}\right|`$ is a cocycle only for $`\lambda =5`$
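These statements are again amenable to machine verification. The following sympy sketch (our own code; the standard bracket and $`\lambda `$-density action are assumed) checks the first and the third item in the list above:

```python
import sympy as sp

z, lam = sp.symbols('z lambda')
f, g, h = [sp.Function(n)(z) for n in 'fgh']
d = lambda u, n=1: sp.diff(u, z, n)

bracket = lambda u, v: u*d(v) - v*d(u)         # [u,v] = uv' - vu'
L = lambda u, psi: u*d(psi) + lam*d(u)*psi     # action on lambda-densities

def delta(c, u, v, w):
    """Chevalley-Eilenberg differential of a 2-cochain c."""
    return (L(u, c(v, w)) - L(v, c(u, w)) + L(w, c(u, v))
            - c(bracket(u, v), w) + c(bracket(u, w), v) - c(bracket(v, w), u))

c_a = lambda u, v: u*d(v) - v*d(u)             # first expression in the list
c_b = lambda u, v: u*d(v, 3) - v*d(u, 3)       # third expression in the list

print(sp.simplify(delta(c_a, f, g, h)))               # 0 for every lambda
print(sp.simplify(delta(c_b, f, g, h).subs(lam, 2)))  # 0: cocycle for lambda = 2
print(sp.simplify(delta(c_b, f, g, h).subs(lam, 0)))  # nonzero: fails for lambda = 0
```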
It is thus obvious that $`\overline{c}_0^\omega `$, $`c_0^\omega `$, $`c_1`$, $`\overline{c}_1^\omega `$, $`c_2`$, $`\overline{c}_2^\omega `$ are well-defined, global 2-cocycles for cohomology with values in $`\mathcal{F}_\lambda `$ with $`\lambda =0,0,1,1,2`$ and $`2`$ respectively.
For $`c_5`$ and $`c_7`$, we will take a different point of view.
## 4 Formulation in terms of the covariant derivative
The fundamental fact which assures the validity of our work is the existence of affine structures on open Riemann surfaces.
These connections are flat integrable connections in the sense of differential geometry, thus we can talk about associated covariant derivatives. The covariant derivative associated to the affine connection reads locally (on a $`\lambda `$-density $`\varphi `$)
$$\nabla \varphi =\varphi ^{\prime }-\lambda \mathrm{\Gamma }\varphi .$$
In general, $`\mathrm{\Gamma }`$ plays the role of the trace of the Christoffel symbols; in higher dimensions, we have
$$\nabla _i\varphi =\partial _i\varphi -\lambda \mathrm{\Gamma }_{ij}^j\varphi .$$
Actually, $`\mathrm{\Gamma }`$ is nothing other than what we called before the affine connection $`T`$. $`\nabla \varphi `$ is a globally defined object. On $`-\frac{1}{2}`$-densities, one can exhibit a particularly convenient choice of a projective connection associated to an affine connection:
$`\nabla ^2(\varphi (dz)^{-\frac{1}{2}})`$ $`=`$ $`\nabla ((\varphi ^{\prime }+{\displaystyle \frac{1}{2}}\mathrm{\Gamma }\varphi )(dz)^{\frac{1}{2}})`$
$`=`$ $`(\varphi ^{\prime \prime }+{\displaystyle \frac{1}{2}}(\mathrm{\Gamma }\varphi )^{\prime }-{\displaystyle \frac{1}{2}}\mathrm{\Gamma }(\varphi ^{\prime }+{\displaystyle \frac{1}{2}}\mathrm{\Gamma }\varphi ))(dz)^{\frac{3}{2}}`$
$`=`$ $`(\varphi ^{\prime \prime }-{\displaystyle \frac{1}{4}}\mathrm{\Gamma }^2\varphi +{\displaystyle \frac{1}{2}}\mathrm{\Gamma }^{\prime }\varphi )(dz)^{\frac{3}{2}}`$
$`=`$ $`(\partial ^2+{\displaystyle \frac{1}{2}}R)\varphi (dz)^{-\frac{1}{2}}`$
where we have put $`R=-\frac{1}{2}\mathrm{\Gamma }^2+\mathrm{\Gamma }^{\prime }`$.
Alternatively, this choice is justified by $`\mathrm{\Gamma }=\frac{h^{\prime \prime }}{h^{\prime }}`$ and $`S=\frac{h^{\prime \prime \prime }}{h^{\prime }}-\frac{3}{2}\left(\frac{h^{\prime \prime }}{h^{\prime }}\right)^2`$, giving also $`S=\mathrm{\Gamma }^{\prime }-\frac{1}{2}\mathrm{\Gamma }^2`$, cf equation (10) p. 205.
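This identity is easily confirmed symbolically; a one-off sympy check (our own code):

```python
import sympy as sp

z = sp.symbols('z')
h = sp.Function('h')(z)
Gamma = sp.diff(h, z, 2)/sp.diff(h, z)          # Gamma = h''/h'
S = sp.diff(h, z, 3)/sp.diff(h, z) - sp.Rational(3, 2)*Gamma**2
print(sp.simplify(S - (sp.diff(Gamma, z) - Gamma**2/2)))   # prints 0
```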
Furthermore, we can set
$$L_f\varphi (dz)^\lambda =f\nabla \varphi +\lambda (\nabla f)\varphi ,$$
because this action coincides with the action defined in equation (1). We have $`[f,g]=f\nabla g-(\nabla f)g`$ and a derivation property of $`\nabla `$ on tensor products. This corresponds to the product formula for the derivative. With this in mind, we have the same rules of manipulation as before for computations which concerned only ordinary derivatives of functions on the circle.
In conclusion, it is clear that we can formulate all cocycles in terms of the covariant derivative:
* $`c_1(f,g)=\left|\begin{array}{cc}\nabla f& \nabla g\\ \nabla ^2f& \nabla ^2g\end{array}\right|dz`$
* $`\overline{c}_1(f,g)=\left|\begin{array}{cc}f& g\\ \nabla ^2f& \nabla ^2g\end{array}\right|dz^0`$
* $`c_2(f,g)=\left|\begin{array}{cc}\nabla f& \nabla g\\ \nabla ^3f& \nabla ^3g\end{array}\right|dz^2`$
* $`\overline{c}_2(f,g)=\left|\begin{array}{cc}f& g\\ \nabla ^3f& \nabla ^3g\end{array}\right|dz^1`$
* $`c_5(f,g)=\left|\begin{array}{cc}\nabla ^3f& \nabla ^3g\\ \nabla ^4f& \nabla ^4g\end{array}\right|dz^5`$
* $`c_7(f,g):=\left(2\left|\begin{array}{cc}\nabla ^3f& \nabla ^3g\\ \nabla ^4f& \nabla ^4g\end{array}\right|-9\left|\begin{array}{cc}\nabla ^4f& \nabla ^4g\\ \nabla ^5f& \nabla ^5g\end{array}\right|\right)dz^7`$
The cocycle $`\overline{c}_2`$ is the covariant derivative version of the Krichever-Novikov cocycle; we learnt this expression from D. Millionshchikov.
Obviously, this description is much simpler. To show at least in principle what the proof of this coincidence looks like, take for example $`\overline{c}_2`$. We have to calculate
$`\nabla ^3f`$ $`=`$ $`\nabla ^2(f^{\prime }+\mathrm{\Gamma }f)`$
$`=`$ $`\nabla (f^{\prime \prime }+\mathrm{\Gamma }^{\prime }f+\mathrm{\Gamma }f^{\prime })`$
$`=`$ $`f^{\prime \prime \prime }+\mathrm{\Gamma }^{\prime \prime }f+2\mathrm{\Gamma }^{\prime }f^{\prime }-\mathrm{\Gamma }\mathrm{\Gamma }^{\prime }f-\mathrm{\Gamma }^2f^{\prime }.`$
Note that in the first line, $`f`$ is a vector field, but $`(f^{\prime }+\mathrm{\Gamma }f)`$ is a function; in the second line, $`(f^{\prime \prime }+\mathrm{\Gamma }^{\prime }f+\mathrm{\Gamma }f^{\prime })`$ is a 1-form and the result is a 2-form.
This gives
$$f(\nabla ^3g)-g(\nabla ^3f)=fg^{\prime \prime \prime }-gf^{\prime \prime \prime }+(2\mathrm{\Gamma }^{\prime }-\mathrm{\Gamma }^2)(fg^{\prime }-gf^{\prime })$$
Identifying $`(2\mathrm{\Gamma }^{\prime }-\mathrm{\Gamma }^2)`$ with $`2(\mathrm{\Gamma }^{\prime }-\frac{1}{2}\mathrm{\Gamma }^2)=2R`$, we get the coincidence of the covariant derivative expression with $`\overline{c}_2`$.
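The same computation can be delegated to sympy; the sketch below (our own code, under the sign conventions $`\nabla \varphi =\varphi ^{\prime }-\lambda \mathrm{\Gamma }\varphi `$ and $`R=\mathrm{\Gamma }^{\prime }-\frac{1}{2}\mathrm{\Gamma }^2`$ used in this section) reproduces the identity just derived:

```python
import sympy as sp

z = sp.symbols('z')
f, g, Gamma = [sp.Function(n)(z) for n in ('f', 'g', 'Gamma')]

def nabla(phi, lam):                 # covariant derivative on lambda-densities
    return sp.diff(phi, z) - lam*Gamma*phi

def nabla3(u):                       # a vector field has weight -1, then 0, then 1
    return nabla(nabla(nabla(u, -1), 0), 1)

R = sp.diff(Gamma, z) - Gamma**2/2
lhs = f*nabla3(g) - g*nabla3(f)
rhs = (f*sp.diff(g, z, 3) - g*sp.diff(f, z, 3)
       + 2*R*(f*sp.diff(g, z) - g*sp.diff(f, z)))
print(sp.simplify(lhs - rhs))        # prints 0
```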
As said before, it is clear that all covariant derivative expressions will be cocycles with values in the appropriate $`\mathcal{F}_\lambda `$; the computations are straightforward.
## 5 Non-triviality of the cocycles
Let us sketch here an argument showing the non-triviality of the constructed cocycles:
Choose an embedding $`S^1\hookrightarrow \mathrm{\Sigma }_r`$. The associated restriction of holomorphic vector fields (resp. holomorphic $`\lambda `$-densities) to $`S^1`$ gives a map $`\varphi :Hol(\mathrm{\Sigma }_r)\to Vect(S^1)`$ (resp. $`\psi :\mathcal{F}_\lambda (\mathrm{\Sigma }_r)\to \mathcal{F}_\lambda (S^1)`$). These maps are injective Lie algebra homomorphisms with dense image (where the image is equipped with the induced topology from $`Vect(S^1)`$), cf .
There is a commutative diagram relating the cocycles for $`Vect(S^1)`$ to those for $`Hol(\mathrm{\Sigma }_r)`$ via the maps $`\varphi `$ and $`\psi `$ (the diagram is not reproduced here).
The non-triviality now follows from the non-triviality (see ) of the corresponding cocycles for $`Vect(S^1)`$.
## 1 The many ideas of electroweak symmetry breaking
The mechanism of electroweak symmetry breaking (EWSB) is still mysterious. The “simplest” solution is to postulate the existence of one condensing $`SU(2)`$ doublet scalar field that gives masses to the vector bosons and the fermions. This idea is usually spoken of as the Standard Model explanation for electroweak symmetry breaking. However, the word “explanation” is perhaps too strong. The Higgs field provides no reason why it should have a vacuum expectation value, and it only exacerbates the hierarchy problem. Furthermore, the phase transition associated with the SM Higgs boson solution is not a sufficiently strong first-order transition to explain the baryon asymmetry of the universe. These are just three of the reasons why the SM solution is unsatisfactory.
A long-standing endeavor in theoretical and experimental physics is going beyond the SM to explain EWSB at a more fundamental level. The would-be explanations (technicolor, top-quark condensation, supersymmetry, etc.) have invariably implied new particles and/or interactions with mass scale near the EWSB scale. For example, in strongly-coupled theories such as technicolor or top-quark condensation the new particles may include pseudo-Nambu-Goldstone bosons, and exotic gauge bosons (i.e., new forces) and fermions. In supersymmetry the new particles are superpartners and a second Higgs boson multiplet. In other words, one should expect additional particles correlated with a real explanation of EWSB beyond one physical Higgs boson state.
The previous two paragraphs could have been written several years ago. What’s new today is data. We sometimes bemoan the “lack of data” in high-energy physics. However, data has been coming in. Some theories have died as a result of the data, while other theories have been emerging as more viable and perhaps preferred by the data. This is what data is supposed to do. Data also should help us make decisions about what to look for in the future, and it should be one of the guiding principles of the future experimental program. This outlook we call “Bayesian Physics,” meaning that data and better understanding of theory are interpreted to suggest and imply future goals and experiments. Emphasis is placed on searching for theories or classes of theories that have experienced positive success when compared to data. The theories do not just survive, they have received positive support from the data. Supersymmetry is in this class, we believe.
A competitor philosophy is what we call “Nonjudgmental Physics.” In this philosophy, any physics idea (complete or incomplete) that still is technically not dramatically excluded by the data is equally likely. This outlook dictates, for example, that we view a light-Higgs-predicting theory (supported by recent data) as equal in stature to a strong EWSB sector theory (not supported by recent data, and probably but not necessarily ruled out) when thinking about the requirements of future experiments. This philosophy is clearly the superior philosophy in the limit of an infinite amount of time, money, and people to do experiments. We believe that the “Bayesian Physics” outlook can help set priorities when resources are limited, and may be essential to obtain new scientifically useful facilities.
## 2 Indications of a light Higgs boson
There are several important inputs from data that can help us decide what theories are more probable. Gauge coupling unification, for example, can be interpreted as a great success for supersymmetric theories. This alone may be powerful enough for some to consider only theories consistent with supersymmetry gauge coupling unification. Although we interpret gauge coupling unification as a powerful message that supersymmetry is part of nature, we will not dwell on this subject here. Instead, we wish to interpret the precision EW data in a wider class of theories.
We argue in this section that all theories that support a Higgs boson (composite or fundamental) with mass significantly less than about $`500\text{ GeV}`$ should be given special status over all other candidate theories of nature. Therefore, when making decisions about new collider facilities, it makes more sense to discuss and analyze all the vagaries of lower mass Higgs bosons than it does to compare a simple SM Higgs signature with signatures of a strongly coupled EWSB sector at very high energies. We will motivate this viewpoint in the next few paragraphs, and in the next sections we will outline some of the relevant discovery issues.
Since a strongly-coupled EWSB sector contributes to precision electroweak observables like a $`\mathrm{TeV}`$ Higgs boson , it appears reasonable to declare these theories as less likely than other theories with fundamental or composite light Higgs bosons. Of course, it might be possible to construct a fully consistent theory that combines strongly coupled EWSB with other new physics that conspires to describe electroweak precision data. However, the new physics that accompanies such theories usually worsens the predictions for precision observables. For example, large positive contributions to the $`S`$ parameter in technicolor theories combined with the effective high mass of a “Higgs boson” are incompatible with the data. In our view, strongly coupled EWSB, although not necessarily ruled out, has become a less-motivated concern given the current data. We will therefore not discuss further this possibility, and rather focus on the more data-motivated scenarios of light fundamental or composite Higgs bosons.
First, the EW precision data from LEP/SLD over the last ten years and direct searches for Higgs bosons at LEP2 indicate that a SM-like Higgs boson with mass between $`110\text{ GeV}`$ and $`215\text{ GeV}`$ is a good fit to the data (95% C.L.) . This is our main input to the discussion. This lends support to any theory that predicts a SM-like Higgs boson with mass less than $`215\text{ GeV}`$ and above $`110\text{ GeV}`$. Supersymmetry is one such theory; it certainly predicts $`m_h<215\text{ GeV}`$, and a part of parameter space allows $`m_h>110\text{ GeV}`$. Actually, the lightest physical Higgs boson of the general supersymmetry case can be as light as 85 GeV and consistent with LEP data. Also, there are no internal inconsistencies with the SM up to very high scales if its mass is in this range. In fact, if such a light SM-like Higgs boson can be taken at face value it may be possible to detect a Higgs boson at the Tevatron even before the LHC .
A zeroth-order conclusion of a “Bayesian Physics” view of the data supporting a light Higgs boson is to build a collider that can find and study a Higgs boson with mass less than $`215\text{ GeV}`$. However, this may be a bit naive since the precision data is measuring virtual effects that may include cancellations and combinations of many different kinds of states that in the end imitate the effects of a light Higgs boson. In the next section we will discuss these possible conspiracies and show that they imply that the Higgs boson could be as heavy as $`500\text{ GeV}`$, but not more.
## 3 Cancellation conspiracies for a heavy Higgs boson
The light Higgs boson requirements of precision EW data based on a SM analysis can be imitated by other effects. Several groups have studied the possibility of raising the Higgs boson mass substantially above $`215\text{ GeV}`$ and using other states or operators to cancel the effects of this heavier Higgs mass in the radiative corrections, thereby allowing agreement with the electroweak precision data.
One example of this type of conspiracy is found in ref. . The theory is the SM in extra dimensions, where the fermions live on a 3+1 dimensional wall, and the gauge bosons live in a higher dimensional space . If the Higgs boson lives on the 3+1 dimensional wall with the SM fermions, it can mediate a mass mixing between the ordinary $`W,Z,\gamma `$ gauge bosons and their Kaluza-Klein excitations. This mass mixing then leads to shifts in the EW precision observables that can mimic the effects of a light Higgs boson . That is, a heavy Higgs boson plus gauge boson mode mixing leads to predictions for $`Z`$ pole observables very similar to that of a light Higgs boson.
Any prediction for an observable $`𝒪_i`$ ($`\mathrm{\Gamma }_Z`$, $`\mathrm{sin}^2\theta _W^{\mathrm{eff}}`$, etc.) can be expanded approximately as
$$𝒪_i=𝒪_i^{\mathrm{SM}}+a_i\mathrm{log}m_h/m_Z+b_iV.$$
(1)
$`𝒪_i^{\mathrm{SM}}`$ is defined to be the best fit value of the observable assuming the SM and $`m_h=m_Z`$. We choose $`m_h=m_Z`$ arbitrarily for this expansion, but it is also convenient since it is approximately the value of $`m_h`$ at the global minimum of the fitting $`\chi ^2`$,
$$\chi ^2=\sum _i\frac{(𝒪_i-𝒪_i^{\mathrm{expt}})^2}{(\mathrm{\Delta }𝒪_i^{\mathrm{expt}})^2}.$$
(2)
$`V`$ represents the effects on the observable from Kaluza-Klein excitations of the gauge bosons, and is defined as
$$V\equiv 2\sum _{\vec{n}}\frac{g_{\vec{n}}^2}{g^2}\frac{m_W^2}{\vec{n}^2M_c^2},$$
(3)
where $`M_c=R^{-1}`$ is the compactification scale of the extra spatial dimension(s). If there is only one extra dimension then
$$V=\frac{\pi ^2}{3}\frac{m_W^2}{M_c^2}.$$
(4)
The measurements of the observables $`𝒪_i^{\mathrm{expt}}`$ are in good agreement with the SM prediction $`𝒪_i^{\mathrm{SM}}`$ as long as $`110\text{ GeV}\lesssim m_h\lesssim 215\text{ GeV}`$. If $`b_iV=0`$ (decoupled effects of KK excitations) then $`𝒪_i`$ is merely the prediction of the SM for some $`m_h`$. If $`m_h`$ gets too large, $`𝒪_i`$ gets further away from the best-fit value of $`m_h\simeq m_Z`$ and the prediction does not explain the data. However, if $`b_iV\ne 0`$, it is possible to have a cancellation between the $`\mathrm{log}m_h/m_Z`$ and $`V`$ terms in Eq. (1) even for $`m_h\gg m_Z`$. This would constitute a “conspiracy” of cancellations to allow a large Higgs mass.
The trouble with conspiracies is that the cancellation must occur for every well-measured observable. Although cancellations can be arranged in some observables to maintain the light-Higgs SM prediction for larger Higgs mass and larger $`V`$, the cancellation cannot be maintained for all observables. The $`\chi ^2`$ function may stay under control for somewhat larger Higgs masses due to this cancellation adjustment among the most precisely measured observables, but at some point the less-precisely measured observables will deviate too far from the experimental measurements and cause the $`\chi ^2`$ to rise unacceptably high.
As an example of the general statements of the last few paragraphs, we show how the Higgs mass limit in extra dimensions can be increased above the SM limit but not to arbitrarily high values. The most precisely measured observable relevant to Higgs boson physics is $`\mathrm{sin}^2\theta _W^{\mathrm{eff}}`$. It can be expanded as
$$\mathrm{sin}^2\theta _W^{\mathrm{eff}}=\mathrm{sin}^2\theta _W^{\mathrm{eff},\mathrm{SM}}+0.00053\mathrm{log}m_h/m_Z-0.44V.$$
(5)
Again, $`\mathrm{sin}^2\theta _W^{\mathrm{eff},\mathrm{SM}}`$ is the SM best-fit value for $`m_h=m_Z`$, which is in good agreement with the experimental measurement. To maintain this good agreement for higher Higgs mass, $`V`$ must satisfy
$$V=1.2\times 10^{-3}\mathrm{log}m_h/m_Z.$$
(6)
For example, if $`m_h=500\text{ GeV}`$ then $`V=2.0\times 10^{-3}`$, which corresponds to a compactification scale of $`3.3\text{ TeV}`$ for one extra dimension. The compactification scale is also the mass of the first Kaluza-Klein excitations of the gauge bosons. Indeed the analysis of ref. demonstrates this general relationship between $`m_h`$ and $`V`$ as derived in Eq. (6). However, $`m_h=500\text{ GeV}`$ is right at the edge of a tolerable total $`\chi ^2`$ for all precision observables. That is, the cancellation between large Higgs mass effects and Kaluza-Klein gauge boson effects is only partially working for other observables.
Another important observable is $`m_W`$. The theoretical prediction can be expanded similarly as we did for $`\mathrm{sin}^2\theta _W^{\mathrm{eff}}`$,
$$m_W=m_W^{\mathrm{SM}}-(0.07\text{ GeV})\mathrm{log}m_h/m_Z+(34\text{ GeV})V.$$
(7)
Plugging in $`m_h=500\text{ GeV}`$ and $`V=0.002`$ we get
$$m_W=m_W^{\mathrm{SM}}-0.12\text{ GeV}+0.07\text{ GeV}=m_W^{\mathrm{SM}}-0.05\text{ GeV}.$$
(8)
There is still a cancellation effect between heavy Higgs and light compactification scale in $`m_W`$, but it is not complete. The parameters have subtracted $`50\text{ MeV}`$ from the SM light-Higgs prediction. The measured value and the SM best-fit prediction for $`m_W`$ with $`m_h=m_Z`$ are
$$m_W^{\mathrm{expt}}=80.419\pm 0.038\text{ GeV}\quad \mathrm{and}\quad m_W^{\mathrm{SM}}=80.395\text{ GeV}.$$
(9)
Subtracting $`50\text{ MeV}`$ from $`m_W`$ is clearly a prediction that does not match the measurement very well, and the large Higgs mass starts to run into trouble in the fit to EW parameters.
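The arithmetic of the last few equations is easy to reproduce; the following short Python sketch (ours, with the reference values quoted above as inputs) chains Eqs. (4), (6) and (7) together:

```python
import math

m_Z, m_W_SM = 91.19, 80.395                 # GeV, reference values used in the text
m_h = 500.0                                 # trial Higgs mass in GeV

V = 1.2e-3*math.log(m_h/m_Z)                # Eq. (6): holds sin^2(theta_eff) at its best fit
M_c = math.pi*m_W_SM/math.sqrt(3.0*V)       # Eq. (4) inverted, for one extra dimension
dm_W = -0.07*math.log(m_h/m_Z) + 34.0*V     # Eq. (7): shift relative to m_W^SM

print(f"V    = {V:.1e}")                    # about 2.0e-3
print(f"M_c  = {M_c/1000:.1f} TeV")         # roughly 3.2-3.3 TeV, as quoted above
print(f"dm_W = {dm_W*1000:.0f} MeV")        # about -50 MeV
```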
The above example is made more rigorous by doing a complete $`\chi ^2`$ analysis of the data using the parameters of the extra dimensional theory . The result is that Higgs boson masses can be extended beyond the SM mass limit, but only up to $`500\text{ GeV}`$ at the 95% C.L. For $`m_h>500\text{ GeV}`$ the theory is not a good match to the data.
We believe that the extra-dimensional example illustrates a general lesson. That is, there are too many observables precisely measured to expect a global conspiracy cancellation between heavy Higgs effects (fundamental or composite) and other physics contributions. It may be possible to have a collection of higher-dimensional operators conspire to allow a larger Higgs mass , but these examples are not real theories, and there appears to be no motivation for choosing the considered effective Lagrangian other than to construct this cancellation. Furthermore, one can argue generally that these conspiracies of operator coefficients are unlikely .
Other specific example theories that demonstrate cancellation of the effects of a larger Higgs mass have been proposed in the literature . One such theory is the see-saw top-quark condensate model , and the cancellation occurs between a heavy composite Higgs boson and the virtual effects of a massive quark mixing with the top quark . However, as shown in , Higgs masses above $`500\text{ GeV}`$ are not allowed if theoretical consistency is to be maintained. Furthermore, ref. has demonstrated an approximate $`450\text{ GeV}`$ mass limit on conspiring Higgs composite models. It is interesting that in all the more detailed, independent studies of conspiring theories such as the extra dimensional theory , and the composite theories , the mass limit of $`m_h<500\text{ GeV}`$ survives. And, of course, the minimal supersymmetric standard model automatically predicts a light SM Higgs boson. In all these cases, the indications from the data and a wide range of theory point to a Higgs mass below $`500\text{ GeV}`$. We think this is the goal to shoot for in a high energy collider program.
## 4 Resolving the “new physics”
We have argued in the previous section that it is likely that a light Higgs boson exists and accounts for the precision EW data taken at the Z-pole and elsewhere. If its production and decay are close to those of the SM Higgs boson, both the NLC and the LHC could discover it. Nevertheless, the NLC would usher in an extraordinary era of precision Higgs boson physics that would be useful in studying the dynamics and structure of EWSB. To some, this is powerful enough reason to support an NLC program.
However, we would like to point out that there are important discovery issues surrounding a light Higgs boson. One issue that we will address in the next section is an invisibly decaying Higgs boson . This possibility is certainly not a ridiculous theoretical musing, and we as a community should make sure that it is covered experimentally. The other discovery issue we would like to discuss is the “new physics” that conspires to allow a heavier Higgs boson satisfy the precision EW data. What is the most effective way to discover the nature of such new physics?
Above we gave two concrete examples of new physics that could conspire to allow a Higgs boson up to 500 GeV. One example is a top seesaw model with one $`Y=4/3`$ fermion in addition to the SM fermions, and the other example is large extra dimensions for the gauge fields with the Higgs boson living on a $`3+1`$ dimensional wall with the fermions. We will consider each in turn.
First, we consider the top-quark seesaw model with one extra fermion with hypercharge $`4/3`$ as analyzed in ref. . This fermion participates in a condensate seesaw with another quark to produce one light eigenvalue, the top quark $`t`$ with mass $`m_t\simeq 175\text{ GeV}`$, and one heavy eigenvalue, $`\chi `$. The effects of $`\chi `$ on the precision EW analysis are such that it could conspire with a heavy composite Higgs boson to mimic the effects of a light Higgs boson. Actually, this statement is not precisely correct since varying the Higgs boson mass maps out a different path in the $`S`$-$`T`$ plane, for example, than the path generated by varying the $`\chi `$ mass. $`S`$ and $`T`$ are defined by
$`{\displaystyle \frac{\mathrm{\Pi }_{ZZ}(m_Z^2)-\mathrm{\Pi }_{ZZ}(0)}{m_Z^2}}`$ $`=`$ $`{\displaystyle \frac{\alpha (m_Z)S}{\mathrm{sin}^22\theta _W(m_Z)}}`$ (10)
$`{\displaystyle \frac{\mathrm{\Pi }_{WW}(0)}{m_W^2}}-{\displaystyle \frac{\mathrm{\Pi }_{ZZ}(0)}{m_Z^2}}`$ $`=`$ $`\alpha (m_Z)T`$ (11)
where all parameters are in the MS-bar scheme.
The SM prediction for $`S`$ and $`T`$ depends on $`m_t`$ and $`m_h`$ as well as other parameters of the theory. With the reference values $`m_t^{\mathrm{ref}}=175\text{ GeV}`$ and $`m_h^{\mathrm{ref}}=500\text{ GeV}`$ for the SM parameters we can calculate the prediction of $`S`$ and $`T`$: $`S_{\mathrm{SM}}^{\mathrm{ref}}`$ and $`T_{\mathrm{SM}}^{\mathrm{ref}}`$. The experimental best fits to $`S`$ and $`T`$ are
$`S_{\mathrm{expt}}`$ $`=`$ $`S-S_{\mathrm{SM}}^{\mathrm{ref}}=-0.13\pm 0.10`$ (12)
$`T_{\mathrm{expt}}`$ $`=`$ $`T-T_{\mathrm{SM}}^{\mathrm{ref}}=0.13\pm 0.11.`$ (13)
The $`68\%`$ and $`95\%`$ C.L. contours for this fit are given in Fig. 1. We have also put X marks on the plot for the SM prediction with $`m_h=100\text{ GeV},200\text{ GeV},\mathrm{\dots },1000\text{ GeV}`$ going from left to right. As we can see from the plot, the 95% C.L. bound on the Higgs boson in the SM is between $`200\text{ GeV}`$ and $`300\text{ GeV}`$, consistent with the value $`229\text{ GeV}`$ obtained in ref. .
The phenomenology of the one-doublet top seesaw model is very similar to the SM with one extra, massive quark $`\chi `$. If light, this quark contributes substantially to $`T`$ but very little to $`S`$. Nevertheless, one could imagine a large Higgs mass conspiring with a smaller $`\chi `$ mass to generate a good fit to the data. We demonstrate an example of this by supposing that the Higgs boson has mass of $`m_h=500\text{ GeV}`$ and the $`\chi `$ has mass of about $`5\text{ TeV}`$. Then, the shift in $`\mathrm{\Delta }T_\chi `$ can put the theory prediction well within the 95% C.L. contours of precision EW fits. This is the origin of the claim that Higgs boson masses above $`300\text{ GeV}`$ are not in conflict with the data as long as $`5\text{ TeV}\lesssim m_\chi \lesssim 7\text{ TeV}`$.
There are several lessons to learn from this example in our opinion. First, the Higgs mass in this theory is not expected to be above $`500\text{ GeV}`$ in any event. We can understand this result by correlating the Higgs mass with the Landau pole scale $`\mathrm{\Lambda }_{\mathrm{LP}}`$ where the Higgs self-coupling blows up. We plot $`\mathrm{\Lambda }_{\mathrm{LP}}`$ vs. $`m_h`$ in Fig. 2. The scale $`\mathrm{\Lambda }_{\mathrm{LP}}`$ directly correlates with other parameters in the top seesaw model, most notably the extra fermion mass which must be below $`\mathrm{\Lambda }_{\mathrm{LP}}`$ in order for condensation to occur. Therefore, knowing $`\mathrm{\Lambda }_{\mathrm{LP}}`$ enables us to determine the effects of new physics on precision electroweak observables. As shown in , the custodial symmetry violations associated with the new fermion mixing with the top quark induce a large contribution to the $`T`$ parameter proportional to $`m_Z^2/m_\chi ^2`$. The coefficient of this proportionality has been estimated, and a conservative conclusion is that no set of parameters with Higgs mass greater than $`500\text{ GeV}`$ will allow a good fit to precision electroweak data. In other words, as $`m_h`$ gets higher $`\mathrm{\Lambda }_{\mathrm{LP}}`$ gets lower, and as $`\mathrm{\Lambda }_{\mathrm{LP}}`$ gets lower $`m_\chi `$ gets lower and causes $`T`$ to be much too large to accommodate the precision electroweak data. We also note that from the discussion of limits on the coefficients of higher-dimensional operators (usually $`1/(8\text{ TeV})^2`$ for dimension six operators ), it appears unlikely that conspiracies will be effective for any theory with a Higgs mass greater than $`500\text{ GeV}`$.
The $`500\text{ GeV}`$ limit we have discussed for the top seesaw model is a specific case of the more general Chivukula-Simmons bound . This bound states that when you correlate minimal custodial symmetry breaking requirements with the Higgs Landau pole scale, the experimentally measured bound on the $`T`$ parameter implies stronger bounds on the Higgs boson than mere triviality. Note, this bound is not purely theoretical and requires the important input of precision electroweak data.
Nevertheless, a larger Higgs mass of up to $`500\text{ GeV}`$ can conspire to bring the prediction back into the allowed region in the $`S`$-$`T`$ plane. However, this is not a cancellation of the effects of a large Higgs mass. The pathway made in the $`S`$-$`T`$ plane by a variable Higgs mass is significantly different than that made by varying other parts of the theory, in this case the $`\chi `$ mass. Better precision low-energy measurements would be able to distinguish the two theories. In the plot we anticipate a reduction of errors on $`\mathrm{sin}^2\theta _W^{\mathrm{eff}}`$ and $`\mathrm{\Gamma }_Z`$ by running on the $`Z`$ pole with over $`50\text{ fb}^{-1}`$ of integrated luminosity at the NLC. Using ref. as our guide, we think it might not be unreasonable to obtain $`\mathrm{\Delta }\mathrm{sin}^2\theta _W^{\mathrm{eff}}=0.00002`$ and $`\mathrm{\Delta }\mathrm{\Gamma }_Z=1\text{ MeV}`$ at the 95% C.L. The first estimate is well-within the anticipation of ; however, the second number is a factor of 4 better than cited by . The error on $`\mathrm{sin}^2\theta _W^{\mathrm{eff}}`$ may be dominated by uncertainty in $`\alpha _{\mathrm{QED}}(m_Z)`$. It is best reduced by doing precise scans of $`e^+e^{-}\to \mathrm{hadrons}`$ at low energies, and a discussion of the experimental plans, expectations, and hopes can be found in ref. . The error on $`m_Z`$ is dominated by the uncertainty of $`\sqrt{s}`$, and $`1\text{ MeV}`$ at the 95% C.L. may not be doable. Nevertheless, even with just the $`\mathrm{sin}^2\theta _W^{\mathrm{eff}}`$ measurement, one should be able to test consistency of a Higgs mass measurement with the predictions for $`S`$ and $`T`$, and find a discrepancy, implying new states. The more precise measurement of $`\mathrm{\Delta }\mathrm{\Gamma }_Z`$ would help even more dramatically pin down the type of new physics that is compensating for the heavier Higgs boson. We also see from the graph that the heavier the Higgs boson mass, the easier it should be to unravel any conspiracy with more $`Z`$-pole data. For this reason, we encourage additional study of the $`Z`$ pole precision EW measurement capabilities at the NLC.
We also learn that although conspiracies do correlate with light “new physics” this does not guarantee that the new states will be directly produced and discovered at the next generation colliders. In the top seesaw model that we are considering now, neither the NLC nor the LHC will be able to directly observe $`5`$–$`7\text{ TeV}`$ quarks. Only high-luminosity precision $`Z`$-pole measurements would really be able to see the evidence for new physics by constraining, for example, $`S`$ and $`T`$ to be off the Higgs boson path. With a precise measurement of the Higgs boson mass, the trajectory of the new physics in the $`S`$ and $`T`$ plane could be determined. We speak in terms of $`S`$ and $`T`$ here because it is valid and useful in this example, but in a more general approach one could analyze the multidimensional space of observables with all the self-energy and vertex corrections included.
The second example is large extra dimensions. We stated in the previous section that light Kaluza-Klein modes of the gauge bosons could conspire with a large Higgs mass to satisfy EW precision data. The global $`\chi ^2`$ implied that $`m_h<500\text{ GeV}`$ is required. If a Higgs mass were discovered with mass somewhere between $`400\text{ GeV}`$ and $`500\text{ GeV}`$, then one would expect in this scenario to find gauge boson KK states starting somewhere between $`3.3\text{ TeV}`$ and $`6.6\text{ TeV}`$. This is clearly out of reach for direct detection at the NLC, but the LHC will be able to see KK excitations up to at least $`5.9\text{ TeV}`$ with $`100\text{ fb}^{-1}`$, covering a significant portion of the parameter space. On the surface this appears to be bad news for the NLC and good news for the LHC. However, the precision measurement capabilities of the NLC at high energies allow one to be sensitive to virtual, tree-level exchanges of KK states in $`e^+e^{-}\to Z^{(n)}/\gamma ^{(n)}\to f\overline{f}`$. The sensitivity to KK states using all the observables at one’s disposal at the NLC is extraordinary. A $`600\text{ GeV}`$ NLC with $`50\text{ fb}^{-1}`$ can see the effects of KK excitations well above $`10\text{ TeV}`$ . Furthermore, one can show that the LHC will have an extremely difficult time resolving the degenerate KK modes from an “ordinary” $`Z^{\prime }`$ gauge boson. However, the NLC can resolve the difference .
Also, additional “new physics” discoveries might be possible only through production and subsequent decay of Higgs bosons. This could be the case if a Higgs boson decays into neutrinos or graviscalars in extra dimensions. Probably any decay mode of even the “heavier” Higgs bosons of conspiracy theories would still allow discovery. They are usually produced in the traditional ways, in $`Z/W+h`$ associated production, $`gg\to h`$, and $`WW\to h`$. Below the top threshold they decay mainly to $`WW`$ and $`ZZ`$ and invisibly, and above the top threshold mainly to $`t\overline{t}`$ and possibly invisibly. At LHC they will be produced, but detecting them may be very difficult, particularly if the mass is above the top threshold. The invisibly decaying Higgs boson will be discussed in somewhat more detail in the next section. NLC has a significant advantage here.
In short, both theories discussed have been touted as explicit realizations of a conspiracy to accommodate a heavier Higgs boson in precision EW data. However, both theories upon close examination prefer the Higgs boson not be heavier than $`500\text{ GeV}`$, kinematically accessible to a $`600\text{ GeV}`$ NLC. Furthermore, and perhaps most importantly, any new physics that contributes to the conspiracy is more likely to be discernible at the NLC than the LHC.
We have said very little about supersymmetry in this paper, even though we have more confidence in its relevance to nature than the other theories discussed. Supersymmetry has been well-established to predict a light Higgs boson in the spectrum, easily accessible at the NLC. The “new physics” of supersymmetry are the superpartners. Unlike many other ideas, supersymmetry is a well-defined, perturbative gauge theory, and it is possible to rationally study issues such as fine-tuning of electroweak symmetry breaking . Numerous studies are in agreement that at least some superpartners should be less than a few hundred GeV. Furthermore, if the lightest supersymmetric partner is stable then cosmological constraints generally, but not always, imply upper bounds on superpartners accessible at the NLC . In our view it is rather obvious that supersymmetry enthusiasts would support the NLC, if for no other reason than to study the Higgs boson properties carefully. Our discussion above is meant for those who worry about a broader perspective.
## 5 Invisibly decaying Higgs boson is more pressing concern
In discussing the NLC capabilities of discovering and studying EWSB, one is often led to comparing with the LHC. The discussions usually begin with noting that kinematically accessible states will be studied very effectively at the NLC and decay branching fractions and production cross-sections will be measured to impressive accuracy. However, at this stage most of us are concerned with discovery, and so it is frequently brought up that the NLC will have a hard time with strongly coupled EWSB theories, where resonances at perhaps several TeV would be the only experimental indication that the $`W_LW_L`$ scattering cross-section is being unitarized. This has traditionally been implicitly thought of as the best example of a “problematic non-SM-like EWSB signal that must be covered”. The NLC typically struggles in this analysis.
However, we feel that the “metric” on all possible beyond-the-SM theories is grossly distorted by contrasting different collider’s ability to discover and study either a SM-like light Higgs boson or difficult multi-TeV resonance signals. There are many more discovery issues than strongly coupled EWSB sectors. And, from our discussion in the introduction and the previous section, we believe that the relevance of multi-TeV resonance signals has diminished dramatically given the data collected on the $`Z`$ pole over the last ten years.
We would therefore like to emphasize other potential discovery issues for Higgs bosons. There are many possible discovery challenges for even light Higgs boson(s). Perhaps the most important “problematic non-SM-like EWSB signal that must be covered” is an invisibly decaying Higgs boson. There are several well-motivated theoretical reasons why a Higgs boson may preferentially decay into invisible, non-interacting states. These accessible decay modes certainly do not need to affect the Higgs couplings to gauge bosons or SM fermions, and so precision EW observable analyses would follow through just as for the SM Higgs boson. However, the decays will cause problems for the detectability of the Higgs boson itself. If we want to discover and study the Higgs bosons we should analyze carefully the prospects for discovering this rather difficult possibility. Similarly, the Higgs may decay invisibly only part of the time, which actually could make discovery more difficult at both NLC and LHC.
Furthermore, within the context of supersymmetry, there are many additional ways that Higgs boson detectability could be a major challenge to high-energy colliders. For example, a complex Higgs sector, with many physical light Higgs states may escape all detection at the LHC, and also be a challenge at the NLC. However, with sufficient luminosity, a $`500\text{ GeV}`$ NLC should be guaranteed to see a Higgs boson signal .
Returning to the single invisibly decaying Higgs case, there have been several analyses evaluating discovery possibilities at hadron colliders and lepton colliders. First, LEPII collaborations have published searches for such states and generally get limits excluding $`m_{h_{\mathrm{inv}}}<99\text{ GeV}`$, assuming SM strength coupling to the $`Z`$ boson and $`100\%`$ branching fraction into invisible final states. Future runs at LEPII will not go much beyond this number. Nevertheless, the limit is very close to the kinematic edge $`\sqrt{s}-m_Z`$ from $`e^+e^{-}\to h_{\mathrm{inv}}Z`$. The Tevatron presently has no meaningful limits on the invisibly decaying Higgs boson. With over $`30\text{ fb}^{-1}`$ it may be possible to observe $`h_{\mathrm{inv}}`$ at the $`3\sigma `$ level if its mass is below $`125\text{ GeV}`$ . At the LHC, analyses indicate that $`m_{h_{\mathrm{inv}}}`$ may be probed up to $`200\text{ GeV}`$ with $`100\text{ fb}^{-1}`$. It would be worthwhile to redo the LHC analyses to take into account our current knowledge of SM particle properties (parton distributions, top quark mass, etc.) and the current expectations for detector parameters, such as particle identification, tagging and energy resolution.
An NLC analysis of the invisibly decaying Higgs boson indicates that it can be probed very close to the kinematic limit of $`\sqrt{s}-m_Z`$. Again, this is the general expectation for an $`e^+e^{-}`$ collider with a beam constraint to search for peaks in the missing invariant mass spectrum. We expect that shared branching fractions into invisible states and SM states will be measured effectively at the NLC as well. Nevertheless, we encourage a detailed study on this important discovery issue, and think that a comparison between NLC and LHC for invisibly decaying Higgs boson searches is more appropriate than comparing capabilities for discovery of very large invariant mass resonances of strongly coupled EWSB.
## 6 Impact on future experiment: seeing through the many ideas
In summary, we have argued that a broad view of possible beyond-the-SM theories combined with the accumulated data of ten years at LEP and SLC indicate that we should expect a Higgs boson with mass less than $`500\text{ GeV}`$. This presents many important discovery issues. The most notable of these issues is how to discover an invisibly decaying Higgs boson in this mass range. Other issues arise if the Higgs boson mass turns out to be at the upper end of this allowed range, having conspired with other “new physics” contributions to satisfy the current EW data. In both cases studied here, top seesaw model with an extra quark and large extra dimensions with KK gauge bosons at several TeV, the states are best resolved by precision measurements at the NLC running on the $`Z`$ pole and at higher energy, $`\sqrt{s}=600\text{ GeV}`$. Combining the results from these measurements it is possible to observe the heavier scalar and either the associated heavy quark or KK excitations of the gauge bosons. We think this result is probably general.
Finally, we emphasize what should be an obvious point to most: nothing is metaphysically certain. Certainty about what we may or may not find at the next collider has never been a part of the high-energy physics frontier. Our results are not theorems, but they are robust indications about what to bet on if one wants to pursue the most likely directions for progress in our field. If we knew what we were going to find there would be no reason to build colliders. Nevertheless, we think that the last decade of experimental physics is paying off and is providing us important clues that an NLC running at $`\sqrt{s}\lesssim 600\text{ GeV}`$ will be rewarding. The NLC will vastly improve our chances of finding the origin of EWSB, and then would enable extraordinary precision EWSB measurements.
# Anomalies in the 𝑎𝑏-plane resistivity of strongly underdoped La2-xSrxCuO4 single crystals: possible charge stripe ordering?
## ACKNOWLEDGEMENTS
Many thanks are due to A. Rigamonti for the useful discussions. This work has been done under the Advanced Research Project SPIS of the Istituto Nazionale per la Fisica della Materia (INFM).
# Distribution of consecutive waves in the sandpile model on the Sierpinski gasket
## 1 Introduction
Sandpile models form the paradigmatic examples of the concept of self-organised criticality (SOC) . This is the phenomenon in which a slowly driven system with many degrees of freedom evolves spontaneously into a critical state, characterised by long range correlations in space and time.
In the past decade much progress has been made in the theoretical understanding of sandpile models. This is especially true for the Bak-Tang-Wiesenfeld (BTW) model , where, following the original work of Dhar , a mathematical formalism was developed \[4–8\] that allows an exact calculation of several properties of the model such as height probabilities , the upper critical dimension and so on.
Despite all this work, it has however not been possible yet to give a full and exact characterisation of the scaling properties of the avalanches in the BTW-model. In recent years it has become increasingly clear that, especially in two dimensions, avalanches are to be described by a full multifractal set of scaling exponents . This spectrum of exponents has been calculated with high numerical precision, but at this moment there is no clue how it can be determined by an analytical approach.
Avalanches can be decomposed into simpler objects called waves . The probability distribution of waves seems to obey simple scaling and the exponent describing that scaling is known exactly, both for the general wave and for the last wave of each avalanche. Since in dimensions $`d\ge d_c=4`$, multiple topplings are extremely rare, it is to be expected that in these situations, wave and avalanche statistics obey the same scaling properties.
More recently, the distribution of two consecutive waves has received considerable attention . The conditional probability that the $`k+1`$-th wave has size $`s_{k+1}`$ given that the previous wave had size $`s_k`$, $`P(s_{k+1}|s_k)`$, is the first quantity to study when one is interested in correlation effects in waves. It is these correlations that make waves and avalanches different. Paczuski and Boettcher proposed, on the basis of extensive simulations, that $`P(s_{k+1}|s_k)`$ has a scaling form
$`P(s_{k+1}|s_k)\sim s_{k+1}^{-\beta }F({\displaystyle \frac{s_{k+1}}{s_k}})`$ (1)
where for large $`x`$, $`F(x)\sim x^{-r}`$, while $`F(x)\to \text{constant}`$ for $`x\to 0`$. Numerical estimates for these exponents in $`d=2`$ are $`\beta \simeq 3/4`$, $`r\simeq 1/2`$. At this moment, no exact values for these exponents are known. In a very recent work, Hu et al. study the ‘backward’ conditional probability $`P(s_k|s_{k+1})`$ and find that it obeys a similar scaling law
$`P(s_k|s_{k+1})\sim s_k^{-\overline{\beta }}\overline{F}({\displaystyle \frac{s_k}{s_{k+1}}})`$ (2)
where for large $`x`$, $`\overline{F}(x)\sim x^{-\overline{r}}`$, while $`\overline{F}(x)\to \text{constant}`$ for $`x\to 0`$. These authors give arguments that show that
$`\overline{\beta }+\overline{r}=\tau _{lw}`$ (3)
where $`\tau _{lw}`$ is the scaling exponent describing the size distribution of the last wave. The relation (3) is consistent with the numerical data for the square lattice where both Euclidean and fractal dimensions of waves are $`2`$. At the same time, it is desirable to get an independent verification of (3) using lattices of different dimensions.
In the present paper we study the properties of waves on the Sierpinski gasket, continuing previous work . Using the methods of analysis introduced in , we obtain precise numerical estimates for the exponents $`\tau _{lw},\beta ,r,\overline{\beta }`$ and $`\overline{r}`$. Also in this case equation (3) seems to be well satisfied, which indicates that short time correlations in waves admit an analytical treatment.
## 2 The sandpile model on a Sierpinski gasket
The BTW sandpile model can be defined on any graph, but for definiteness we will introduce it in the context of the Sierpinski gasket (see figure 1). Each vertex (apart from the three boundary sites) of this graph has four nearest neighbours. To each such vertex $`i`$ we associate a height variable $`z_i`$ which can take on any positive integer value. We also introduce a critical height $`z_c`$, which we will take equal to four for all vertices. The number of sites in the lattice, $`N`$, is trivially related to the number of iterations $`n`$ used in constructing the fractal. The dynamics is defined as follows. On a very slow time scale we drop sand at a randomly selected site and thereby increase the height variable by one: $`z_i\to z_i+1`$. When at a given site, $`z_i>z_c`$, that site becomes unstable and topples
$`z_j\to z_j-\mathrm{\Delta }_{i,j}`$
where
$`\mathrm{\Delta }_{i,j}=\begin{array}{cc}\hfill 4& i=j\hfill \\ \hfill -1& \text{if }i\text{ and }j\text{ are nearest neighbours}\hfill \\ \hfill 0& \text{otherwise}\hfill \end{array}`$ (7)
Through toppling, neighbouring sites can become unstable, topple themselves, create new unstable sites, and so on. This avalanche of topplings proceeds on a very fast time scale and no new grains of sand are added before the avalanche is over. Sand can leave the system when a boundary site topples. An avalanche is over when all sites are stable again.
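A minimal implementation of these rules is straightforward; the sketch below (our own Python code, not taken from the literature) relaxes an unstable configuration on an arbitrary graph given as adjacency lists, with dissipation occurring automatically at sites that have fewer than four listed neighbours:

```python
import random

def relax(z, neighbours, z_c=4):
    """Topple until all sites are stable; z is modified in place.
    Returns the number of topplings at each site (the avalanche)."""
    n = len(z)
    topplings = [0]*n
    stack = [i for i in range(n) if z[i] > z_c]
    while stack:
        i = stack.pop()
        if z[i] <= z_c:          # may have been pushed more than once
            continue
        z[i] -= 4                # grains sent to missing neighbours are lost
        topplings[i] += 1
        for j in neighbours[i]:
            z[j] += 1
            if z[j] > z_c:
                stack.append(j)
        if z[i] > z_c:
            stack.append(i)
    return topplings

def add_grain(z, neighbours, rng=random):
    """One slow-time-scale step: drop a grain at a random site and relax."""
    i = rng.randrange(len(z))
    z[i] += 1
    return i, relax(z, neighbours)
```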
For further reference it is also necessary to introduce the matrix $`G`$, called the lattice Green function, which is the inverse of $`\mathrm{\Delta }`$.
It is not difficult to see that the order in which unstable sites are toppled does not influence the stable configuration which is obtained when the avalanche is over. This Abelian nature of the sandpile model allows the introduction of the concept of waves, defined in the following way. Suppose that an avalanche starts at a site $`i_0`$ and that after a few topplings $`i_0`$ becomes unstable again. One can then forbid $`i_0`$ to topple again and continue with the toppling of other unstable sites until all of them are stable. It is easy to show that in such a sequence all sites topple at most once. This set of topplings is called the first wave. Next, we topple the site $`i_0`$ for the second time. If after some topplings it becomes unstable again, we keep it fixed, and topple all the other unstable sites. This set of topplings constitutes the second wave. We continue in this way until we finally reach a stable configuration. We can in this way decompose any avalanche into a set of waves. The probability distribution $`P_w(s)`$ that an arbitrary wave involves $`s`$ topplings obeys simple scaling
$`P_w(s)\sim s^{-\tau _w}`$ (8)
(For the moment we neglect finite size effects which will be taken into account in section 4.)
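The wave decomposition is equally easy to implement on top of the toppling sketch above; the routine below (again our own code) freezes the seed site $`i_0`$ during each wave and records the wave sizes:

```python
def waves(z, neighbours, i0, z_c=4):
    """Decompose the avalanche started at i0 into waves; returns the wave sizes."""
    sizes = []
    while z[i0] > z_c:
        z[i0] -= 4                      # i0 topples exactly once per wave
        size = 1
        stack = []
        for j in neighbours[i0]:
            z[j] += 1
            if z[j] > z_c:
                stack.append(j)
        while stack:                    # relax everything except i0
            i = stack.pop()
            if i == i0 or z[i] <= z_c:  # keep i0 frozen during the wave
                continue
            z[i] -= 4
            size += 1
            for j in neighbours[i]:
                z[j] += 1
                if z[j] > z_c:
                    stack.append(j)
        sizes.append(size)
    return sizes
```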
Because in a wave, sites topple at most once, waves are simpler objects to analyse than avalanches. Without going into details we summarize the following important properties of waves
* There is a one-to-one correspondence between waves and the two-rooted spanning trees on a graph which consists of the Sierpinski gasket and one extra site called the sink. The sink is connected with two edges to each of the three boundary sites of the Sierpinski gasket.
* The element $`G_{ij}`$ of the Green function is given by the ratio of the number of two-rooted spanning trees (in which $`i`$ and $`j`$ are in the same subtree) to the number of one-rooted spanning trees. Moreover $`G_{ij}`$ is also equal to the expected number of topplings at site $`j`$ when a grain of sand was dropped at site $`i`$, which is proportional to the probability that a wave started at $`i`$ reaches $`j`$.
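The identity $`G=\mathrm{\Delta }^{-1}`$ and its interpretation as expected topplings are easy to illustrate numerically. The toy sketch below is our own code; the 5-site chain is only for demonstration and is not the gasket, where interior sites have four neighbours so that dissipation occurs only at the boundary:

```python
import numpy as np

def green_function(neighbours):
    """G = Delta^{-1} for a graph given as adjacency lists (diagonal fixed at 4)."""
    n = len(neighbours)
    Delta = 4.0*np.eye(n)
    for i, nbrs in enumerate(neighbours):
        for j in nbrs:
            Delta[i, j] -= 1.0
    return np.linalg.inv(Delta)

chain = [[1], [0, 2], [1, 3], [2, 4], [3]]   # toy open chain
G = green_function(chain)
print(G[2].round(4))   # row i: expected topplings at each j per grain dropped at i
```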
From these results, it follows that $`P_w(R)=dG(R)/dR`$ where $`R`$ is the linear size of the wave. The asymptotic behaviour of the Green function on an arbitrary lattice is
$`G(R)\sim R^{d_w-d_f}`$
where $`d_f`$ is the fractal dimension of the lattice, and $`d_w`$ the dimension of a random walk on the lattice. Therefore, $`P_w(R)\sim R^{d_w-d_f-1}`$. From the definition of fractal dimension, $`s\sim R^{d_f}`$, we finally obtain
$`P_w(s)\sim s^{d_w/d_f-2}`$ (9)
so that
$`\tau _w=2-{\displaystyle \frac{d_w}{d_f}}`$ (10)
a result first derived in . For the particular case of the Sierpinski gasket, $`d_f=\mathrm{log}3/\mathrm{log}2,d_w=\mathrm{log}5/\mathrm{log}2`$, so that $`\tau _w=\mathrm{log}(9/5)/\mathrm{log}3\simeq 0.535`$, a result which is nicely consistent with the available numerical data.
The properties of the last wave in a given avalanche are of special interest for us (see section 3). The probability distribution $`P_{lw}(s_{lw})`$ that the last wave has $`s_{lw}`$ topplings obeys the scaling law
$`P_{lw}(s_{lw})\sim s_{lw}^{-\tau _{lw}}`$ (11)
From the definition of waves it follows that the last wave has the property that the site $`i_0`$ is on the boundary of the wave. Let us denote by $`d_B`$ the fractal dimension of the boundary of an arbitrary wave. The number of points on the boundary of a wave of size $`s`$ is then of order $`s^{d_B/d_f}`$. Hence, the probability that a given site is on the boundary of a wave of size $`s`$ is proportional to $`s^{d_B/d_f-1}`$, which should also be proportional to $`P_{lw}(s_{lw})/P_w(s)`$. Using (6) we immediately obtain
$`\tau _{lw}=3{\displaystyle \frac{d_w+d_B}{d_f}}`$ (12)
In two dimensions, $`d_w=2`$ and $`d_B=z=5/4`$ where $`z`$ is the fractal dimension of the chemical path on a spanning tree. One thus finds $`\tau _{lw}=11/8`$ . On a fractal the relation between $`d_B`$ and $`z`$ may be more complicated. In fact, it was shown in that for a deterministic fractal
$`d_B=z-d_w+d_f`$ (13)
so that, from (12) we obtain
$`\tau _{lw}=2-{\displaystyle \frac{z}{d_f}}`$ (14)
The exponent $`z`$ on the Sierpinski gasket was also calculated in with the result
$`z=\mathrm{log}[(20+\sqrt{205})/15]/\mathrm{log}2`$. Thus one finally obtains for the case of the Sierpinski gasket
$`\tau _{lw}={\displaystyle \frac{\mathrm{log}[135/(20+\sqrt{205})]}{\mathrm{log}3}}\approx 1.247`$ (15)
In section 4, we will present numerical estimates for $`\tau _{lw}`$ that are fully consistent with this prediction <sup>1</sup><sup>1</sup>1The derivation of $`\tau _{lw}`$ in used implicitly that the graph is self-dual, which is correct for the square lattice but not for the Sierpinski gasket. This error was pointed out by one of us (VBP). The correct result is that given in (15).
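The exponent predictions above reduce to elementary arithmetic; a quick numerical check in Python:

```python
from math import log, sqrt

d_f = log(3) / log(2)                    # fractal dimension of the gasket
d_w = log(5) / log(2)                    # random-walk dimension
z = log((20 + sqrt(205)) / 15) / log(2)  # chemical-path exponent

tau_w = 2 - d_w / d_f                    # Eq. (10): 0.5350...
tau_lw = 2 - z / d_f                     # Eq. (14): 1.2468...
print(tau_w, tau_lw)
```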
## 3 Distribution of consecutive waves
In order to characterise the statistical properties of waves more completely, it is necessary to go beyond the description on the basis of the distribution $`P_w(s)`$ only. Following the work of Paczuski and Boettcher we now turn to a study of the conditional probability $`P(s_{k+1}|s_k)`$ where $`s_k`$ is the size of the $`k`$-th wave. If the size of consecutive waves is a Markov process, this conditional probability is sufficient to describe the evolution of wave sizes. In figure 2.a, we show our data for this conditional probability for Sierpinski gaskets with $`n=9`$ ($`N=29526`$). All our data were obtained by studying at least $`1000\times N`$ avalanches. The figure shows the best fit of our data to the scaling form (1) proposed by Paczuski and Boettcher. Unfortunately, it is not possible to obtain very accurate estimates for the exponents $`\beta `$ and $`r`$ in this way. We will come back to this issue in the next section.
Recently, it was pointed out that the ‘backward’ conditional probability $`P(s_k|s_{k+1})`$ is also of interest because it is possible to relate the exponents $`\overline{\beta }`$ and $`\overline{r}`$ (see (2)) to $`\tau _{lw}`$. In figure 2.b we present our data for this quantity, again for the case $`n=9`$.
We now repeat briefly the argument given in . We begin by rewriting (2) in a normalised form
$`P(s_k|s_{k+1})\sim \left({\displaystyle \frac{s_k}{s_{k+1}}}\right)^{-\overline{\beta }}\overline{F}\left({\displaystyle \frac{s_k}{s_{k+1}}}\right)s_{k+1}^{-1}`$ (16)
Let us next consider the situation in which $`s_k\gg s_{k+1}`$ so that the argument of $`\overline{F}`$ in (16) is large. In that case the $`(k+1)`$-th wave must have a non-empty intersection with the boundary of the previous wave (see figure 3.a). Indeed, assume the opposite, so that the $`(k+1)`$-th wave covers a small region inside the much bigger $`k`$-th wave (figure 3.b). But such a situation is forbidden, since all sites inside the $`k`$-th wave return to their original height after the wave has passed. The $`(k+1)`$-th wave must therefore follow the motion of the previous wave until it hits the boundary of the $`k`$-th wave, from where it can follow a different evolution. Therefore, the situation of figure 3.b cannot occur.
Then, consider figure 3.a on a coarse grained scale by performing a rescaling of the order of $`R_{k+1}`$, the linear size of the $`(k+1)`$-th wave. On that scale, the geometry of figure 3.a resembles that of last waves, with the $`(k+1)`$-th wave playing the role of the origin of the avalanche, and the $`k`$-th wave that of the last wave. Hence we arrive at the conclusion that in the limit $`s_k\gg s_{k+1}`$, the distribution of $`s_k`$ coincides with that of the last wave. From (16), (11) and the asymptotic behaviour of $`\overline{F}`$, the equality (3) then follows.
Once this result has been obtained it is possible to obtain also a relation for the exponents $`\beta `$ and $`r`$ that appear in the scaling form (1). The joint distribution $`P(s_k,s_{k+1})`$ can be written in two ways using either the forward or backward conditional probability
$`P(s_k,s_{k+1})`$ $`=`$ $`P(s_k|s_{k+1})P_w(s_{k+1})`$
$`=`$ $`P(s_{k+1}|s_k)P_w(s_k)`$
We then insert (16), (8) and a properly normalised version of (1) and get
$`\left({\displaystyle \frac{s_k}{s_{k+1}}}\right)^{-\overline{\beta }}\overline{F}\left({\displaystyle \frac{s_k}{s_{k+1}}}\right)s_{k+1}^{-1-\tau _w}\sim \left({\displaystyle \frac{s_{k+1}}{s_k}}\right)^{-\beta }F\left({\displaystyle \frac{s_{k+1}}{s_k}}\right)s_k^{-1-\tau _w}`$ (17)
In the case $`s_k\gg s_{k+1}`$ we insert the proper limiting behaviours of the functions $`F`$ and $`\overline{F}`$, and immediately obtain, using (3)
$`\beta =1+\tau _w-\tau _{lw}`$ (18)
A final equality between exponents can be obtained by investigating the limit $`s_k\ll s_{k+1}`$ in (17). Inserting the appropriate scaling behaviours one obtains
$`\beta +r=1+\tau _w-\overline{\beta }`$ (19)
In $`d=2`$, (18) leads to the predictions $`\beta =5/8`$ and $`\overline{\beta }+\overline{r}=11/8`$. The value of $`\beta `$ is not too far from the numerical estimate $`\beta \approx 3/4`$ reported by , while in numerical evidence is presented that is in agreement with the prediction (3). In the following section, we investigate the situation on the Sierpinski gasket.
## 4 Numerical results
In order to analyse our data we have used the method introduced in , in which one investigates the moments $`\langle s^q\rangle `$ of the distribution $`P(s,L)`$, where we now explicitly take into account the size $`L`$ of the system. In our case, $`L=2^n`$. If $`P(s,L)`$ has a simple scaling form
$`P(s,L)\sim s^{-\tau }H(s/L^{d_f})`$ (20)
these moments should be proportional to simple powers of $`L`$, $`\langle s^q\rangle \sim L^{\sigma (q)}`$, where $`\sigma (q)=d_f(1-\tau +q)`$ for $`q>\tau -1`$, and $`\sigma (q)=0`$ for $`q\le \tau -1`$.
In principle, an analysis of the moments is most instructive when one is interested in the presence of possible multifractal scaling (instead of simple scaling). In that case, the function $`\sigma (q)`$ becomes nonlinear. However, even in the absence of multifractality, this kind of analysis has many advantages. In the case of the Sierpinski gasket, the probability distributions for the size of avalanches, waves and last waves show strong oscillations superposed on the pure power laws (see the figures 2, 4 and 5 in ). This is a consequence of the discrete scale invariance of the system. These oscillations make the determination of the scaling exponents a hard task. The moments $`\langle s^q\rangle `$ have the advantage that they are averages over the distribution, and hence the effects of the oscillations almost completely disappear. If one then assumes simple scaling, as we know is correct for waves , one can further reduce any remaining fluctuations by fitting the values of $`\sigma (q)`$ (for $`q`$ large enough) to a straight line. The slope of the line should equal $`d_f`$, and the intersection with the $`q`$-axis gives $`\tau -1`$.
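A minimal sketch of this moment analysis, under the assumption that the wave sizes sampled at each system size $`L`$ are available as arrays (the container layout and the cutoff `qmin` are our choices):

```python
import numpy as np

def sigma_of_q(samples_by_L, qs):
    """Estimate sigma(q) from <s^q> ~ L^sigma(q) by a log-log fit over L."""
    Ls = np.array(sorted(samples_by_L))
    sigma = []
    for q in qs:
        logm = [np.log(np.mean(np.asarray(samples_by_L[L], float) ** q))
                for L in Ls]
        slope, _ = np.polyfit(np.log(Ls), logm, 1)
        sigma.append(slope)
    return np.array(sigma)

def fit_tau(qs, sigma, qmin=1.5):
    """Fit sigma(q) = d_f (1 - tau + q) for q >= qmin; returns (d_f, tau)."""
    mask = np.asarray(qs) >= qmin
    d_f, intercept = np.polyfit(np.asarray(qs)[mask], sigma[mask], 1)
    return d_f, 1.0 - intercept / d_f
```

The fitted slope then estimates $`d_f`$, and the intercept yields $`\tau `$, exactly as described in the text.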
We tested this method of analysis for the last wave. We performed extensive simulations for Sierpinski gaskets with $`5\le n\le 9`$. From the statistics of the last waves we could then estimate $`\sigma _{lw}(q)`$. The results are shown in figure 4. The curvature at low $`q`$ is a finite size effect. From a fit of our data in the regime $`q\ge 1.5`$ we obtain the estimate $`\tau _{lw}=1.28\pm 0.03`$. This is very close to the exact result which we obtained in section 2 <sup>2</sup><sup>2</sup>2This value of $`\tau _{lw}`$ is also much lower than that obtained in from a direct fit to (11). We are currently performing a multifractal analysis of data for the size and the area of avalanches in the BTW (and in a stochastic sandpile) model, on the Sierpinski gasket. The results will be published elsewhere.
We have followed a similar scheme of analysis for all the other exponents introduced in section 3. Take as a concrete example the sum of exponents $`\overline{\beta }+\overline{r}`$. This can be obtained as follows. For $`s_k\gg s_{k+1}`$, $`P^+(s_k)\equiv P(s_k|s_{k+1})\sim s_k^{-\overline{r}-\overline{\beta }}`$. Instead of analysing this distribution itself we look at the moments of $`P^+(s_k)`$, which are expected to scale as $`L^{\sigma ^+(q)}`$. A plot of $`\sigma ^+(q)`$ is shown in figure 5. From an analysis of the data in this figure we obtain the estimate $`\overline{\beta }+\overline{r}=1.24\pm 0.01`$, which is very clearly consistent with the prediction (3).
By analysing the small $`s_k`$ behaviour of $`P(s_k|s_{k+1})`$ in a similar way, we can estimate $`\overline{\beta }=0.16\pm 0.05`$.
Continuing in this way for the forward conditional probability $`P(s_{k+1}|s_k)`$, we find from the large $`s_{k+1}`$-behaviour $`\beta +r=1.30\pm 0.01`$, while from the data for small $`s_{k+1}`$ we finally obtain $`\beta =0.35\pm 0.05`$ (see figure 6). This numerical result is not too different from the prediction following from (18), which gives $`\beta =0.288`$.
Finally note that also the relation (19) is rather well satisfied.
## 5 Conclusions
In this paper we investigated the properties of waves in the sandpile model on a Sierpinski gasket. We gave predictions for the exponent describing the last wave in an avalanche and for the scaling exponents occurring in forward and backward conditional probabilities for consecutive waves. These predictions were tested by extensive simulations and were found to be in good agreement with the numerics.
Results such as those shown in figure 5 and figure 6 also show no clear evidence for any multifractality which would show up as a curvature in the plots of $`\sigma (q)`$ for big enough $`q`$ ().
We are currently investigating the presence of multifractality of avalanches on the Sierpinski gasket. If, as is the case in $`d=1`$ () and in $`d=2`$ (), such multifractality shows up, the interesting question arises how such a phenomenon can be built up on the avalanche level when it is absent at the level of waves and consecutive waves.
Acknowledgement One of us (VBP) thanks the Limburgs Universitair Centrum for hospitality.
# Phase changes in 38 atom Lennard-Jones clusters. I: A parallel tempering study in the canonical ensemble
## I Introduction
Because the properties of molecular aggregates impact diverse areas ranging from nucleation and condensation to heterogeneous catalysis, the study of clusters has continued to be an important part of modern condensed matter science. Clusters can be viewed as an intermediate phase of matter, and clusters can provide information about the transformation from finite to bulk behavior. Furthermore, the potential surfaces of clusters can be complex, and many clusters are useful prototypes for studying other systems having complex phenomenology.
The properties of small clusters can be unusual owing to the dominance of surface rather than bulk atoms. A particularly important and well studied example of a property that owes its behavior to the presence of large numbers of surface atoms is cluster structure. The structure of clusters can differ significantly from the structure of the corresponding bulk material, and these differences in structure have implications about the properties of the clusters. For example, most small Lennard-Jones (LJ) clusters have global potential surface minima that are based on icosahedral growth patterns. The five-fold symmetries of these clusters differ substantially from the closest-packed arrangements observed in bulk materials.
While most small Lennard-Jones clusters have geometries based on icosahedral core structures, there can be exceptions. A notable example is the 38-atom Lennard-Jones cluster \[LJ<sub>38</sub>\]. This cluster is particularly interesting owing to its complex potential surface and associated phenomenology. The potential surface for LJ<sub>38</sub> has been described in detail by Doye, Miller and Wales who have carefully constructed the disconnectivity graph for the system using information garnered from basin hopping and eigenvector following studies of the low energy potential minima along with examinations of the transition state barriers. The general structure of this potential surface can be imagined to be two basins of similar energies separated by a large energy barrier with the lowest energy basin being significantly narrower than the second basin. Striking is the global minimum energy structure for LJ<sub>38</sub> which, unlike the case for most small Lennard-Jones clusters, is not based on an icosahedral core, but rather is a symmetric truncated octahedron. The vertices defined by the surface atoms of LJ<sub>38</sub> have a morphology identical to the first Brillouin zone of a face centered cubic lattice, and the high symmetry of the cluster may account for its stability. It is interesting to note that recent experimental studies of nickel clusters using nitrogen uptake measurements have found the global minimum of Ni<sub>38</sub> to be a truncated octahedron as well. The basin of energy minima about the global minimum of LJ<sub>38</sub> is narrow compared to the basin about the next highest energy isomer which does have an icosahedral core. The difference in energy between the global minimum and the lowest minimum in the icosahedral basin is only 0.38% of the energy of the global minimum.
Characteristic of some thermodynamic properties of small clusters are ranges of temperature over which these properties change rapidly in a fashion reminiscent of the divergent behavior known to occur in bulk phase transitions at a single temperature. The rapid changes in such thermodynamic properties for clusters are not divergent and occur over a range of temperatures owing to the finite sizes of the systems. In accord with the usage introduced by Berry, Beck, Davis and Jellinek we refer to the temperature ranges where rapid changes occur as “phase change” regions, rather than using the term “phase transition,” that is reserved for systems at the thermodynamic limit. As an example LJ<sub>55</sub> displays a heat capacity anomaly over a range in temperatures often associated with what has been termed “cluster melting.” Molecular dynamics and microcanonical simulations performed at kinetic temperatures in the melting region of LJ<sub>55</sub> exhibit van der Waals type loops in the caloric curves and coexistence between solid-like and liquid-like forms.
In recent studies, Doye, Wales and Miller and Miller, Doye and Wales have examined the phase change behavior of LJ<sub>38</sub>. These authors have calculated the heat capacity and isomer distributions as a function of temperature using the superposition method. In the superposition method the microcanonical density of states is calculated for each potential minimum, and the total density of states is then constructed by summation with respect to each local density of states. Because it is not possible to find all potential minima for a system as complex as LJ<sub>38</sub>, the summation is augmented with factors that represent the effective weights of the potential minima that are included in the sum. The superposition method has also been improved to account for anharmonicities and stationary points. For LJ<sub>38</sub> Doye et al. have identified two phase change regions. The first, accompanied by a heat capacity maximum, is associated with a solid-to-solid phase change between the truncated octahedral basin and the icosahedral basin. A higher temperature heat capacity anomaly represents the solid-liquid coexistence region, similar to that found in other cluster systems. The heat capacity anomaly associated with the melting transition in LJ<sub>38</sub> is steeper and more pronounced than the heat capacity peak that Doye et al. have associated with the solid-solid transition. Because the weights that enter in the sum to construct the microcanonical density of states are estimated, it is important to confirm the findings of Doye et al. by detailed numerical simulation. Such simulations are a goal of the current work and its companion paper. As is found in Section III, the simulations provide a heat capacity curve for LJ<sub>38</sub> that has some qualitative differences with the curve reported by Doye et al.
Owing to the complex structure of the potential surface of LJ<sub>38</sub>, the system represents a particularly challenging case for simulation. It is well known that simulations of systems having more than one important region of space separated by significant energy barriers can be difficult. The difficulties are particularly severe if any of the regions are either narrow or reachable only via narrow channels. The narrow basin about the global minimum makes simulations of LJ<sub>38</sub> especially difficult. There are several methods that have been developed that can prove to be useful in overcoming such ergodicity difficulties in simulations. Many of these methods use information about the underlying potential surface generated from simulations on the system using parameters where the various regions of configuration space are well-connected. One of the earliest of these methods is J-walking where information about the potential surface is obtained from simulations at high temperatures, and the information is passed to low temperature walks by jumping periodically to the high temperature walk. Closely allied with J-walking is the parallel tempering method where configurations are exchanged between walkers running at two differing temperatures. A related approach, similar in spirit to J-walking, uses Tsallis distributions that are sufficiently broad to cover much of configuration space. Another recent addition to these methods is the use of multicanonical distributions in the jumping process. Multicanonical walks are performed using the entropy of the system, and multicanonical distributions are nearly independent of the energy thereby allowing easy transitions between energy basins. As we discuss in the current work, we have found the parallel tempering method to be most useful in the context of simulations of LJ<sub>38</sub>. A comparative discussion of some of the methods outlined above is given later in this paper.
In the current work we apply parallel tempering to the calculation of the thermodynamic properties of LJ<sub>38</sub> in the canonical ensemble. In the paper that follows we again use parallel tempering to study LJ<sub>38</sub>, but using molecular dynamics methods along with microcanonical Monte Carlo simulations. Our goals are to understand better this complex system and to determine the best simulation method for systems of comparable complexity. The contents of the remainder of this first paper are as follows. In Section II we discuss the methods used with particular emphasis on the parallel tempering approach and its relation to the J-walking method. In Section III we present the results including the heat capacity as a function of temperature and identify the phase change behaviors of LJ<sub>38</sub>. In Section IV we present our conclusions and describe our experiences with alternatives to parallel tempering for insuring ergodicity.
## II Method
For canonical simulations we model a cluster with $`N`$ atoms by the standard Lennard-Jones potential augmented by a constraining potential $`U_c`$ used to define the cluster
$$U(𝐫)=4\epsilon \sum _{i<j}^{N}\left[\left(\frac{\sigma }{r_{ij}}\right)^{12}-\left(\frac{\sigma }{r_{ij}}\right)^6\right]+U_c,$$
(1)
where $`\sigma `$ and $`\epsilon `$ are respectively the standard Lennard-Jones length and energy parameters, and $`r_{ij}`$ is the distance between particles $`i`$ and $`j`$. The constraining potential is necessary because clusters at defined temperatures have finite vapor pressures, and the evaporation events can make the association of any atom with the cluster ambiguous. For classical Monte Carlo simulations, a perfectly reflecting constraining potential is most convenient
$$U_c=\sum _{i=1}^{N}u(𝐫_i),$$
(2)
with
$$u(𝐫)=\{\begin{array}{cc}\mathrm{\infty }\hfill & |𝐫-𝐫_{cm}|>R_c\hfill \\ 0\hfill & |𝐫-𝐫_{cm}|<R_c\hfill \end{array}$$
(3)
where $`𝐫_{cm}`$ is the center of mass of the cluster, and we call $`R_c`$ the constraining radius.
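In reduced units ($`\epsilon =\sigma =1`$), Eqs. (1)-(3) can be sketched as follows, treating the reflecting wall as an infinite energy so that it enters naturally into a Metropolis acceptance test; the vectorized layout is our choice.

```python
import numpy as np

def total_energy(r, R_c=2.25):
    """U of Eq. (1) in reduced units; r has shape (N, dim).
    Returns +inf if any atom lies outside the constraining sphere."""
    r = np.asarray(r, float)
    if np.any(np.linalg.norm(r - r.mean(axis=0), axis=1) > R_c):
        return np.inf                        # Eqs. (2)-(3): reflecting wall
    i, j = np.triu_indices(len(r), k=1)
    d = np.linalg.norm(r[i] - r[j], axis=1)  # all pair distances
    sr6 = d ** -6                            # (sigma/r)^6 with sigma = 1
    return 4.0 * np.sum(sr6 * (sr6 - 1.0))
```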
Thermodynamic properties of the system are calculated with Monte Carlo methods using the parallel tempering technique. To understand the application of the parallel tempering method and to understand the comparison of parallel tempering with other related methods, it is useful to review the basic principles of Monte Carlo simulations.
In the canonical ensemble the goal is the calculation of canonical expectation values. For example, the average potential energy is expressed
$$\langle U\rangle =\frac{\int d^{3N}r\,U(𝐫)\,e^{-\beta U(𝐫)}}{\int d^{3N}r\,e^{-\beta U(𝐫)}},$$
(4)
where $`\beta =1/k_BT`$ with $`T`$ the temperature and $`k_B`$ the Boltzmann constant. In Monte Carlo simulations such canonical averages are determined by executing a random walk in configuration space so that the walker visits points in space with a probability proportional to the canonical density $`\rho (𝐫)=Z^{-1}\mathrm{exp}[-\beta U(𝐫)]`$, where $`Z`$ is the configurational integral that normalizes the density. After generating $`M`$ such configurations in a random walk, the expectation value of the potential energy is approximated by
$$\langle U\rangle _M=\frac{1}{M}\sum _{i=1}^{M}U(𝐫_i).$$
(5)
The approximate expectation value $`\langle U\rangle _M`$ becomes exact in the limit that $`M\to \mathrm{\infty }`$.
A sufficiency condition for the random walk to visit configuration space with a probability proportional to the density $`\rho (𝐫)`$ is the detailed balance condition
$$\rho (𝐫_o)K(𝐫_o\to 𝐫_n)=\rho (𝐫_n)K(𝐫_n\to 𝐫_o),$$
(6)
where $`𝐫_o`$ and $`𝐫_n`$ represent two configurations of the system and $`K(𝐫_o\to 𝐫_n)`$ is the conditional probability that if the system is at configuration $`𝐫_o`$ it makes a transition to $`𝐫_n`$. In many Monte Carlo approaches, the conditional probability is not known and is replaced by the expression
$$K(𝐫_o\to 𝐫_n)=T(𝐫_o\to 𝐫_n)\,\mathrm{acc}(𝐫_o\to 𝐫_n),$$
(7)
where $`T(𝐫_o\to 𝐫_n)`$ is called the trial probability and $`\mathrm{acc}(𝐫_o\to 𝐫_n)`$ is an acceptance probability constructed to ensure $`K(𝐫_o\to 𝐫_n)`$ satisfies the detailed balance condition. The trial probability can be any normalized density function chosen for convenience. A common choice for the acceptance probability is given by
$$\mathrm{acc}(𝐫_o\to 𝐫_n)=\mathrm{min}\left[1,\frac{\rho (𝐫_n)T(𝐫_n\to 𝐫_o)}{\rho (𝐫_o)T(𝐫_o\to 𝐫_n)}\right].$$
(8)
The Metropolis method, obtained from Eq.(8) by choosing $`T(𝐫_o\to 𝐫_n)`$ to be a uniform distribution of points of width $`\mathrm{\Delta }`$ centered about $`𝐫_o`$, is arguably the most widely used Monte Carlo method and the basis for all the approaches discussed in the current work. The Metropolis method rigorously guarantees a random walk visits configuration space proportional to a given density function asymptotically in the limit of an infinite number of steps. In practice when configuration space is divided into important regions separated by significant energy barriers, a low temperature finite Metropolis walk can have prohibitively long equilibration times.
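A sketch of one Metropolis sweep with the uniform trial distribution just described (single-atom moves of half-width `delta` are our convention; `energy` can be any potential function, such as the constrained Lennard-Jones energy above):

```python
import numpy as np

def metropolis_sweep(r, beta, energy, delta, rng):
    """One pass of single-atom trial moves accepted according to Eq. (8)."""
    U = energy(r)
    for i in range(len(r)):
        trial = r.copy()
        trial[i] += rng.uniform(-delta, delta, size=r.shape[1])
        U_trial = energy(trial)
        dU = U_trial - U
        if dU <= 0.0 or rng.random() < np.exp(-beta * dU):
            r, U = trial, U_trial        # accept the move
    return r, U
```

Because moves into the constraining wall return an infinite energy, they are rejected automatically by the acceptance test.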
Such problems in attaining ergodicity in the walk do not occur at temperatures sufficiently high that the system has significant probability of finding itself in the barrier regions. In both the J-walking and parallel tempering methods, information obtained from an ergodic Metropolis walk at high temperatures is passed to a low temperature walker periodically to enable the low temperature walker to overcome the barriers between separated regions. In the J-walking method the trial probability at inverse temperature $`\beta `$ is taken to be a high temperature Boltzmann distribution
$$T(𝐫_o\to 𝐫_n)=Z^{-1}e^{-\beta _JU(𝐫_n)}$$
(9)
where $`\beta _J`$ represents the jumping temperature that is sufficiently high that a Metropolis walk can be assumed to be ergodic. Introduction of Eq.(9) into Eq.(8) results in the acceptance probability
$$\mathrm{acc}(𝐫_o\to 𝐫_n)=\mathrm{min}\{1,\mathrm{exp}[-(\beta -\beta _J)(U(𝐫_n)-U(𝐫_o))]\}.$$
(10)
In practice at inverse temperature $`\beta `$ the trial moves are taken from the Metropolis distribution about 90% of the time with jumps attempted using Eq.(9) about 10% of the time. The jumping configurations are generated with a Metropolis walk at inverse temperature $`\beta _J`$, and jump attempts are accepted using Eq.(10). The acceptance expression \[Eq.(10)\] is correct provided the configurations chosen for jumping are a random representation of the distribution $`e^{-\beta _JU(𝐫)}`$. The Metropolis walk that is used to generate the configurations at inverse temperature $`\beta _J`$ is correlated, and Eq.(10) is inappropriate unless jumps are attempted sufficiently infrequently to break the correlations. In practice Metropolis walks are still correlated after 10 steps, and it is not possible to use Eq.(10) correctly if jumps are attempted 10% of the time. In J-walking the difficulty with correlations is overcome in two ways. In the first method, often called serial J-walking, a large set of configurations is stored to an external distribution with the configurations generated with a Metropolis walk at inverse temperature $`\beta _J`$, and configurations stored only after sufficient steps to break the correlations in the Metropolis walk. Additionally, the configurations are chosen from the external distribution at random. This external distribution is made sufficiently large that the probability of ever choosing the same configuration more than once is small. In this method detailed balance is strictly satisfied only in the limit that the external distribution is of infinite size. In the second method, often called parallel J-walking, the walks at each temperature are made in tandem on a parallel machine. Many processors, randomly initialized, are assigned to the jumping temperature, and each processor at the jumping temperature is used to donate a high temperature configuration to the low temperature walk sufficiently infrequently that the correlations in the Metropolis walk at inverse temperature $`\beta _J`$ are broken. In this parallel method, configurations are never reused, but the acceptance criterion \[Eq.(10)\] is strictly valid only in the limit of an infinite set of processors at inverse temperature $`\beta _J`$. In practice both serial and parallel J-walking work well for many applications with finite external distributions or with a finite set of processors.
In parallel tempering configurations from a high temperature walk are also used to make a low temperature walk ergodic. In contrast to J-walking rather than the high temperature walk feeding configurations to the low temperature walk, the high and low temperature walkers exchange configurations. By exchanging configurations detailed balance is satisfied, once the Metropolis walks at the two temperatures are sufficiently long to be in the asymptotic region. To verify detailed balance is satisfied by the parallel tempering procedure we let
$$\rho _2(𝐫,𝐫^{})=Z^{-1}e^{-\beta U(𝐫)}e^{-\beta _JU(𝐫^{})}$$
(11)
be the joint density that the low temperature walker is at configuration $`𝐫`$ and the high temperature walker is at configuration $`𝐫^{}`$. When configurations between the two walkers are exchanged, the detailed balance condition is
$$\rho _2(𝐫,𝐫^{})K(𝐫\to 𝐫^{},𝐫^{}\to 𝐫)=\rho _2(𝐫^{},𝐫)K(𝐫^{}\to 𝐫,𝐫\to 𝐫^{})$$
(12)
By solving for the ratio of the conditional transition probabilities
$$\frac{K(𝐫\to 𝐫^{},𝐫^{}\to 𝐫)}{K(𝐫^{}\to 𝐫,𝐫\to 𝐫^{})}=\mathrm{exp}[-(\beta -\beta _J)(U(𝐫^{})-U(𝐫))],$$
(13)
it is evident that if exchanges are accepted with the same probability as the acceptance criterion used in J-walking \[see Eq.(10)\], detailed balance is satisfied.
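The exchange step itself is short; a sketch assuming the walkers are kept in parallel arrays of configurations, energies and inverse temperatures (our bookkeeping convention):

```python
import numpy as np

def attempt_exchange(configs, energies, betas, i, j, rng):
    """Swap walkers i and j, accepting with the probability of Eq. (13),
    min{1, exp[(beta_i - beta_j)(U_i - U_j)]}."""
    delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
    if delta >= 0.0 or rng.random() < np.exp(delta):
        configs[i], configs[j] = configs[j], configs[i]
        energies[i], energies[j] = energies[j], energies[i]
        return True
    return False
```

The acceptance test is symmetric in $`i`$ and $`j`$, so either walker may be passed first.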
Although the basic notions used by both J-walking and parallel tempering are similar, the organization of a parallel tempering calculation can be significantly simpler than the organization of a J-walking calculation. In parallel tempering no external distributions are required nor are multiple processors required at any temperature. Parallel tempering can be organized in the same simple way that serial tandem J-walking is organized as discussed in the original J-walking reference. Unlike serial tandem J-walking where detailed balance can be attained only asymptotically, parallel tempering satisfies detailed balance directly. For a problem as difficult as LJ<sub>38</sub> where very long simulations are required, the huge external distributions needed in serial J-walking, or the large set of jumping processors needed in parallel J-walking, make the method prohibitive. As discussed in Section III, parallel tempering can be executed for arbitrarily long simulations making the method suitable at least for LJ<sub>38</sub>.
In the current calculation parallel tempering is used not just to simulate the system at some low temperature using high temperature information, but simulations are performed for a series of temperatures. As is the case for J-walking and as discussed elsewhere for parallel tempering, the gaps between adjacent temperatures cannot be chosen arbitrarily. Temperature gaps must be chosen so that exchanges are accepted with sufficient frequency. If the temperature gap is too large, the configurations important at the two exchanging temperatures can be sufficiently dissimilar that no exchanges are ever accepted. Preliminary calculations must be performed to explore the temperature differences needed for acceptable exchange probabilities. In practice we have found at least 10% of attempted exchanges need to be accepted for the parallel tempering procedure to be useful. In general the temperature gaps must be decreased near phase change regions or when the temperature becomes low.
By exchanging configurations between temperatures, correlations are introduced at different temperature points. For example, the average heat capacities at two temperatures may rise or fall together as each value fluctuates statistically. In some cases the values of the heat capacities or other properties at two temperatures can be anti-correlated. The magnitude of these correlations between temperatures are measured and discussed in Section III. As discussed in Section III the correlations between differing temperatures imply that the statistical fluctuations must be sufficiently low to ensure any features observed in a calculation as a function of temperature are meaningful.
## III Results
Forty distinct temperatures have been used in the parallel tempering simulations of LJ<sub>38</sub> ranging from $`T=0.0143\epsilon /k_B`$ to $`T=0.337\epsilon /k_B`$. The simulations have been initiated from random configurations of the 38 atoms within a constraining sphere of radius 2.25 $`\sigma `$. We have chosen $`R_c=2.25\sigma `$, because we have had difficulties attaining ergodicity with larger constraining radii. With large constraining radii, the system has a significant boiling region at temperatures not far from the melting region, and it is difficult to execute an ergodic walk with any method when there is coexistence between liquid-like and vapor regions. Constraining radii smaller than $`2.25\sigma `$ can induce significant changes in thermodynamic properties below the temperature of the melting peak. Using the randomly initialized configurations the initialization time to reach the asymptotic region in the Monte Carlo walk has been found to be long with about 95 million Metropolis Monte Carlo points followed by 190 million parallel tempering Monte Carlo points included in the walk prior to data accumulation. This long initiation period can be made significantly shorter by initializing each temperature with the structure of the global minimum. We have chosen to initialize the system with random configurations to verify the parallel tempering method is able to equilibrate this system with no prior knowledge about the structure of the potential surface. Following this initiation period, $`1.3\times 10^{10}`$ points have been included with data accumulation. Parallel tempering exchanges have been attempted every 10 Monte Carlo passes over the 38 atoms in the cluster.
In an attempt to minimize the correlations in the data at differing temperatures, an exchange strategy has been used that includes exchanges between several temperatures. To understand this strategy, we let the set of temperatures be put into an array. One-half of the exchanges have been attempted between adjacent temperatures in the array, one-fourth have been attempted between next near neighboring temperatures, one-eighth between every third temperature, one-sixteenth between every fourth temperature and one thirty-second between every fifth temperature in the array. We have truncated this procedure at fifth near neighboring temperatures, because exchanges between temperatures differing by more than fifth neighbors are accepted with frequencies of less than ten per cent. The data presented in this work have been generated using the procedure outlined above. In retrospect, we have found exchanges are only required between adjacent temperatures. We have also performed the calculations where exchanges are included only between adjacent temperatures, and we have seen no significant differences either in the final results or in the correlations between different temperatures. Using the random initializations of the clusters, after the initialization period the lowest temperature walks are dominated by configurations well represented by small amplitude oscillations about the global minimum structure.
For all data displayed in this work, the error bars represent two standard deviations of the mean. The heat capacity, calculated from the standard fluctuation expression of the energy
$$C_V=k_B\beta ^2[\langle E^2\rangle -\langle E\rangle ^2],$$
(14)
is displayed in the upper panel of Fig. 1. In agreement with the heat capacity for LJ<sub>38</sub> reported by Doye et al., the heat capacity displayed in Fig. 1 has a melting maximum centered at about $`T=0.166\epsilon /k_B`$. In contrast to the results of Doye et al. we find no maximum associated with the solid-solid transition between the two basins in the potential surface. Rather, we see a small change in slope at about $`T=0.1\epsilon /k_B`$. To characterize this region having a change in slope, in the lower panel of Fig. 1 we present a graph of $`(\partial C_V/\partial T)_V`$ calculated from the fluctuation expression
$$\left(\frac{\partial C_V}{\partial T}\right)_V=-2\frac{C_V}{T}+\frac{1}{k_B^2T^4}[\langle E^3\rangle +2\langle E\rangle ^3-3\langle E^2\rangle \langle E\rangle ]$$
(15)
The small low temperature maximum in $`(\partial C_V/\partial T)_V`$ occurs within the slope change region.
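Both quantities reduce to low-order moments of the sampled energies; a sketch (setting $`k_B=1`$ by default is our convention):

```python
import numpy as np

def cv_and_slope(E, beta, kB=1.0):
    """C_V and (dC_V/dT)_V from energy samples via Eqs. (14)-(15)."""
    E = np.asarray(E, float)
    m1, m2, m3 = E.mean(), (E ** 2).mean(), (E ** 3).mean()
    T = 1.0 / (kB * beta)
    cv = kB * beta ** 2 * (m2 - m1 ** 2)
    slope = (-2.0 * cv / T
             + (m3 + 2.0 * m1 ** 3 - 3.0 * m2 * m1) / (kB ** 2 * T ** 4))
    return cv, slope
```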
To interpret the configurations associated with the various regions of the heat capacity, we use an order parameter nearly identical to the order parameter introduced by Steinhardt, Nelson and Ronchetti to distinguish face centered cubic from icosahedral structures in liquids and glasses. The order parameter has been used by Doye et al. to monitor phase changes in LJ<sub>38</sub>. The order parameter $`Q_4`$ is defined by the equation
$$Q_4=\left(\frac{4\pi }{9}\sum _{m=-4}^{4}|\overline{Q}_{4,m}|^2\right)^{1/2},$$
(16)
where
$$\overline{Q}_{4,m}=\frac{1}{N_b}\sum _{r_{ij}<r_b}Y_{4,m}(\theta _{ij},\varphi _{ij}).$$
(17)
To understand the parameters used in Eq.(17), it is helpful to explain how $`\overline{Q}_{4,m}`$ is evaluated. The center of mass of the full 38 atom cluster is located and the atom closest to the center of mass is then identified. The atom closest to the center of mass plus the 12 nearest neighbors of that atom define a “core” cluster of the 38 atom cluster. The center of mass of the core cluster is then calculated. The summation in Eq.(17) is performed over all vectors that point from the center of mass of the core cluster to all $`N_b`$ bonds formed from the 13 atoms of the core cluster. A bond is assumed to be formed between two atoms of the core cluster if their internuclear separation $`r_{ij}`$ is less than a cut-off parameter $`r_b`$, taken to be $`r_b=1.39\sigma `$ in this work. In Eq.(17) $`\theta _{ij}`$ and $`\varphi _{ij}`$ are respectively the polar and azimuthal angles of the vector that points from the center of mass of the core cluster to the center of each bond, and $`Y_{4,m}(\theta ,\varphi )`$ is a spherical harmonic. To verify that the optimal value of $`Q_4`$ is obtained, the procedure is repeated by choosing the second closest atom to the center of mass of the whole cluster to define the core cluster. The value of $`Q_4`$ obtained from this second core cluster is compared with that obtained from the first core cluster, and the smallest resulting value of $`Q_4`$ is taken to be the value of $`Q_4`$ for the entire cluster.
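Given bond vectors constructed according to this procedure, $`Q_4`$ is a direct sum over spherical harmonics. A sketch using SciPy, whose `sph_harm(m, l, azimuthal, polar)` argument convention we assume; the core-cluster bookkeeping described above is taken as already done:

```python
import numpy as np
from scipy.special import sph_harm

def q4(bond_vectors):
    """Q_4 of Eqs. (16)-(17); bond_vectors has shape (N_b, 3) and holds the
    vectors from the core-cluster center of mass to the bond centers."""
    v = np.asarray(bond_vectors, float)
    rnorm = np.linalg.norm(v, axis=1)
    theta = np.arccos(np.clip(v[:, 2] / rnorm, -1.0, 1.0))  # polar angle
    phi = np.arctan2(v[:, 1], v[:, 0])                      # azimuthal angle
    total = 0.0
    for m in range(-4, 5):
        qbar = sph_harm(m, 4, phi, theta).mean()            # Eq. (17)
        total += abs(qbar) ** 2
    return np.sqrt(4.0 * np.pi / 9.0 * total)
```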
In the work of Steinhardt et al. fewer bonds are included in the summation appearing in Eq.(17) than in the current work. In the definition used by Steinhardt et al., the only bonds that contribute to the sum in Eq.(17) are those involving the central atom of the core cluster. In the definition used in this work, at low temperatures the sum includes all the bonds included by Steinhardt et al. in addition to vectors that connect the center of mass of the core cluster with the centers of bonds that connect atoms at the surface of the core cluster with each other. For a perfect and undistorted icosahedral or truncated octahedral cluster, the current definition and the definition of Steinhardt et al. are identical numerically owing to the rotational symmetry of the spherical harmonics. However, for distorted clusters the two definitions differ numerically. For perfect, undistorted icosahedral clusters $`Q_4=0`$ whereas for perfect, undistorted truncated octahedral clusters, $`Q_4\approx 0.19`$, and both definitions of the order parameter are able to distinguish configurations from the truncated octahedral basin and other basins at finite temperatures. However, we have found the definition introduced by Steinhardt et al. is unable to distinguish structures in the icosahedral basin from liquid-like structures. This same issue has been discussed previously by Lynden-Bell and Wales. In contrast, we have found that liquid-like structures have larger values of $`Q_4`$ than icosahedral structures when the present definition of $`Q_4`$ \[i.e. the definition that includes additional bonds in Eq.(17)\] is used. Consequently, as discussed shortly, the current definition of $`Q_4`$ enables an association of each configuration with either the icosahedral basin, the truncated octahedral basin, or structures that can be identified as liquid-like.
The average of $`Q_4`$ as a function of temperature is plotted in the upper panel of Fig. 2. Again the error bars represent two standard deviations of the mean. At the lowest calculated temperatures $`Q_4`$ is characteristic of the global truncated octahedral minimum. As the temperature is raised to the point where the slope change begins in the heat capacity, $`Q_4`$ begins to drop rapidly signifying the onset of transitions between the structures associated with the global minimum and icosahedral structures. We then have the first hint that the slope change in $`C_V`$ is associated with an analogue of a solid-solid transition from the truncated octahedron to icosahedral structures.
To clarify the transition further, the data plotted in the lower panel of Fig. 2 represent the probability of observing particular values of $`Q_4`$ as a function of temperature. The probabilities have been calculated by tabulating the frequency of observing particular values of $`Q_4`$ for each configuration generated in the simulation. Different values of $`Q_4`$ are then assigned to either icosahedral structures (labeled IC in the graph), truncated octahedral structures (labeled FCC) or liquid-like structures (labeled LIQ), as discussed below. By comparing the lower panel of Fig. 2 with the derivative of the heat capacity plotted in the lower panel of Fig. 1, it is evident that icosahedral structures begin to be occupied and the probability of finding truncated octahedral structures begins to fall when the derivative in the heat capacity begins to rise. Equilibrium between the truncated octahedral structures and the icosahedral structures continues into the melting region, and truncated octahedral structures only disappear on the high temperature side of the melting peak of the heat capacity. Doye et al. and Miller et al. have generated data analogous to that depicted in the lower panel of Fig. 2 using the superposition method, and the data of Miller et al. are in qualitative agreement with the present data. A more direct comparison with the data of these authors can be made by performing periodic quenching along the parallel tempering trajectories. We then use an energy criterion similar to that of Doye et al. to distinguish the three categories of geometries and to generate the respective probabilities $`P`$. For a given total cluster energy $`E`$, a truncated octahedron is associated with $`E<-173.26\epsilon `$, icosahedral-based structures with $`-173.26\epsilon \le E<-171.6\epsilon `$, and liquid-like structures with $`E\ge -171.6\epsilon `$. The quenches have been performed every $`10^4`$ MC steps for each temperature, and the results of these quenches are plotted in Fig. 3. Using the energy criterion, the behavior we observe is qualitatively similar to the data of Doye et al. However, the largest probability of observing icosahedral structures is found here to be substantially lower than that of Doye et al. The data accumulated more recently by Miller et al. using the superposition method include contributions from more stationary points than in the previous work of Doye et al., but no reweighting has been performed. As a result, the distributions of isomers look quite different, especially at high temperatures.
The assignment of a particular value of $`Q_4`$ to a structure as displayed in Fig. 2 is made by an analysis of the probability distribution $`P_Q(T,Q_4)`$ of the order parameter displayed in Figs. 4 and 5. Figure 4 is a representation of the three-dimensional surface of $`P_Q(T,Q_4)`$ as a function of temperature and order parameter. A projection of this surface onto two dimensions is given in Fig. 5. The probability density in Fig. 5 is represented by the shading so that the brighter the area the greater the probability. The horizontal white lines in Fig. 5 define the regions of the heat capacity curve. The lowest temperature horizontal line represents the temperature at which the slope of the heat capacity first changes rapidly, the middle temperature horizontal line represents the lowest temperature of the melting peak and the highest temperature horizontal line represents the end of the melting region. An additional representation of the data is given in Fig. 6, where the probability of observing particular values of $`Q_4`$ is given as a function of $`Q_4`$ at a fixed temperature of $`0.14\epsilon /k_B`$. In Fig. 6 three regions are evident for $`P_Q(T=0.14\epsilon /k_B,Q_4)`$ with $`Q_4`$ ranging from 0.13 to 0.19. Although the presence of three regions seems to indicate three distinct structures, all three regions correspond to the truncated octahedral global minimum. We have verified this assignment by quenching the structures with $`Q_4`$ ranging from 0.13 to 0.19 to their nearest local minima, and we have found all such structures quench to the truncated octahedron. To explain the three regions, we have found that there are small distortions of LJ<sub>38</sub> about the truncated octahedral structure where both the energy and $`Q_4`$ increase together. These regions where both the energy and $`Q_4`$ increase above $`Q_4\approx 0.13`$ have low probability and account for the oscillations observed in Figs. 4–6. In the lower panel of Fig. 2, all structures having $`Q_4>0.13`$ have been identified as truncated octahedra. Quench studies of the broad region visible in Fig. 5 at the lowest values of $`Q_4`$, or equivalently in the first low $`Q_4`$ peak in Fig. 6, find all examined structures to belong to the icosahedral basin. To determine if a given configuration is associated with the icosahedral basin, one-dimensional cross sectional plots are made from Fig. 4 at each temperature used in the calculation. Figure 6 is a particular example of such a cross sectional plot. The maximum present at low $`Q_4`$ represents the center for structures in the icosahedral basin. The next two maxima at higher $`Q_4`$ represent the midpoint of the liquid region. Consequently, in generating the lower panel of Fig. 2, all configurations with $`Q_4`$ between $`Q_4=0`$ and the first minimum in Fig. 6 have been identified as icosahedral structures. All other values of $`Q_4`$, represented by the broad intermediate band in Fig. 5 (or the region about the second two maxima in Fig. 6), have been identified as liquid-like structures. To make these identifications, separate cross sections of Fig. 4 must be made at each temperature. Of course, it is impossible to verify that the identification of all values of $`Q_4`$ with a particular structure as discussed above would agree with the result of quenching the structure to its nearest potential minimum. The differences found by defining icosahedral, truncated octahedral or liquid-like structures using either an energy criterion or $`Q_4`$ are clarified by comparing Fig. 3 and the lower panel of Fig. 2. Both definitions are arbitrary, and the two classification methods carry complementary information.
Figure 5 also provides additional evidence that the peak in $`(\partial C_V/\partial T)_V`$ is associated with the equilibrium between the truncated octahedral structures and the icosahedral structures. There is significant density for both kinds of structures in the region between the lowest two parallel lines that define the region with the slope change. Additionally, both icosahedral structures and truncated octahedral structures begin to be in equilibrium with each other at the beginning of the slope change region. This equilibrium continues to temperatures above the melting region.
Another identification of the slope change region with a transition between truncated octahedral and icosahedral forms can be made by defining $`P_R(T,R)dR`$ to be the probability that an atom in the cluster is found at location $`R`$ to $`R+dR`$ from the center of mass of the cluster at temperature $`T`$. A projection of $`P_R(T,R)`$ onto the $`R`$ and $`T`$ plane is depicted in Fig. 7. The solid vertical lines represent the location of atoms from the center of mass of the truncated octahedral structure (the lower set of vertical lines), and the lowest energy icosahedral structure (the upper set of vertical lines). As in Fig. 5, increased probability is represented by the lighter shading. At the lowest temperatures $`P_R(T,R)`$ is dominated by contributions from the truncated octahedron as is evident by comparing the shaded regions with the lowest set of vertical lines. As the temperature is increased, contributions to $`P_R(T,R)`$ begin to appear from the icosahedral structures. The shaded region at $`R=0.45`$ does not match any of the vertical lines shown, but corresponds to atoms in the third lowest energy isomer, which like the second lowest energy isomer, comes from the icosahedral basin. The equilibrium between the icosahedral and truncated octahedral structures observed in Fig. 7 matches the regions of temperature observed in Fig. 5.
We have mentioned previously that parallel tempering introduces correlations in the data accumulated at different temperatures, and it is important to ensure the statistical errors are sufficiently small that observed features are real and not artifacts of the correlations. To measure these correlations we define a cross temperature correlation function for some temperature dependent property $`g`$ by
$$\gamma (T_1,T_2)=\frac{\langle (g(T_1)-\langle g(T_1)\rangle )(g(T_2)-\langle g(T_2)\rangle )\rangle }{[\langle (g(T_1)-\langle g(T_1)\rangle )^2\rangle \langle (g(T_2)-\langle g(T_2)\rangle )^2\rangle ]^{1/2}}.$$
(18)
A projection of $`\gamma (T_1,T_2)`$ when $`g=C_V`$ is given in Fig. 8. In Fig. 8 white represents $`\gamma =1`$ and black represents $`\gamma =-1`$ with other shadings representing values of $`\gamma `$ between these two extremes. The white diagonal line from the lower left hand corner to the upper right hand corner represents the case that $`T_1=T_2`$ so that $`\gamma =1`$. The light shaded areas near this diagonal represent cases where $`T_1`$ and $`T_2`$ are adjacent temperatures in the parallel tempering simulations, and we find $`\gamma `$ to be only slightly less than unity. More striking are the black regions off the diagonal where $`\gamma `$ is nearly $`-1`$. These black regions correspond to anti-correlations between results at temperatures near the heat capacity maximum in the melting peak and temperatures near the center of the slope change region associated with the transition between icosahedral and truncated octahedral structures. These correlations imply the importance of performing sufficiently long simulations to ensure that statistical fluctuations of the data are small compared to important features in the data as a function of temperature.
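Estimated from block averages of a property such as $`C_V`$ (one row per block, one column per temperature; the blocking itself is our convention), Eq. (18) is simply a correlation matrix:

```python
import numpy as np

def cross_temperature_correlation(g_blocks):
    """gamma(T_i, T_j) of Eq. (18) from block averages of a property g;
    g_blocks has shape (n_blocks, n_temperatures)."""
    d = np.asarray(g_blocks, float)
    d = d - d.mean(axis=0)                 # deviations from the mean
    cov = d.T @ d / len(d)
    std = np.sqrt(np.diag(cov))
    return cov / np.outer(std, std)
```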
## IV Conclusions
Using parallel tempering methods we have successfully performed ergodic simulations of the equilibrium thermodynamic properties of LJ<sub>38</sub> in the canonical ensemble. As discussed by Doye et al. the potential surface of this system is complex with two significant basins; a narrow basin about the global minimum truncated octahedral structure, and a wide icosahedral basin. These two basins are separated both by structure and a large energy barrier making simulations difficult. In agreement with the results of Doye et al. we find clear evidence of equilibria between structures at the basin of the global minimum and the icosahedral basin at temperatures below the melting region. Unlike previous work we find no heat capacity maximum associated with this transition, but rather a region with a change in the slope of the heat capacity as a function of temperature.
We have found parallel tempering to be successful with this system, and have noted correlations in our data at different temperatures when the parallel tempering method is used. These correlations imply the need to perform long simulations so that the statistical errors are sufficiently small that the correlations do not introduce artificial conclusions.
We believe that the methods used in this work could be applied to a variety of other systems including clusters of complexity comparable to LJ<sub>38</sub>. For instance, the 75-atom Lennard-Jones cluster is known to share many features with the 38-atom cluster investigated here. LJ<sub>75</sub> is also characterized by a double funnel energy landscape, one funnel being associated with icosahedral structures, and the other funnel being associated with the decahedral global minimum. The landscape of LJ<sub>75</sub> has been recently investigated by Doye, Miller and Wales who have used $`Q_6`$ as the order parameter. In another paper, Wales and Doye have predicted that the temperature where the decahedral/icosahedral equilibrium takes place should be close to $`0.09\epsilon /k_B`$. This prediction is made by using the superposition method, but no caloric curves have yet been reported for LJ<sub>75</sub>. The parallel tempering Monte Carlo method can be expected to work well for LJ<sub>75</sub>, and such a parallel tempering study would be another good test case for theoretical methods discussed in this work.
A useful enhancement of parallel tempering Monte Carlo is the use of multiple histogram methods that enables the calculation of thermodynamic functions in both the canonical and microcanonical ensembles by the calculation of the microcanonical entropy. In practice the multiple histogram method requires the generation of histograms of the potential energy at a set of temperatures such that there is appreciable overlap of the potential energy distributions at adjacent temperatures. This overlap requirement is identical to the choice of temperatures needed in parallel tempering.
In performing simulations on LJ<sub>38</sub> we have tried other methods to reduce ergodicity errors, and we close this section by summarizing the difficulties we have encountered with these alternate methods. It is important to recognize that the parallel tempering simulations include in excess of $`10^{10}`$ Monte Carlo points, and most of our experience with these alternate methods has come from significantly shorter simulations. Our ability to include this large number of Monte Carlo points with parallel tempering is an important reason why we feel parallel tempering is so useful.
From experience with other smaller and simpler clusters, for a J-walking simulation to include $`10^{10}`$ points, an external distribution containing at least $`10^9`$ points is required to prevent oversampling of the distribution. Such a large distribution is prohibitive with current computer technology. Our J-walking simulations containing about $`10^7`$ Monte Carlo points have resulted in data that have not been internally reproducible, and data that are not in good agreement with the parallel tempering data. Many long J-walking simulations with configurations initiated at random only have icosahedral structures at the lowest calculated temperatures. To stabilize the J-walking method with respect to the inclusion of truncated octahedral structures at low temperatures, we have attempted to generate distributions using the modified potential energy function $`U_m(𝐫,\lambda )=U(𝐫)-\lambda Q_4`$. In this modified potential $`\lambda `$ is a parameter chosen to deepen the octahedral basin without significantly distorting the cluster. While this modified potential has led to more stable results than J-walking using the bare potential, the results with $`10^8`$ Monte Carlo points have not been reproducible in detail. The application of Tsallis distributions has not improved this situation.
We have also tried to apply the multicanonical J-walking approach recently introduced by Xu and Berne. While this multicanonical approach has been shown to improve the original J-walking strategy for other cluster systems, in the case of LJ<sub>38</sub> the iterations needed to produce the external multicanonical distribution have not produced truncated octahedral structures. The iterations have produced external distributions having either liquid-like structures or structures from the icosahedral basin. The multicanonical distribution is known to have deficiencies at low energies, and this low energy difficulty appears to be problematic for LJ<sub>38</sub>. We have attempted to solve these deficiencies by including prior information about the thermodynamics of the system. In this attempt we have chosen the multicanonical weight to be $`w_{\mathrm{mu}}(U)=\mathrm{exp}[-S_{\mathrm{PT}}(U)]`$ where $`S_{\mathrm{PT}}(U)`$ is the microcanonical entropy extracted from a multihistogram analysis of a parallel tempering Monte Carlo simulation. In several attempts using this approach we have observed neither the truncated octahedral structure nor structures from the icosahedral basin with significant probability. The multicanonical distribution so generated is dominated by liquid-like structures, and the distribution appears to be incapable of capturing the solid-to-solid transition that leads to the low temperature peak in $`(\partial C_V/\partial T)_V`$. Whether there are other approaches to generate a multicanonical distribution that are more successful in capturing low temperature behaviors is unknown to us.
Much insight about phase change behaviors can be obtained from simulations in the microcanonical ensemble or using molecular dynamics methods. For example, the van der Waals loops observed in LJ<sub>55</sub> complement the interpretation of the canonical caloric curves. In the next paper we present parallel tempering results for LJ<sub>38</sub> using both molecular dynamics and microcanonical Monte Carlo methods.
## Acknowledgments
Some of this work has been motivated by the attendance of two of us (DLF and FC) at a recent CECAM meeting on ‘Overcoming broken ergodicity in simulations of condensed matter systems.’ We would like to thank CECAM, J.E. Straub and B. Smit who organized the meeting, and those who attended the workshop for stimulating discussions, particularly on the connections between J-walking and parallel tempering. Two of us (DLF and JPN) would also like to thank Professor M.P. Nightingale for helpful discussions concerning the parallel tempering method. This work has been supported in part by the National Science Foundation under grant numbers CHE-9714970 and CDA-9724347. This research has been supported in part by the Phillips Laboratory, Air Force Material Command, USAF, through the use of the MHPCC under cooperative agreement number F29601-93-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Phillips Laboratory or the U.S. Government.
# Collective and independent-particle motion in two-electron artificial atoms
## Abstract
Investigations of the exactly solvable excitation spectra of two-electron quantum dots with a parabolic confinement, for different values of the parameter $`R_W`$ expressing the relative magnitudes of the interelectron repulsion and the zero-point kinetic energy, reveal for large $`R_W`$ a ro-vibrational spectrum associated with a linear trimeric rigid molecule composed of the two electrons and the infinitely heavy confining dot. This spectrum transforms to that of a “floppy” molecule for smaller values of $`R_W`$. The conditional probability distribution calculated for the exact two-electron wave functions allows for the identification of the ro-vibrational excitations as rotations and stretching/bending vibrations, and provides direct evidence pertaining to the formation of such molecules.
The behavior of three-body systems has been a continuing subject of interest and a source of discoveries in various branches of physics, both in the classical and quantum regimes, with the moon-earth-sun system and helium-like atoms (in the ground and excited states) being perhaps the best known examples. Furthermore, insights gained through such investigations often provide the foundations for understanding the properties of systems with a larger number of interacting particles.
Recently, analysis of the measured conductance and differential capacitance spectra of two-dimensional (2D) quantum dots (QD’s), created via voltage gates at semiconductor heterointerfaces, led to their naming (by analogy) as “artificial atoms”. In particular, this analogy refers to identification of regularities in the measurements which have been interpreted along the lines of the electronic shell model (SM) of natural atoms, which is founded on the physical picture of electrons moving in a spherical central field including the averaged contribution from electron-electron interactions.
Motivated by the central role that spectroscopy played in the development of our understanding of atomic structure, we investigate in this paper the exactly solvable excitation spectrum of a two-electron (2e) parabolic QD as a prototypical three-body problem composed of the two electrons ($`X`$’s) and the (infinitely heavy) confining quantum dot ($`Y`$). Through probing of the structure of the exact wave functions with the use of the conditional probability distribution (CPD) , in conjunction with identification of regularities of the excitation spectrum, we show that such a spectrum is characteristic of collective dynamics resulting from formation of a linear trimeric molecule $`XYX`$ . In particular, we find that the excitation spectrum of the 2e QD exhibits for a weak parabolic confinement (i.e., small harmonic frequency $`\omega _0`$) a well-developed, separable ro-vibrational pattern which is akin to the characteristic spectrum of natural “rigid” triatomic molecules (i.e., molecules with stretching and bending vibrational frequencies higher than the rotational one). For stronger confinements (i.e., large $`\omega _0`$), the spectrum transforms to one characteristic of a “floppy” triatomic molecule, converging finally to the independent-particle picture associated with the circular central mean field of the QD.
The Schrödinger equation for a 2e QD with a parabolic confinement of frequency $`\omega _0`$, with the 2D Hamiltonian given by $`H=\sum _{i=1,2}𝐩_i^2/2m^{*}+e^2/\kappa |𝐫_1-𝐫_2|+0.5m^{*}\omega _0^2\sum _{i=1,2}𝐫_i^2`$, where $`\kappa `$ and $`m^{*}`$ are, respectively, the dielectric constant and electron effective mass, is separable in the center-of-mass (CM) and relative-motion (rm) coordinates . Consequently, the energy eigenvalues may be written as $`E_{NM,nm}=E_{NM}^{\text{CM}}+\epsilon ^{\text{rm}}(n,|m|)`$, where $`E_{NM}^{\text{CM}}=\hbar \omega _0(2N+|M|+1)`$, with $`N`$ corresponding to the number of radial nodes in the CM wave function and $`M`$ being the CM azimuthal quantum number, and $`\epsilon ^{\text{rm}}(n,|m|)`$ are the eigenvalues of the one-dimensional Schrödinger equation ,
$`{\displaystyle \frac{\partial ^2\mathrm{\Omega }}{\partial u^2}}+\{{\displaystyle \frac{-m^2+1/4}{u^2}}-u^2-{\displaystyle \frac{R_W\sqrt{2}}{u}}+{\displaystyle \frac{\epsilon }{\hbar \omega _0/2}}\}\mathrm{\Omega }=0,`$
where $`\mathrm{\Omega }(u)/\sqrt{u}`$ is the radial part of the rm wave function $`\mathrm{\Omega }(u)e^{im\theta }/\sqrt{u}`$ with $`n`$ being the number of radial nodes; $`u=|𝐮_1-𝐮_2|`$ with $`𝐮_i=𝐫_i/l_0\sqrt{2}`$ $`(i=1,2)`$ being the electrons’ coordinates in dimensionless units and $`l_0=(\hbar /m^{*}\omega _0)^{1/2}`$, i.e., the spatial extent of the lowest-state wave function of a single electron. The so-called Wigner parameter $`R_W=(e^2/\kappa l_0)/\hbar \omega _0`$ multiplying the Coulomb repulsion term expresses the ratio of the Coulomb repulsion between two electrons separated by $`l_0`$ to twice the zero-point kinetic energy of an electron moving in a harmonic confinement.
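To make the eigenvalue problem concrete, the sketch below discretizes the radial equation above with second-order finite differences on a truncated interval and returns the lowest $`\epsilon ^{\text{rm}}(n,|m|)`$ in units of $`\hbar \omega _0/2`$; the grid parameters and function name are illustrative choices of ours, not values from the paper.

```python
import numpy as np

def rm_levels(R_W, m, k=4, u_max=25.0, N=1200):
    """Lowest k relative-motion eigenvalues eps(n, |m|), in units of
    hbar*omega0/2, from a second-order finite-difference discretization
    of the radial equation on the truncated interval (0, u_max]."""
    u = np.linspace(u_max / N, u_max, N)
    du = u[1] - u[0]
    # -Omega'' + V(u) Omega = eps Omega, with the effective potential
    V = (m**2 - 0.25) / u**2 + u**2 + np.sqrt(2.0) * R_W / u
    H = (np.diag(2.0 / du**2 + V)
         + np.diag(np.full(N - 1, -1.0 / du**2), 1)
         + np.diag(np.full(N - 1, -1.0 / du**2), -1))
    return np.linalg.eigvalsh(H)[:k]
```

For $`R_W=200`$ the effective potential has its minimum near $`u=(R_W/\sqrt{2})^{1/3}\approx 5.2`$, consistent with the interelectron separation $`2d_0`$ discussed below.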
Denoting the exact spatial wave function of the 2e QD by $`\mathrm{\Phi }_{NM,nm}(𝐮_1,𝐮_2)`$ (which is the product of the CM and rm wave functions), and the spatial two-electron density by $`W_{NM,nm}(𝐮_1,𝐮_2)=|\mathrm{\Phi }_{NM,nm}(𝐮_1,𝐮_2)|^2`$, we define the usual pair-correlation function (PCF) as
$`G(v)=2\pi {\displaystyle \int \delta (𝐮_1-𝐮_2-𝐯)W(𝐮_1,𝐮_2)𝑑𝐮_1𝑑𝐮_2},`$
and the conditional probability distribution (CPD) for finding one electron at $`𝐯`$ given that the other is at $`𝐯_0`$ as,
$`𝒫(𝐯|𝐮_2=𝐯_0)={\displaystyle \frac{W(𝐯,𝐮_2=𝐯_0)}{\int 𝑑𝐮_1W(𝐮_1,𝐮_2=𝐯_0)}},`$
where the $`M,N,n,m`$ indices of $`W`$ (and therefore of $`G`$ and $`𝒫`$) have been suppressed. Note that the exact electron densities are circularly symmetric.
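In practice both probes reduce to simple array operations once the two-electron density is tabulated. The fragment below is a schematic version for a hypothetical 4D array $`W`$ sampled on a uniform grid (one 2D grid per electron); the normalization is only up to the grid measure.

```python
import numpy as np

def conditional_probability(W, v0_index):
    """CPD P(v | u2 = v0): fix the second electron at the grid node
    v0_index = (k0, l0) and normalize over the first electron."""
    k0, l0 = v0_index
    slice_ = W[:, :, k0, l0]          # W(v, u2 = v0) on the (x, y) grid
    return slice_ / slice_.sum()      # denominator ~ integral over u1
```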
With the above, we solved for the 2e QD energy spectra and wave functions for values of $`R_W=200`$, 20 and 3. We discuss first the $`R_W=200`$ case whose spectrum and selected PCF’s and CPD’s are displayed in Fig. 1. As can be seen immediately, for such a large value of $`R_W`$, the spectrum of the 2e QD (bottom part of Fig. 1) exhibits the following three well-developed regularities: (I) for every band $`(N_0,M_0,n_0,m)`$, with $`m=0,1,2,\mathrm{\ldots }`$, while $`N_0,M_0`$ and $`n_0`$ are kept constant (in the following the subscript “zero” denotes a number that is held constant in a particular sequence), the energy spacing between two adjacent levels $`m`$ and $`m+1`$ increases linearly in proportion to $`2m+1`$; the bands $`(N_0,\pm M_0,n_0,\pm m)`$ are degenerate. Note that the levels are spin singlet or triplet for $`m`$ even or odd, respectively, (II) the bands $`(0,M_0,0,m)`$ and $`(N_0,0,0,m)`$ correspond to excitations of the center-of-mass motion with $`M_0`$ and $`2N_0`$ vibrational quanta (phonons) of energy $`\hbar \omega _0`$, respectively, and (III) the bottom levels of the bands $`(0,0,n_0,m)`$ form a one-dimensional harmonic-oscillator spectrum $`(n_0+1/2)\hbar \omega _s`$.
The above three “spectral rules” specify a well-developed and separable ro-vibrational spectrum exhibiting collective rotations, as well as stretching and bending vibrations . Indeed, neglecting an overall constant term, the above rules can be summarized as,
$`E_{NM,nm}=Cm^2+(n+1/2)\hbar \omega _s+(2N+|M|+1)\hbar \omega _b,`$
where the rotational constant $`C\approx 0.037`$, the phonon for the stretching vibration has an energy $`\hbar \omega _s\approx 3.50`$, and the phonon for the bending vibration coincides with that of the CM motion, i.e., $`\hbar \omega _b=\hbar \omega _0=2`$ (all energies are given in dimensionless units of $`\hbar \omega _0/2`$). Note that the rotational energy is proportional to $`m^2`$, as is appropriate for 2D rotations, unlike the case of natural triatomic molecules where the rotational energy has a term proportional to $`l(l+1)`$, $`l`$ being the quantum number associated with the 3D angular momentum. Observe also that the bending vibration can carry by itself an angular momentum $`\hbar M`$ and thus the rotational angular momentum $`\hbar m`$ does not necessarily coincide with the total angular momentum $`\hbar (M+m)`$.
Further insight into the collective character of the spectrum displayed in Fig. 1 can be gained by examining the CPD’s and PCF’s associated with selected states of the rotational bands $`(N_0,M_0,n_0,m)`$ (the CPD’s are displayed to the left of the PCF’s; notice that the PCF’s are always circularly symmetric). The band $`(0,0,0,m)`$, being purely rotational with zero phonon excitations, can be designated as the “yrast” band, in analogy with the customary terminology from the spectroscopy of rotating nuclei .
In Fig. 1(a), we display the CPD’s and PCF’s for three specific states of the yrast band, i.e., the (0,0,0,0), the (0,0,0,3), and the (0,0,0,6). The corresponding PCF’s are all alike and centered around $`2d_0\approx 5.2`$, which implies that the two electrons keep apart from each other at a distance $`2d_0`$. Due to the circular symmetry of the PCF’s, however, one can only conclude that the two electrons are moving on a thin circular shell of radius $`d_0`$. To reveal the formation of an electron molecule, one needs to consider further the corresponding CPD’s \[plotted in the left column with $`𝐯_\mathrm{𝟎}=(d_0,0)`$; the point $`𝐯_0`$ is denoted by a solid dot\]. In fact, the CPD’s demonstrate that the two electrons reside at all instances at diametrically opposite points, thus forming a linear molecule $`XYX`$ with two equal bonds ($`XY`$ and $`YX`$) of length $`d_0`$. In addition, one can see that all three CPD’s are practically identical, in spite of the fact that the angular momentum changes from $`m=0`$ (lower subplot) to $`m=6`$ (upper subplot). This behavior, namely the constancy of the bond lengths irrespective of the rotational energy, properly characterizes the electron molecule as a rigid rotor.
Turning our attention away from the yrast band, we focus next on the bands $`(0,0,1,m)`$ and $`(0,0,2,m)`$, which are rotational bands built upon one- and two-phonon excitations of the stretching vibrational mode. We have verified that the PCF’s and the CPD’s corresponding to these bands share with the yrast band the property that they do not change (at least for the levels displayed in Fig. 1) as a function of $`m`$. Thus it is sufficient to study the bottom states, i.e., those with $`m=0`$, (0,0,1,0) and (0,0,2,0), whose corresponding PCF’s and CPD’s are displayed in the lower and upper subplots of Fig. 1(b), respectively. The PCF’s demonstrate the presence of internal excitations with one and two nodes in the relative motion, but they yield no further information regarding the electron molecule. The CPD’s, however, plotted here for $`𝐯_0=(d_0,0)`$ \[the point $`𝐯_0`$ is kept the same for all subplots in Fig. 1\] immediately reveal the presence of excitations (specified by the number of their nodes, i.e., here one or two) associated with the vibrational mode of the $`XYX`$ molecule along the interelectron axis (namely, the stretching vibration).
By examining the corresponding CPD’s, one can further demonstrate that the two degenerate rotational bands $`(0,2,0,m)`$ and $`(1,0,0,m)`$ are built upon the lowest two-phonon excitations of the bending vibrational mode of the linear molecule $`XYX`$. Again, we have verified that it is sufficient to consider the two states at the bottom of the bands, namely the $`(1,0,0,0)`$ \[see lower subplot of Fig. 1(c)\] and the $`(0,2,0,0)`$ \[see upper subplot of Fig. 1(c)\]. It can be seen that both CPD’s describe vibrational excitations of the $`XYX`$ molecule which are perpendicular to the interelectron axis (namely, bending vibrations), with the one associated with the $`(1,0,0,0)`$ level having one node and the one associated with the $`(0,2,0,0)`$ having no nodes (this is in agreement with the fact that the normal mode associated with the bending vibrations is related to the 2D harmonic-oscillator describing the CM motion). We note that the corresponding PCF’s \[see right column in Fig. 1(c)\] fail to describe (in fact they are completely unrelated to) the bending vibrations; indeed they are identical to the ones associated with the yrast band \[Fig. 1(a)\] which is devoid of any vibrational excitations.
The CPD and PCF of the bottom level (i.e., with $`m=0`$) of the rotational band $`(1,0,1,m)`$, which is built upon more complicated phonon excitations of mixed bending and stretching character (not shown in Fig. 1), are displayed in Fig. 1(d). It is easily seen that the CPD represents a vibrational motion of the electron molecule both along the interelectron axis (one excited stretching-mode phonon) and perpendicularly to this axis (two excited bending-mode phonons). In fact, the CPD in Fig. 1(d) can be viewed as a composite made out of two CPD’s shown previously, one in the lower subplot of Fig. 1(b) and the other in the lower subplot of Fig. 1(c). Returning to Fig. 1(d), one can see again that, in contrast to the CPD which enables detailed probing of the excitation spectrum, the information which may be extracted from the corresponding PCF is rather limited.
The rigidity of the electron molecule, which is so well established for $`R_W=200`$, will naturally weaken as the parameter $`R_W`$ decreases and the XYX molecule will start exhibiting an increasing degree of “floppiness”. Such floppiness can be best observed in the yrast band, which, beginning with the higher levels, will gradually deviate from the spectral rule (I) discussed above, and eventually it will become unrecognizable as a rotational band. This is illustrated in the lower subplot of Fig. 2(a) which displays the yrast band for $`R_W=20`$. Specifically, one can see that only the lowest four levels honor approximately rule (I), the higher ones tending to develop a constant energy spacing between adjacent levels \[this spacing converges slowly to the energy spacing $`\hbar \omega _0`$ (i.e., to the value 2 in dimensionless units) of the parabolic confinement\]. In the case $`R_W=3`$, one can hardly identify any rotational sequence in the levels of the yrast band \[plotted at the bottom subplot of Fig. 2(b)\]. Indeed, although the energy spacing between the second and the third levels is larger than that between the first and the second levels (but with a ratio substantially different than 3/1), the spacing between higher levels approaches quickly the value 2 of the external confinement.
However, in spite of the floppiness exhibited by the excitation spectra in Fig. 2, the (singlet) ground-state of the 2e QD for both $`R_W=20`$ and $`R_W=3`$ drastically deviates from the 1$`s^2`$ closed-shell orbital configuration expected from the independent-particle picture. Rather, as demonstrated by the corresponding CPD’s \[top subplots in Fig. 2\], in both these cases of smaller $`R_W`$’s, the ground state is still associated with formation of rather well-developed XYX electron molecules, but with progressively smaller bond lengths. Finally, we remark that the stretching vibrations are more robust and tend to better preserve a constant spacing between the bottom levels of the bands $`(0,0,n_0,m)`$ \[these levels were grouped in a vibrational band $`(0,0,n,0)`$ and are plotted on the right-hand-side of the lower subplots in Fig. 2\].
The remarkable emergence of ro-vibrational excitations for parabolically confined 2e QD’s, under magnetic-field-free conditions, provides direct evidence for the formation of electron molecules in QD’s, with their rigidity controlled by the parameter $`R_W`$. Such electron molecules and associated collective excitation spectra are general properties of QD’s (with greater spectral complexity in many-electron QD’s), whose observations (and manipulations through controlled pinning of the collective rotations\[15(b)\]) form outstanding experimental challenges.
This research is supported by the US D.O.E. (Grant No. FG05-86ER-45234).
# Computation of dendritic microstructures using a level set method
## Abstract
We compute time-dependent solutions of the sharp-interface model of dendritic solidification in two dimensions by using a level set method. The steady-state results are in agreement with solvability theory. Solutions obtained from the level set algorithm are compared with dendritic growth simulations performed using a phase-field model and the two methods are found to give equivalent results. Furthermore, we perform simulations with unequal diffusivities in the solid and liquid phases and find reasonable agreement with the available theory.
Various numerical approaches have been developed to solve the difficult moving boundary problem that governs the growth of dendrites . Unfortunately, the direct solution of the time-dependent Stefan problem is troublesome and usually requires front tracking and lattice deformation in order to contain the moving solid-liquid interface, which is often very complicated topologically. In general, the methods developed to tackle the free-boundary problem have difficulty in handling topology changes, such as the merging and breaking of surfaces, and are usually not easily extendible to higher dimensions.
In order to avoid the difficulties associated with tracking a sharp interface, the phase-field model of solidification has been developed and is currently the most popular technique for simulating dendritic growth. The phase-field model avoids the computational difficulties associated with front tracking by introducing an auxiliary order parameter, or phase-field, $`\psi (𝐫,t)`$ that couples to the evolution of the thermal field. The dynamics of $`\psi (𝐫,t)`$ are designed to follow the evolving solidification front , which is defined by the zero level set $`\psi (𝐫,t)=0`$. Because the interface is never explicitly tracked, complicated topology changes are handled easily. Furthermore, the extension of the phase-field model to higher dimensions is straightforward.
Although phase-field models have been very useful in studying solidification patterns, there are still some limitations in this approach. The proper use of these models requires that an asymptotic analysis be performed in order to obtain a mapping between the parameters of the phase-field equations and the sharp-interface equations . The asymptotics involve expanding the phase-field equations in some small parameter proportional to the interface width, $`W`$, and as a result, the phase-field model only reproduces the dynamics of the sharp-interface equations in the limit where the expansion parameter is sufficiently small. Computationally, the grid spacing must be small enough to resolve the interfacial region, which is on the order of $`W`$. This restriction is generally not a problem for the symmetric model of solidification (where the diffusivities in the solid and liquid phases are assumed to be the same) because it is possible to have $`W`$ on the order of the capillary length . However, phase-field asymptotics for unequal diffusivities can be problematic ; correction terms that are inconsistent with the sharp-interface equations are generated and non-monotonic behavior is required in the interfacial region, which requires extra grid resolution and hence slower computational performance. The generalization of the phase-field approach to handle discontinuous material properties requires a better understanding of the mapping between the phase-field model and the sharp-interface formulation in order to avoid problems with properly resolving the interface.
The level set method is a computational approach that has the capability of avoiding the above mentioned limitations of front tracking methods and phase-field models. This method, first introduced by Osher and Sethian , is conceptually similar to a phase-field model in that the solid-liquid interface is represented as the zero contour of a level set function, $`\varphi (𝐫,t)`$, which has its own equation of motion. The movement of the interface is taken care of implicitly through an advection equation for $`\varphi (𝐫,t)`$. Thus, topology changes and the extension of the method to higher dimensions can be handled in a straightforward manner. Unlike the phase-field model, there is no arbitrary interface width introduced in the level set method; the sharp-interface equations can be solved directly and, as a result, no asymptotics are required. Discontinuous material properties can also be dealt with in a simple manner.
The level set method has been applied to several problems involving moving boundaries , including solidification. Prior work on dendritic growth includes an application of the method to a boundary integral formulation as well as the direct solution of the sharp-interface equations . While these simulations have reproduced the qualitative features of dendrites, as well as some quantitatively accurate solutions to exactly soluble problems, some of the simulations of anisotropic dendritic growth were not necessarily converged . Furthermore, the results were not compared with theoretical predictions of dendritic growth.
In this paper, we demonstrate that the level set method can be used to solve the free-boundary problem for solidification to calculate quantitatively accurate solutions for dendritic growth. We present results from simulations in two dimensions and show that the solutions converge to the steady-state predicted by microscopic solvability theory. Time-dependent results are also compared with calculations using a phase-field model and good agreement is found for all times. Furthermore, we perform simulations with unequal diffusivities (a case which is not yet possible with phase-field models) and find that the prediction of Barbieri and Langer provides a fair quantitative fit to our results.
The solidification of a pure substance is described by a free-boundary problem for the temperature in the solid and liquid phases, and the position of the interface between them:
$`\partial _tu`$ $`=`$ $`D\nabla ^2u`$ (1)
$`V_n`$ $`=`$ $`(D\partial _nu)_{_{Solid}}-(D\partial _nu)_{_{Liquid}}`$ (2)
$`u_i`$ $`=`$ $`-d(\theta )\kappa -\beta (\theta )V_n`$ (3)
The temperature $`T`$ has been rescaled as a dimensionless thermal field $`u=(T-T_m)/(L/C_p)`$, where $`T_m`$, $`L`$, and $`C_p`$ represent the melting temperature, the latent heat of fusion, and the specific heat at constant pressure, respectively. The thermal diffusivity, $`D`$, can be different in the solid and liquid phases. Eq. 2 describes energy conservation at the solid-liquid interface, where $`V_n`$ is the local outward normal interface velocity and $`\partial _n`$ refers to the outward normal derivative at the interface. Finally, Eq. 3 is known as the Gibbs-Thomson condition and describes the deviation of the interface temperature, $`u_i`$, from equilibrium due to the local curvature, $`\kappa `$, and interface kinetics. $`d(\theta )=\gamma (\theta )T_mC_p/L^2`$ is the anisotropic capillary length, proportional to the surface tension $`\gamma (\theta )`$, and $`\beta (\theta )`$ is the anisotropic kinetic coefficient. Here we assume that $`\beta (\theta )=0`$ and that the capillary length has the form $`d(\theta )=d_o(1-15ϵ\mathrm{cos}4\theta )`$, where $`ϵ`$ is the anisotropy strength and $`\theta `$ is the angle between the local normal vector at the interface, $`\vec{n}`$, and the $`x`$-axis.
We solve the above free-boundary problem by using a level set algorithm, which involves the following steps: i) advancing the interface, ii) reinitializing the level set function to be a signed distance function, and iii) solving for the new thermal field. The general level set method is described below. We wish to note that in our simulations we implement a localized level set method, described in detail in Ref. , in which calculations of $`\varphi `$ are performed only in a narrow region around the interface. We have not yet made an attempt to make our algorithm more computationally efficient by using adaptive mesh refinement.
i) Advancing the interface. The level set function is defined as the signed normal distance from the solid-liquid interface such that $`\varphi `$ is positive in the liquid phase, negative in the solid phase, and zero at the interface. $`\varphi `$ satisfies the pure advection equation
$$\frac{\partial \varphi }{\partial t}+F|\nabla \varphi |=0$$
(4)
Integrating Eq. 4 for one timestep results in moving the contours of $`\varphi `$ along the directions normal to the interface according to the velocity field $`F`$, which varies in space. $`F`$ is constructed to be an extension of the interface velocity, $`V_n`$, such that $`F=V_n`$ for points on the interface and the lines of constant $`F`$ are normal to the interface. Thus, advecting $`\varphi `$ according to Eq. 4 moves the front with the correct velocity.
Rather than using a partial differential equation to generate $`F`$ (as in Refs. ), we construct $`F`$ in the following manner: $`\varphi `$ represents the normal distance from the solidification front, so the value of $`\varphi `$ at each gridpoint on the computational lattice can be used to locate a particular point on the interface. If $`\vec{x}_g`$ is the location of the gridpoint, the associated point on the interface is at $`\vec{x}_i=\vec{x}_g-\varphi \vec{n}`$, where the normal vector $`\vec{n}=\nabla \varphi /|\nabla \varphi |`$. The temperature at $`\vec{x}_i`$ is then calculated by using Eq. 3; $`\theta `$ is easily found from $`\vec{n}`$, and the curvature, $`\kappa =\nabla \cdot \vec{n}`$, is interpolated at $`\vec{x}_i`$ from values of $`\kappa `$ at neighboring gridpoints. $`\vec{n}`$ and $`\kappa `$ are calculated using standard, centered finite difference approximations to the partial derivatives of $`\varphi `$. Next, values of $`u`$ are interpolated in both the liquid and solid phases, a distance $`\mathrm{\Delta }x`$ (the size of the grid spacing) away from $`\vec{x}_i`$ along the normal direction. These two interpolated temperatures are used along with $`u_i`$ to approximate the difference in the normal derivative of $`u`$ at $`\vec{x}_i`$ and thus find $`V_n`$ (Eq. 2). Because $`\vec{x}_i`$ and $`\vec{x}_g`$ lie on the same line normal to the interface, the value of $`F`$ at $`\vec{x}_g`$ is simply $`V_n`$. The field $`F`$ can be determined at all gridpoints in this way.
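The construction just described maps naturally onto array operations. The sketch below is a deliberately simplified version (bilinear interpolation in place of higher-order stencils, isotropic $`d_o`$, $`\beta =0`$, equal unit diffusivities, and our own variable names), intended only to make the steps concrete.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extension_velocity(phi, u, d0, dx):
    """Extension velocity F on a uniform 2D grid, following the recipe
    in the text: locate the foot point of each gridpoint, evaluate the
    Gibbs-Thomson temperature there, and difference u one cell into
    each phase along the normal."""
    gy, gx = np.gradient(phi, dx)              # d/drow, d/dcol
    norm = np.sqrt(gx**2 + gy**2) + 1e-12
    nx, ny = gx / norm, gy / norm              # n = grad(phi)/|grad(phi)|
    kappa = (np.gradient(nx, dx, axis=1) +
             np.gradient(ny, dx, axis=0))      # kappa = div(n)

    jj, ii = np.meshgrid(np.arange(phi.shape[1]), np.arange(phi.shape[0]))
    fx = jj - (phi / dx) * nx                  # foot point x_i = x_g - phi*n,
    fy = ii - (phi / dx) * ny                  # in index units

    def interp(f, x, y):                       # bilinear interpolation
        return map_coordinates(f, [y, x], order=1, mode='nearest')

    u_i = -d0 * interp(kappa, fx, fy)          # Gibbs-Thomson, beta = 0
    u_liq = interp(u, fx + nx, fy + ny)        # one grid spacing into liquid
    u_sol = interp(u, fx - nx, fy - ny)        # one grid spacing into solid
    # V_n = (du/dn)_solid - (du/dn)_liquid, with D = 1 in both phases
    return ((u_i - u_sol) - (u_liq - u_i)) / dx
```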
After $`F`$ is known, the interface can be advanced one timestep. For stability, we discretize Eq. 4 using a 5th order WENO (weighted essentially non-oscillatory) scheme in space and a 3rd order Runge-Kutta scheme in time . However, the overall accuracy of our algorithm is second order in space and first order in time.
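As a minimal stand-in for this update, a first-order Godunov upwind step with periodic boundaries can be written as follows; the 5th-order WENO / 3rd-order Runge-Kutta scheme used in the paper refines the same structure.

```python
import numpy as np

def advect(phi, F, dx, dt):
    """One explicit step of phi_t + F |grad(phi)| = 0 (Eq. 4) with
    first-order Godunov upwinding; np.roll implies periodic borders."""
    dmx = (phi - np.roll(phi, 1, axis=1)) / dx    # backward differences
    dpx = (np.roll(phi, -1, axis=1) - phi) / dx   # forward differences
    dmy = (phi - np.roll(phi, 1, axis=0)) / dx
    dpy = (np.roll(phi, -1, axis=0) - phi) / dx
    # Godunov's approximation of |grad(phi)|, split by the sign of F
    gp = np.sqrt(np.maximum(dmx, 0)**2 + np.minimum(dpx, 0)**2 +
                 np.maximum(dmy, 0)**2 + np.minimum(dpy, 0)**2)
    gm = np.sqrt(np.minimum(dmx, 0)**2 + np.maximum(dpx, 0)**2 +
                 np.minimum(dmy, 0)**2 + np.maximum(dpy, 0)**2)
    return phi - dt * (np.maximum(F, 0) * gp + np.minimum(F, 0) * gm)
```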
ii) Reinitialization. After solving Eq. 4 for one timestep, the level set function will no longer be equal to the distance away from the interface. It is necessary to reinitialize $`\varphi `$ to be a signed distance function. This step is accomplished by solving
$$\frac{\partial \varphi }{\partial t}+S(\varphi )[|\nabla \varphi |-1]=0$$
(5)
to steady state. $`S(\varphi )`$ takes on the value $`+1`$ in the liquid phase, $`-1`$ in the solid phase, and is zero at the interface. We typically iterate Eq. 5 three times in order to obtain an accurate distance function. Like Eq. 4, this equation is discretized using a 5th order WENO scheme in space and a 3rd order Runge-Kutta scheme in time .
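Reinitialization can reuse the upwind machinery from the advection sketch above, since Eq. 5 is an advection equation with speed $`S(\varphi )`$ plus the source term $`S(\varphi )`$. The smoothed sign function below is a common regularization and is our choice, not necessarily the one used in the paper.

```python
import numpy as np

def reinitialize(phi, dx, n_iter=3):
    """A few explicit iterations of phi_t + S(phi)(|grad(phi)| - 1) = 0,
    reusing advect() from the sketch above with speed F = S(phi)."""
    dt = 0.5 * dx
    for _ in range(n_iter):
        S = phi / np.sqrt(phi**2 + dx**2)   # smoothed sign of phi
        phi = advect(phi, S, dx, dt) + dt * S
    return phi
```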
iii) Solving for the new thermal field. The thermal field is updated by solving Eq. 1 using a modified Crank-Nicolson scheme. Different diffusivities in the two phases can be taken into account by simply noting the sign of the level set function and using the appropriate diffusion coefficient in the finite difference stencil. Special care has to be taken for gridpoints near the interface. If $`|\varphi |\le \mathrm{\Delta }x`$, the level set function is used to determine whether the front intersects the stencil and, if so, interpolate where the interface crosses the stencil. The stencil is then modified to take into account the location of the interface and the Gibbs-Thomson condition.
We compute four-fold symmetric dendrites in an $`L\times L`$ square box using the procedure described above. Solidification is initiated by a small quarter disk of radius $`R_o`$ in the lower left-hand corner of the box. The initial level set function is $`\varphi (x,y)=\sqrt{x^2+y^2}-R_o`$, where $`x`$ and $`y`$ are the usual Cartesian coordinates. The initial temperature is $`u=0`$ in the solid and decays exponentially away from the interface to $`u=-\mathrm{\Delta }`$ as $`|\vec{x}|\to \mathrm{\infty }`$, where the far-field undercooling is $`\mathrm{\Delta }=(T_m-T_{\mathrm{\infty }})/(L/C_p)`$ and $`T_{\mathrm{\infty }}`$ is the temperature far ahead of the solidification front in the liquid.
Eqs. 1-3 have been studied extensively to determine the steady state features of dendritic growth . According to microscopic solvability theory, these equations admit a family of discrete solutions. Only the fastest growing of this set of solutions is stable. This solution is the dynamically selected “operating state” for the dendrite and corresponds to a unique tip shape and velocity. Recent calculations of dendritic growth using phase-field models have been found to be in good agreement with the predictions of microscopic solvability theory . We observe similar agreement with the use of the level set algorithm and obtain results that are within a few percent of theoretical predictions. Figure 1 shows the tip velocity of the dendrite versus time for computations at undercoolings of $`\mathrm{\Delta }=0.65`$ and $`0.55`$. For all of these simulations $`D=1`$, $`d_o=0.5`$, $`\beta =0`$, $`R_o=15`$, and $`ϵ=0.05`$. For the $`\mathrm{\Delta }=0.65`$ simulation, $`L=200`$, $`\mathrm{\Delta }x=0.2`$, and the timestep is chosen to be $`\mathrm{\Delta }t=0.002`$. For the $`\mathrm{\Delta }=0.55`$ simulation, $`L=800`$, $`\mathrm{\Delta }x=0.4`$, and $`\mathrm{\Delta }t=0.008`$. To ensure grid convergence, $`\mathrm{\Delta }x`$ and $`\mathrm{\Delta }t`$ were refined until the steady-state tip velocity did not vary by more than $`2\%`$.
We also compare our level set results with simulations of dendritic growth performed using a phase-field model. Although calculations using phase-field models have been compared with a steady-state theory, there have been no comparisons made between time-dependent phase-field calculations and time-dependent solutions of the sharp-interface equations for multi-dimensional dendritic growth. The phase-field model calculations presented here were performed using a special adaptive mesh algorithm, as described in Ref. . The tip velocity data from the phase-field model and level set method at $`\mathrm{\Delta }=0.55`$ are in excellent agreement with each other (within $`3\%`$), as shown in Figure 1. Similar agreement is found in the dendritic shapes for these simulations, presented at time=9400 in Figure 2. These comparative results, combined with the recent demonstration of the equivalence of various phase-field models , provide an excellent foundation for the validity of the phase-field approach in simulating solidification microstructures.
The results presented here so far have used $`D_S=D_L`$, where $`D_S`$ and $`D_L`$ are the diffusivities in the solid and liquid phases, respectively. With our level set algorithm, we can also investigate the more general case where the diffusivities are unequal. We performed additional simulations at $`\mathrm{\Delta }=0.65`$ with $`D_S=0.75,0.5,0.25`$ and $`0`$ while keeping $`D_L=1`$. The only available benchmark for the case of non-symmetric diffusion is the linearized solvability theory of Barbieri and Langer , which predicts
$$\rho ^2V\approx \frac{1+D_S/D_L}{2}(\rho ^2V)_{_{D_S/D_L=1}}$$
(6)
where $`\rho `$ and $`V`$ are the steady-state tip radius and velocity, respectively. The values of $`\rho ^2V`$ obtained from the level set simulations are compared with Eq. 6 in Figure 3. The fit is surprisingly good (an error of about $`13\%`$ at $`D_S/D_L=0`$), considering Eq. 6 was obtained from a linearized theory in the limit of small undercoolings.
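For reference, Eq. 6 reduces to a one-line scaling (function name ours):

```python
def rho2V_prediction(rho2V_symmetric, DS_over_DL):
    """Eq. 6: linearized-solvability estimate of rho^2 * V for unequal
    diffusivities, scaled from the symmetric (D_S = D_L) value."""
    return 0.5 * (1.0 + DS_over_DL) * rho2V_symmetric

# predicted reduction factors for the runs above:
# D_S/D_L = 0.75, 0.5, 0.25, 0  ->  0.875, 0.75, 0.625, 0.5
```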
In conclusion, the level set method should be considered as a viable alternative to the use of phase-field models. We have used a level set algorithm that can produce accurate calculations of dendritic growth which can be compared favorably with solvability theory as well as time-dependent phase-field model simulations. The level set method can also handle discontinuous material properties easily, which is currently very difficult with the phase-field approach. However, we should note that our implementation is not at all efficient. The practical application of this method to more realistic systems will require some sort of adaptive technique. In the future, we would like to use more computationally efficient implementations of this algorithm and apply these methods to problems in directional solidification, where the ability to simulate unequal diffusivities is of great interest.
We thank Susan Chen and Stanley Osher for useful discussions, Nikolas Provatas for helpful remarks and for providing the adaptive phase-field code used in this work, and Wouter-Jan Rappel for providing the solvability code used to test our simulations. This work has been supported by the NASA Microgravity Research Program, under Grant NAG8-1249. We also acknowledge the support of the National Center for Supercomputing Applications (NCSA) for the use of its computer resources.
# The global indices of log Calabi-Yau varieties –A supplement to Fujino’s paper: The indices of log canonical singularities–
## 1. Introduction
In this paper, we study a log pair $`(X,B_X)`$ with a normal projective variety $`X`$ defined over $`\mathbb{C}`$ and a boundary $`B_X`$ of standard coefficients (i.e., $`B_X=\sum b_iB_i`$, where $`b_i=1`$ or $`1-1/m`$ for a positive integer $`m`$). A pair $`(X,B_X)`$ is called a log Calabi-Yau variety if it has lc singularities and $`K_X+B_X\equiv 0`$. For a log Calabi-Yau variety $`(X,B_X)`$ assume that there exists $`r`$ such that $`r(K_X+B_X)\sim 0`$. (For $`\mathrm{dim}X\le 3`$ this holds true for every log Calabi-Yau variety, by the abundance theorem \[6, 11.1.3\].) We define the global index $`\mathrm{Ind}(X,B_X)`$ by the minimum of such $`r`$.
It is well known that a non-singular surface $`X`$ with $`K_X\equiv 0`$ has $`\mathrm{Ind}(X,0)=1,2,3,4,6`$. Blache proved that a normal surface $`X`$ with $`K_X\equiv 0`$ and having lc non-klt singularities also has $`\mathrm{Ind}(X,0)=1,2,3,4,6`$. This is generalized to the case where a log Calabi-Yau surface $`(X,B_X)`$ has lc and non-klt singularities in \[12, 2.3\].
In this paper we prove the following:
###### Theorem 1.1.
Let $`(X,B_X)`$ be a log Calabi-Yau 3-fold with lc non-klt singularities. Then $`r`$ can be the global index $`\mathrm{Ind}(X,B_X)`$ if and only if $`\phi (r)\le 20`$ and $`r\ne 60`$, where $`\phi `$ is the Euler function. In particular the global index is bounded.
This theorem is a corollary of the following:
###### Theorem 1.2.
Assume the Abundance Theorem and $`G`$-equivariant log Minimal Model Program for dimension $`n`$, where $`G`$ is a finite group. Let $`(X,B_X)`$ be an $`n`$-dimensional log Calabi-Yau variety with non-klt singularities. If the conjectures $`(F_j^{\prime })`$ and $`(F_l)`$ hold true for $`j=n-1`$, $`l\le n-2`$, then the global index $`\mathrm{Ind}(X,B_X)`$ is bounded.
The author would like to express her gratitude to Prof. Yuri Prokhorov for calling her attention to this problem and offering useful comments and stimulating discussions. She also expresses her gratitude to Dr. Osamu Fujino and Prof. Pierre Milman for giving useful information about their papers. Dr. Osamu Fujino also pointed out a mistake in the preliminary version of this paper, for which she is grateful to him.
## 2. The global indices
###### 2.1.
Throughout this paper, we use the notation and the terminologies in . We assume the Abundance Theorem and the $`G`$-equivariant log Minimal Model Program (as is well known, these hold for dimension $`\le 3`$ by \[6, 11.1.3\], and \[7, 2.21\]).
###### 2.2.
Let $`(X,B_X)`$ be an $`n`$-dimensional log Calabi-Yau variety. Since we assume the Abundance Theorem, there exists $`r`$ such that $`r(K_X+B_X)\sim 0`$. Let $`\pi :(Y,B)\to (X,B_X)`$ be the index 1 cover with
$$K_Y+B=\pi ^{*}(K_X+B_X).$$
Here the index 1 cover is constructed as follows: let $`r=\mathrm{Ind}(X,B_X)`$, then there exists a rational function $`\phi `$ on $`X`$ such that $`r(K_X+B_X)=\mathrm{div}(\phi )`$; take the integral closure $`Y`$ of $`X`$ in $`K(X)(\sqrt[r]{\phi })`$. Note that $`K_Y+B\sim 0`$, that $`B=\pi ^{*}(B_X)`$ is a reduced divisor and that $`\pi `$ ramifies only over the components of $`B_X`$ whose coefficients are $`<1`$, as the coefficients of $`B_X`$ are standard. Since $`K_X+B_X`$ is lc (resp. klt) if and only if $`K_Y+B`$ is lc (resp. klt), $`(Y,B)`$ is log Calabi-Yau of global index 1. Therefore we obtain that every log Calabi-Yau variety $`(X,B_X)`$ is the quotient of a log Calabi-Yau variety of global index 1 by the action of a finite cyclic group.
###### 2.3.
Let $`G`$ be the cyclic group acting on a log Calabi-Yau variety $`(Y,B)`$ of global index 1. Since $`G`$ acts on $`\mathrm{\Gamma }(Y,K_Y+B)=\mathbb{C}`$, there is a corresponding representation $`\rho :G\to GL(\mathrm{\Gamma }(Y,K_Y+B))=\mathbb{C}^{*}`$.
###### Lemma 2.4.
Under the notation above, let $`(X,B_X)`$ be the quotient $`(Y,B)/G`$ by $`G`$. Then
$$\mathrm{Ind}(X,B_X)=|\mathrm{Im}\rho |.$$
###### Proof.
For a generator $`\theta \in \mathrm{\Gamma }(Y,K_Y+B)`$, $`\theta ^{|\mathrm{Im}\rho |}`$ is $`G`$-invariant, therefore $`\mathrm{\Gamma }(X,|\mathrm{Im}\rho |(K_X+B_X))\ne 0`$, which yields $`\mathrm{Ind}(X,B_X)\le |\mathrm{Im}\rho |`$. Conversely, for a generator $`\eta \in \mathrm{\Gamma }(X,\mathrm{Ind}(X,B_X)(K_X+B_X))`$, $`\pi ^{*}\eta \in \mathrm{\Gamma }(Y,\mathrm{Ind}(X,B_X)(K_Y+B))`$ is $`G`$-invariant. If we write $`\pi ^{*}\eta =a\theta ^{\mathrm{Ind}(X,B_X)}`$ $`(a\in \mathbb{C})`$, for a generator $`g\in G`$, $`(a\theta ^{\mathrm{Ind}(X,B_X)})^g=aϵ^{\mathrm{Ind}(X,B_X)}\theta ^{\mathrm{Ind}(X,B_X)}=a\theta ^{\mathrm{Ind}(X,B_X)}`$, where $`ϵ`$ is a primitive $`|\mathrm{Im}\rho |`$-th root of unity. Hence, $`\mathrm{Ind}(X,B_X)\ge |\mathrm{Im}\rho |`$. ∎
###### 2.5.
Now we are going to study lc and non-klt log Calabi-Yau varieties. Let $`(Y,B)`$ be an $`n`$-dimensional log Calabi-Yau variety of global index 1 with lc and non-klt singularities. Assume that a cyclic group $`G`$ acts on $`(Y,B)`$. Then we have a projective $`G`$-equivariant log resolution $`\phi :\stackrel{~}{Y}\to Y`$ of $`(Y,B)`$. Indeed, let $`\phi ^{\prime }:\stackrel{~}{Y}^{\prime }\to Y`$ be the canonical resolution of $`(Y,B)`$ constructed in , then $`\phi ^{\prime }`$ is projective and $`\phi ^{\prime -1}(B)\cup (\text{the exceptional set})`$ is a normal crossing divisor. By the blow up at a suitable $`G`$-invariant center, we obtain the divisor with simple normal crossings. Define the subboundary $`F`$ on $`\stackrel{~}{Y}`$ by $`K_{\stackrel{~}{Y}}+F=\phi ^{*}(K_Y+B)`$. Run $`G`$-equivariant log MMP for $`K_{\stackrel{~}{Y}}+F^B`$ over $`Y`$ (the notation $`F^B`$ is from \[3, 1.5\], and $`F^B=F^c`$ in our case). Then we obtain a $`G`$-factorial dlt pair $`f:(Y^{\prime },B^{\prime })\to (Y,B)`$ over $`(Y,B)`$. Since $`K_{Y^{\prime }}+B^{\prime }`$ is $`f`$-nef and $`(Y,B)`$ is lc, we obtain that $`K_{Y^{\prime }}+B^{\prime }=f^{*}(K_Y+B)\sim 0`$. By \[3, 2.4\], $`B^{\prime }`$ has at most two connected components.
###### Definition 2.6 (for the local version, see \[3, 4.12\]).
Let $`(Y,B)`$ and $`(\stackrel{~}{Y},F)`$ be as in 2.5. We define
$$\mu =\mu (Y,B):=\mathrm{min}\{\mathrm{dim}W\mid W\in CLC(\stackrel{~}{Y},F)\}.$$
Note that in case $`B^{\prime }`$ is connected, then $`0\le \mu \le n-1`$ and in case $`B^{\prime }`$ has two connected components, then $`\mu =n-1`$.
Case 1 ($`B^{\prime }`$ is connected)
There exists a $`G`$-isomorphism $`\mathrm{\Gamma }(Y,K_Y+B)\simeq \mathrm{\Gamma }(Y^{\prime },K_{Y^{\prime }}+B^{\prime })`$ and an exact sequence:
$$0=\mathrm{\Gamma }(Y^{\prime },K_{Y^{\prime }})\to \mathrm{\Gamma }(Y^{\prime },K_{Y^{\prime }}+B^{\prime })\to \mathrm{\Gamma }(B^{\prime },(K_{Y^{\prime }}+B^{\prime })|_{B^{\prime }})=\mathbb{C},$$
where the last term is isomorphic to $`\mathrm{\Gamma }(B^{\prime },K_{B^{\prime }})`$, as $`K_{Y^{\prime }}+B^{\prime }`$ is a Cartier divisor. Therefore, we have only to check the action of $`G`$ on $`\mathrm{\Gamma }(B^{\prime },K_{B^{\prime }})`$.
###### Proposition 2.7 (for the local case, see \[3, 4.11\]).
If there exists a non-zero admissible section in $`\mathrm{\Gamma }(B^{\prime },m_0K_{B^{\prime }})`$, then $`G`$ acts on $`\mathrm{\Gamma }(B^{\prime },m_0K_{B^{\prime }})`$ trivially.
###### Proof.
The proof is the same as that of \[3, 4.11\]. We have only to note that $`B^{\prime }=E=E^c`$ in our case. ∎
###### Proposition 2.8 (for the local case, see \[3, 4.14\]).
Assume that $`\mu (Y,B)\le n-2`$. Then there exists a non-zero admissible section $`s\in \mathrm{\Gamma }(B^{\prime },m_0K_{B^{\prime }})`$ with $`m_0|D_\mu `$. In particular, $`s`$ is $`G`$-invariant. Thus, $`\mathrm{Ind}((Y,B)/G)\in I_\mu `$.
###### Proof.
The proof is the same as that of \[3, 4.14\]. Again $`B^{\prime }=E=E^c`$. ∎
###### Proposition 2.9.
Assume that $`B^{\prime }`$ is connected and $`\mu (Y,B)=n-1`$. Then $`\mathrm{Ind}((Y,B)/G)\in I_{n-1}^{\prime }`$.
###### Proof.
In this case, $`B^{\prime }`$ is irreducible, therefore $`(Y^{\prime },B^{\prime })`$ is plt. Then, by Adjunction \[6, 17.6\], $`B^{\prime }`$ is klt and $`K_{B^{\prime }}\sim 0`$. Now apply 2.4. ∎
Case 2 ($`B^{\prime }`$ has two connected components).
Note that $`B^{\prime }`$ is the disjoint union of two irreducible components, therefore $`(Y^{\prime },B^{\prime })`$ is plt (see \[3, 2.4\]). Run $`G`$-equivariant log MMP for $`K+B^{\prime }-ϵB^{\prime }`$, then we obtain a $`G`$-equivariant contraction $`p:Y^{\prime \prime }\to Z`$ of an extremal face for $`K+B^{\prime \prime }-ϵB^{\prime \prime }`$ to a lower dimensional variety $`Z`$, where $`B^{\prime \prime }=B_1^{\prime \prime }\cup B_2^{\prime \prime }`$ is the divisor on $`Y^{\prime \prime }`$ corresponding to $`B^{\prime }`$. Here $`\mathrm{dim}Z=n-1`$, because $`h^{n-1}(Y^{\prime },𝒪_{Y^{\prime }})=h^1(Y^{\prime },K_{Y^{\prime }})\ne 0`$. We also obtain that the $`B_i^{\prime \prime }`$’s are generic sections of $`p`$. Since $`(Y^{\prime \prime },B^{\prime \prime })`$ is plt and $`K_{Y^{\prime \prime }}+B^{\prime \prime }\sim 0`$, each $`B_i^{\prime \prime }`$ has canonical singularities and $`K_{B_i^{\prime \prime }}\sim 0`$ by \[6, 17.6\]. Then the birational image $`Z`$ has $`K_Z\sim 0`$, and therefore it has canonical singularities. Since the group $`G=\langle g\rangle `$ acts on $`B^{\prime \prime }`$, the subgroup $`H:=\langle g^2\rangle `$ acts on each $`B_i^{\prime \prime }`$ $`(i=1,2)`$. Consider the exact sequence:
$$0=\mathrm{\Gamma }(Y^{\prime \prime },K_{Y^{\prime \prime }}+B_2^{\prime \prime })\to \mathrm{\Gamma }(Y^{\prime \prime },K_{Y^{\prime \prime }}+B^{\prime \prime })\stackrel{\alpha }{\to }\mathrm{\Gamma }(B_1^{\prime \prime },K_{B_1^{\prime \prime }}),$$
where $`\alpha `$ is an $`H`$-equivariant isomorphism. On the other hand, the homomorphism $`\mathrm{\Gamma }(B_1^{\prime \prime },K_{B_1^{\prime \prime }})\to \mathrm{\Gamma }(Z,K_Z)`$ induced from $`p|_{B_1^{\prime \prime }}`$ is also an $`H`$-equivariant isomorphism. Hence, for two representations $`\rho :G\to GL(\mathrm{\Gamma }(Z,K_Z))`$ and $`\rho ^{\prime }:G\to GL(\mathrm{\Gamma }(Y^{\prime \prime },K_{Y^{\prime \prime }}+B^{\prime \prime }))`$, we obtain the equality $`|\rho (H)|=|\rho ^{\prime }(H)|`$. Note that, for any representation $`\lambda :G\to \mathbb{C}^{*}`$, $`\lambda (H)=\lambda (G)`$ if and only if $`|\lambda (G)|`$ is an odd number. If we denote $`|\rho (G)|`$ by $`r`$, then $`r\in I_{n-1}^{\prime }`$, and either: (1) $`|\rho ^{\prime }(G)|=r`$ or (2) $`|\rho ^{\prime }(G)|=2r`$ and $`r`$ is odd or (3) $`|\rho ^{\prime }(G)|=r/2`$ and $`r/2`$ is odd. By defining $`I_k^{\prime \prime }:=I_k^{\prime }\cup \{2r\mid r\in I_k^{\prime }\text{ odd}\}\cup \{r/2\mid r\in I_k^{\prime },r/2\text{ odd}\}`$, we obtain:
###### Proposition 2.10.
Assume $`B^{\prime }`$ has two connected components. Then $`\mathrm{Ind}((Y,B)/G)\in I_{n-1}^{\prime \prime }`$.
By 2.8, 2.9 and 2.10, we obtain Theorem 1.2. In particular, for the 3-dimensional case $`G`$-equivariant log MMP, the Abundance Theorem and $`(F_j^{\prime })`$, $`(F_l)`$ $`(j=2,l\le 1)`$ hold. Here note that $`I_0=\{1,2\}`$, $`I_1=\{1,2,3,4,6\}`$ and $`I_2^{\prime }=\{r\mid \phi (r)\le 20,r\ne 60\}`$ by and . By the list of the values of $`I_2^{\prime }`$ in \[9, Table 1\], we can check that $`I_2^{\prime \prime }=I_2^{\prime }`$. Therefore we obtain the necessary condition of the global index $`\mathrm{Ind}(X,B_X)`$ in Theorem 1.1.
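These purely arithmetic statements are easy to verify by direct enumeration; the short script below (ours, for checking only) lists $`I_2^{\prime }`$ and confirms that the parity closure defining $`I_2^{\prime \prime }`$ adds nothing new. The search bound uses the fact that $`\phi (r)\le 20`$ already fails for every $`r>66`$.

```python
def euler_phi(r):
    """Euler's totient function via trial-division factorization."""
    result, n, p = r, r, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

I2p = {r for r in range(1, 200) if euler_phi(r) <= 20 and r != 60}

# parity closure used in the definition of I_2'' above
I2pp = I2p \
    | {2 * r for r in I2p if r % 2 == 1} \
    | {r // 2 for r in I2p if r % 2 == 0 and (r // 2) % 2 == 1}
assert I2pp == I2p        # hence I_2'' = I_2'; the largest element is 66
```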
The following shows that it is the sufficient condition of the global index:
###### Example 2.11.
Let $`r`$ be a positive integer that satisfies $`\phi (r)\le 20`$ and $`r\ne 60`$. Then by and , there exists a $`K3`$-surface $`S`$ with an action $`G`$ of order $`r`$ and $`r=|\mathrm{Im}\rho |`$. Let $`Y=S\times \mathbb{P}^1`$ and $`B=S\times \{0\}+S\times \{\mathrm{\infty }\}`$. Let $`G`$ act on $`Y`$ by trivial action on $`\mathbb{P}^1`$ and the action above on $`S`$. Let $`(X,B_X)`$ be the quotient of $`(Y,B)`$ by $`G`$ with $`K_Y+B=\pi ^{*}(K_X+B_X)`$. Then $`(X,B_X)`$ is a log Calabi-Yau 3-fold with global index $`r`$.
###### Remark 2.12.
We can also prove Theorem 1.1 by using instead of . Indeed, we used only for propositions 2.7 and 2.8. For the 3-dimensional case, these propositions can be replaced by the discussion on the order of the action of $`G`$ on $`H^2(F^B,𝒪_{F^B})`$ for type $`(0,0)`$ and $`(0,1)`$. Theorems \[4, 4.5\] and \[4, 4.12\] give the same results as in 2.8.
###### Remark 2.13.
Osamu Fujino informed the author that the boundedness of the indices of log Calabi-Yau 3-folds also follows from \[3, 4.17\] and the proof of \[3, 4.14\]. By this proof we obtain the index in $`I_2`$ instead of $`I_2^{\prime }`$.
###### Remark 2.14.
If we assume $`(F_n^{\prime })`$, then it is clear that an $`n`$-dimensional klt log Calabi-Yau variety has the global index $`r\in I_n^{\prime }`$ by Lemma 2.4. Therefore a klt log Calabi-Yau surface has the global index $`r`$ such that $`\phi (r)\le 20`$ and $`r\ne 60`$.
For a klt log Calabi-Yau 3-fold with $`B_X=0`$, the global index satisfies the same condition as above \[9, Corollary 5\].
# 𝑅𝐽𝐾 Observations of the Optical Afterglow of GRB 991216
Based on the observations collected at the F. L. Whipple Observatory 1.2 m telescope and the University of Hawaii 2.2 m telescope.
## 1 INTRODUCTION
The BeppoSAX (Boella et al. 1997) and RXTE (Levine et al. 1996) satellites have brought a new dimension to gamma-ray burst (GRB) research, by providing rapid localizations of several bursts per year. This has allowed many GRBs to be followed up at other wavelengths, ranging from the X-ray (Costa et al. 1997) and optical (van Paradijs et al. 1997) to the radio (Frail et al. 1997). Precise positions have also allowed redshifts to be measured for a number of GRBs (e.g. GRB 970508: Metzger et al. 1997), providing definitive proof of their cosmological origin.
The extremely bright gamma-ray burst GRB 991216 was detected by BATSE (Kippen, Preece, & Giblin 1999) on December 16.671544 UT, with its peak flux (fluence) ranking it as the 2nd (13th) of all BATSE bursts detected so far. The RXTE PCA search for the X-ray afterglow of GRB 991216 started about four hours after the burst (Takeshima et al. 1999) and detected a strong, decaying X-ray afterglow, providing a much improved burst position. It should be noted that the X-ray afterglow of GRB 991216 was also detected by the much less sensitive RXTE ASM instrument as early as one hour after the burst (Corbet & Smith 1999), providing a measurement of the X-ray afterglow at times which have previously not been studied. In addition, observations of GRB 991216 by the Chandra Observatory resulted in the first arcsecond position determination for an X-ray afterglow (Piro et al. 1999).
The optical afterglow of GRB 991216 was identified by Uglesich et al. (1999) with data taken about $`12`$ hours (December 17.142 and 17.372 UT) after the burst, using the MDM 1.3-m telescope. It was recognized as a bright variable object ($`R\approx 18.8`$ at Dec. 17.142), not present in the digitized POSS II plate, declining with a temporal decay index of $`1.4`$. Numerous independent confirming observations of the fading optical transient (OT) have followed, starting with Henden et al. (1999) and Jha et al. (1999). Near-infrared observations were also reported by Vreeswijk et al. (1999a) and Garnavich et al. (1999b).
Absorption lines at $`z=1.02`$ seen in the optical spectrum of GRB 991216 taken with the VLT-UT1 8-m telescope by Vreeswijk et al. (1999b) provide a lower limit to the redshift of the GRB source. Given the gamma-ray fluence (Kippen 1999), the isotropic energy from the burst was more than 8$`\times 10^{53}`$ ergs ($`H_o=65`$ km s$`^1`$Mpc<sup>-1</sup>, $`\mathrm{\Omega }_m=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$), or nearly half a Solar rest mass radiated away in under 10 seconds. This exceedingly large energy requirement can be reduced if the burst emission is beamed. To date, evidence for jets has been found in only a handful of GRB afterglows (Sari, Piran & Halpern 1999; Kulkarni et al. 1999; Stanek et al. 1999) and it remains to be shown whether anisotropy is ubiquitous.
We present optical and near-IR photometry of GRB 991216 from observations obtained at the Hawaii 88-inch and the Fred L. Whipple 1.2m telescopes. We describe the data and the reduction procedure in Section 2. In Section 3 we discuss the multiband temporal behavior of the GRB OT. In Section 4 we describe the broad-band spectral properties of the afterglow deduced from our IR/optical data.
## 2 OBSERVATIONS
The near-infrared data were obtained with the Fred L. Whipple Observatory 1.2-meter telescope on four consecutive nights beginning 1999 Dec. 17.22 (UT). Images were taken with the “STELIRCAM” two-channel IR camera which utilizes two $`256^2`$ pixel HgCdTe arrays. A dichroic mirror splits the beam at $`\lambda 1.8\mu `$m allowing simultaneous observations in two filters. The GRB afterglow was observed in J and K filters manufactured by Barr. The camera has three sets of re-imaging optics and we employed the 5′ field-of-view with a 1.2$`\mathrm{"}`$ per pixel scale.
We immediately began imaging the RXTE localization error box after being notified of a bright GRB detected by BATSE through the GCN Circulars. A 3$`\times `$3 mapping (15′ field) around the initial RXTE position was performed with two 60 second exposures taken at each pointing. A $`J=17`$ mag object that did not appear on the digitized sky survey was tentatively identified as the afterglow candidate (Garnavich et al. 1999a), however it was pointed out by Diercks et al. (1999a) that the star appeared on the POSS-II N emulsion photographs and was likely to be a very red star. RXTE revised its error box 8′ northward from the original position and a new mapping was begun. Uglesich et al. (1999) then identified the true afterglow near the revised position soon after observations at FLWO were terminated. Fortunately the original mapping and the mapping centered on the revised RXTE error box included the object in several of the images. In subsequent nights GRB 991216 was observed in $`J`$ and $`K`$ with 9$`\times `$60 second exposure sets. An extensive number of Persson et al. (1998) standards were observed on Dec. 18 (UT) and used to calibrate stars in the GRB field (Table 1; Figure 1). Our $`J`$ and $`K`$ calibrations are in good agreement with those of Henden, Guetter, & Vrba (2000), but our GRB magnitudes are 20% to 30% fainter in $`J`$ and brighter in $`K`$ than the Vreeswijk et al. (1999a) infrared photometry.
After the optical counterpart was identified, a single exposure of the field was obtained with the University of Hawaii 88-inch telescope. By then, the target was well past the meridian and the data suffered from a high airmass. On Dec. 18 (UT) the field was observed at two epochs and Landolt standards (Landolt 1992) were imaged to calibrate the data. Since GRB 991216 was only observed in the $`R`$ filter, no color correction was possible and we estimate the uncertainty from the unknown color term as 5%. Our $`R`$-band calibrations of stars A and B are in good agreement with those found by Dolan et al. (1999). Our final $`R`$-band magnitudes are on average 0.08 mag fainter than the preliminary magnitudes given in Jha et al. (1999) and Garnavich et al. (1999b).
From our optical imaging we find a position for the transient of $`\alpha =05^h09^m31.29^s`$ $`\delta =+11°17′07.3″`$ (J2000) with an accuracy of $`\pm 0.2″`$ based on positions from the USNO A2.0 catalog (Monet et al. 1996).
## 3 THE TEMPORAL BEHAVIOR
Figure 2 shows the $`RJK`$ light curves of GRB 991216. Additional $`R`$-band points obtained from GCN Circulars (Uglesich et al. 1999; Dolan et al. 1999; Vreeswijk et al. 1999a; Diercks et al. 1999b; Jensen et al. 1999; Leibowitz et al. 1999; Mattox 1999) are also plotted, but the comparison stars used in their calibration are sometimes not known and these points are used here only to confirm the trends seen in our data. Late-time observations by Djorgovski et al. (1999) and Schaefer (2000) use the Dolan et al. (1999) or Jha et al. (1999) calibrations and are consistent with our points. The light curves appear to follow a single power-law between 0.5 days and four days after the burst. So as not to confuse the temporal and spectral variations, we will use the convention that $`F_\nu \propto t^{-\alpha }\nu ^{-\beta }`$.
A single power-law was fitted to our data points by allowing the magnitude shift between $`J`$ and $`K`$ and the shift between $`J`$ and $`R`$ be free parameters. The result is shown as the solid lines in Figure 2 and provides an index of $`\alpha =1.36\pm 0.04`$ ($`1\sigma `$). Fitting the individual bands gives indices of $`\alpha =1.44\pm 0.06`$ for $`K`$, $`1.31\pm 0.06`$ for $`J`$ and $`1.42\pm 0.16`$ for $`R`$. Combining our three $`R`$-band observations with six observations from the GCN Circulars obtained within four days of the burst gives a power-law index of $`\alpha =1.30\pm 0.05`$, somewhat steeper than the decay rate found by Sagar et al. (2000) from the raw GCN $`R`$-band magnitudes. Clearly, a power-law index of $`\alpha =1.36`$ is a good fit to all three bands given the estimated errors. Extrapolating the $`R`$ fit to the late-time observations by Schaefer (2000) and Djorgovski et al. (1999) shows that the single power-law is consistent with the data out to 20 days after the burst. The $`R`$-band point by Mattox (1999) appears significantly below the trend. Our $`J`$-band photometry, obtained near that time, shows no deviation from the fit, however, we can not rule out a change in slope beginning four days after the burst and then a recovery at late-times due to a possible underlying supernova or host galaxy.
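The fits quoted here are ordinary weighted linear regressions in $`\mathrm{log}t`$, since $`F_\nu \propto t^{-\alpha }`$ implies mag $`=m_0+2.5\alpha \mathrm{log}_{10}t`$. A minimal sketch (our own implementation, not the code used for the paper):

```python
import numpy as np

def decay_index(t, mag, mag_err):
    """Weighted least-squares fit of mag = m0 + 2.5*alpha*log10(t),
    with t in days since the burst; returns alpha and its 1-sigma
    uncertainty (assuming the quoted magnitude errors)."""
    x = np.log10(np.asarray(t))
    y = np.asarray(mag)
    w = 1.0 / np.asarray(mag_err)**2
    A = np.vstack([np.ones_like(x), 2.5 * x]).T
    cov = np.linalg.inv(A.T @ (A * w[:, None]))   # (A^T W A)^-1
    m0, alpha = cov @ A.T @ (w * y)
    return alpha, np.sqrt(cov[1, 1])
```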
## 4 REDDENING AND BROAD-BAND SPECTRAL ENERGY DISTRIBUTION
The GRB 991216 is located at Galactic coordinates of $`l=190\mathrm{°}.44,b=-16\mathrm{°}.63`$. To remove the effects of the Galactic interstellar extinction we used the reddening map of Schlegel, Finkbeiner & Davis (1998, hereafter: SFD). The expected Galactic reddening towards the burst is substantial, $`E(B-V)=0.626`$ mag. We use $`R_V=3.1`$ and the standard reddening curve of Cardelli, Clayton & Mathis (1989), as tabulated by SFD (their Table 6), to correct our optical and IR data. As discussed by Stanek et al. (1999), there is some indication that the SFD map overestimates the $`E(B-V)`$ values by a factor of 1.3-1.5 close to the Galactic plane ($`|b|<5\mathrm{°}`$) (Stanek 1998) and in high extinction ($`A_V>0.5`$ mag) regions (Arce & Goodman 1999). It is not clear at all that such a correction should be applied to the SFD $`E(B-V)`$ value for the GRB 991216, but it would reduce this value to about $`E(B-V)=0.46`$.
We synthesize the $`RJK`$ spectrum from our data by interpolating the magnitudes to a common time. As discussed in the previous section, the colors of the GRB 991216 counterpart do not show significant variation. We therefore select an epoch of Dec. 18.32 UT ($`40`$ hours after the burst) for the color analysis, which is near the time when simultaneous $`RJK`$ data were taken.
We convert the $`RJK`$ magnitudes to fluxes using the effective wavelengths and normalizations of Fukugita, Shimasaku & Ichikawa (1995) for the optical and Mégessier (1995) for the IR. These conversions are accurate to about 5%, which increases the error-bars correspondingly. Note that while the error in the $`E(B-V)`$ reddening value has not been applied to the error-bars of individual points, we include it in the error budget of the fitted slope. The results are plotted in Figure 3 for both the observed and the dereddened magnitudes. The corrected spectrum is well fitted by a single power-law with $`\beta =0.58\pm 0.08`$. If we use the lower value of $`E(B-V)=0.46`$, as discussed above, the corresponding number is $`\beta =0.87\pm 0.08`$. We have assumed that there is no extinction within the host galaxy, but any reddening from the host will make the intrinsic spectrum more flat and reduce the derived value of $`\beta `$.
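Schematically, the conversion and dereddening steps amount to the following. The zero points and $`A_\lambda /E(B-V)`$ coefficients in the table are approximate standard values quoted here only for illustration; the actual analysis uses the Fukugita et al., Mégessier, and SFD values cited above.

```python
import numpy as np

# illustrative numbers: effective wavelength (micron),
# zero-point flux (Jy), A_lambda / E(B-V)
BANDS = {'R': (0.66, 3064.0, 2.673),
         'J': (1.25, 1595.0, 0.902),
         'K': (2.20,  667.0, 0.367)}

def dereddened_sed(mags, EBV=0.626):
    """Convert magnitudes to F_nu (Jy), correct for Galactic
    extinction, and fit a single power law F_nu ~ nu^(-beta)."""
    lam = np.array([BANDS[b][0] for b in mags])
    f0 = np.array([BANDS[b][1] for b in mags])
    rl = np.array([BANDS[b][2] for b in mags])
    m = np.array(list(mags.values()))
    flux = f0 * 10 ** (-0.4 * (m - rl * EBV))   # dereddened F_nu
    nu = 2.998e14 / lam                          # frequency in Hz
    beta = -np.polyfit(np.log10(nu), np.log10(flux), 1)[0]
    return flux, beta
```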
A radio observation (Taylor & Berger 1999; Frail et al. 2000) found the afterglow at 8.5 GHz to be 960$`\pm 67`$ $`\mu `$Jy on 1999 Dec. 18.16. Adjusting our $`K`$-band flux to this date, we find a power-law index between the radio and near-IR to be $`0.15`$, much more shallow than the index between the near-IR and the optical. The Chandra X-ray observatory also observed the GRB on Dec 18.2 (UT) (Piro et al. 1999) and converting to a flux density we find a power-law index between the IR and X-ray points of $`\beta =0.8\pm 0.1`$, slightly steeper than a simple extrapolation from the IR/optical data. Figure 4 shows the overall spectrum from the radio to the X-rays and evidence for a spectral break at frequencies less than the $`K`$-band.
The GRB afterglow model described by Sari, Piran, & Halpern (1999) can be used to compare the observed spectral and temporal power-law indices. The IR/optical region is within the cooling regime (Figure 4), so the observed spectral slope of $`\beta =0.6`$ ($`0.8`$ for IR to X-ray) gives an estimate of the electron distribution index of $`p=1.2`$ ($`1.6`$). For a spherical shock, we then expect a temporal index of $`\alpha =(3\beta -1)/2=0.4`$ ($`0.7`$), much shallower than the observed index. For a jet, however, the expected light curve index is $`\alpha =2\beta =1.2`$ ($`1.6`$), close to the observed value of 1.4. At frequencies below the cooling break, we expect a spectral index of 0.1 based on the IR/optical slope. This is similar to the observed index of 0.15, but it should be noted that other spectral breaks may be present between the radio and IR points.
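These closure relations are easy to tabulate. A sketch following the scalings quoted above from Sari, Piran, & Halpern (1999) for frequencies above the cooling break:

```python
def closure(beta):
    """Closure relations above the cooling break (Sari, Piran & Halpern 1999)."""
    p = 2.0 * beta                          # electron index from spectral slope
    alpha_sphere = (3.0 * beta - 1.0) / 2.0  # spherical shock temporal index
    alpha_jet = 2.0 * beta                   # collimated jet temporal index
    return p, alpha_sphere, alpha_jet

for beta in (0.6, 0.8):
    p, a_s, a_j = closure(beta)
    print(f"beta={beta}: p={p:.1f}, sphere alpha={a_s:.1f}, jet alpha={a_j:.1f}")
```

For $`\beta =0.6`$ this reproduces the numbers in the text: a spherical shock predicts $`\alpha =0.4`$, far from the observed decay, while a jet predicts $`\alpha =1.2`$.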
## 5 CONCLUSIONS
We present well-calibrated $`RJK`$ observations of GRB 991216. Our data indicate that the decay of the optical afterglow is well represented by a single power-law with index $`\alpha =1.36\pm 0.04`$ from 0.5 to four days after the burst. Combining published late-time $`R`$-band observations with our data suggests that a single power-law is a good fit out to 20 days after the burst.
The optical spectral energy distribution, corrected for significant Galactic reddening, is well fitted by a single power-law with an index of $`\beta =0.58\pm 0.08`$. However, when the possible systematic error in the SFD extinction map is considered, the index may be somewhat steeper ($`\beta =0.87\pm 0.08`$). A Chandra X-ray observation obtained near the time of our photometry provides a spectral index between the near IR and X-rays of $`\beta =0.8\pm 0.1`$.
A comparison between the spectral and temporal power-law indices suggests that the afterglow is not consistent with a simple spherical shock model. The IR/optical light curve and colors are better matched by a shock produced by a collimated jet.
S. Barthelmy, the organizer of the GRB Coordinates Network (GCN), is recognized for his extremely useful effort. Support for M. A. P. (HF-01099.01-97A) and K. Z. S. (HF-01124.01-99A) was provided by NASA through Hubble Fellowship grants from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. RPK and SJ acknowledge NSF support through AST98-19825 and a NSF Graduate Research Fellowship.
# Explosion Implosion Duality and the Laboratory Simulation of Astrophysical Systems
## 1 Introduction
There is much interest at present in the possible use of the new generation of high-power laser facilities (in particular the National Ignition Facility at Livermore and the Laser MegaJoule in Bordeaux) to simulate astrophysical phenomena such as supernovae. At first sight this programme appears to suffer from one obvious drawback. The phenomena one wishes to simulate generally involve explosions while the laser facilities are designed to produce implosions. Remarkably, as we will show, this is not a problem. Under certain, not too restrictive, conditions there exists an exact mathematical duality which allows one to transform an explosion problem to an implosion problem and vice versa. Thus it is possible, in a precise sense, to use implosion experiments to simulate exploding systems.
## 2 The duality transformation
The Euler equations of perfect gas dynamics can be conveniently written in the form
$`{\displaystyle \frac{D\rho }{Dt}}`$ $`=`$ $`-\rho \nabla \cdot 𝐔`$ (1)
$`{\displaystyle \frac{D𝐔}{Dt}}`$ $`=`$ $`-{\displaystyle \frac{\nabla p}{\rho }}`$ (2)
$`{\displaystyle \frac{D\mathcal{E}}{Dt}}`$ $`=`$ $`-(\mathcal{E}+p)\nabla \cdot 𝐔`$ (3)
where $`D/Dt`$ denotes the Lagrangian, material or convective derivative defined by
$$\frac{D}{Dt}=\frac{\partial }{\partial t}+𝐔\cdot \nabla $$
(4)
and $`\rho `$ is the mass density, $`\mathcal{E}`$ the thermal energy density, $`p`$ the pressure and $`𝐔`$ the velocity. In addition to these differential equations one algebraic relation is needed: an equation of state relating the pressure to the mass and energy densities, the simplest being a polytropic equation of state,
$$p=(\gamma -1)\mathcal{E}$$
(5)
with $`\gamma `$ a constant. These equations are mathematically equivalent to the mass, momentum and energy conservation equations in smooth regions of the flow. Only at shocks is it necessary to revert to the fundamental conservation forms to recover the correct shock jump conditions.
Now consider the following transformation of the dependent and independent variables,
$`𝐱^{}`$ $`=`$ $`a(t)^{-1}𝐱,`$ (6)
$`t^{}`$ $`=`$ $`{\displaystyle \int a(t)^{-2}𝑑t},`$ (7)
$`\rho ^{}`$ $`=`$ $`a(t)^3\rho ,`$ (8)
$`p^{}`$ $`=`$ $`a(t)^5p,`$ (9)
$`𝐔^{}`$ $`=`$ $`a(t)𝐔-\dot{a}(t)𝐱,`$ (10)
$`\mathcal{E}^{}`$ $`=`$ $`a(t)^5\mathcal{E}`$ (11)
where for the moment $`a(t)`$ is an arbitrary function of time. Apart from the, at first sight rather strange, time-dependent scaling factors this is essentially a transformation to a coordinate system which is expanding or contracting with a scale factor $`a(t)`$. If we define $`a^{}=a^{-1}`$ and note that
$$\frac{da^{}}{dt^{}}=a^2\frac{d}{dt}\left(\frac{1}{a}\right)=-\frac{da}{dt}$$
(12)
it is easy to see that it is an involutionary transformation with inverse obtained by simply interchanging the starred and unstarred quantities
$`𝐱`$ $`=`$ $`a^{}(t^{})^{-1}𝐱^{},`$ (13)
$`t`$ $`=`$ $`{\displaystyle \int a^{}(t^{})^{-2}𝑑t^{}},`$ (14)
$`\rho `$ $`=`$ $`a^{}(t^{})^3\rho ^{},`$ (15)
$`p`$ $`=`$ $`a^{}(t^{})^5p^{},`$ (16)
$`𝐔`$ $`=`$ $`a^{}(t^{})𝐔^{}-\dot{a}^{}(t^{})𝐱^{},`$ (17)
$`\mathcal{E}`$ $`=`$ $`a^{}(t^{})^5\mathcal{E}^{}.`$ (18)
Let us now consider how the dynamical equations transform under this change of variables. It is easy to see that
$`{\displaystyle \frac{D}{Dt}}`$ $`\rightarrow `$ $`a^{-2}{\displaystyle \frac{D}{Dt^{}}}`$ (19)
$`\nabla `$ $`\rightarrow `$ $`a^{-1}\nabla ^{}`$ (20)
and thus, after some elementary algebra,
$`{\displaystyle \frac{D\rho ^{}}{Dt^{}}}`$ $`=`$ $`-\rho ^{}\nabla ^{}\cdot 𝐔^{},`$ (21)
$`{\displaystyle \frac{D𝐔^{}}{Dt^{}}}`$ $`=`$ $`-{\displaystyle \frac{\nabla ^{}p^{}}{\rho ^{}}}+{\displaystyle \frac{\ddot{a}^{}}{a^{}}}𝐱^{},`$ (22)
$`{\displaystyle \frac{D\mathcal{E}^{}}{Dt^{}}}`$ $`=`$ $`-(\mathcal{E}^{}+p^{})\nabla ^{}\cdot 𝐔^{}+{\displaystyle \frac{\dot{a}^{}}{a^{}}}(3p^{}-2\mathcal{E}^{}).`$ (23)
Remarkably, we see that if the scale factor $`a`$ is such that
$$\ddot{a}^{}=\frac{d^2a^{}}{dt^2}=-a^2\ddot{a}=0$$
(24)
and the gas is a polytrope of exponent $`5/3`$ with $`p=\frac{2}{3}\mathcal{E}`$, then the Euler equations are invariant under this transformation. Note that, because the Euler equations in conservation form are algebraically equivalent to the simplified forms, the conservation forms are also invariant and thus the whole structure of ideal gas dynamics, including the Rankine-Hugoniot shock relations, is preserved.
The condition that the acceleration of the scale factor be zero, $`\ddot{a}=0`$, requires that $`a(t)`$ be a linear function of $`t`$ and, without loss of generality, we can take $`a=t/t_0`$ where $`t_0`$ is a constant characteristic expansion time. The time transformation is then
$$t^{}=\int \frac{dt}{a(t)^2}=t_0^2\int \frac{dt}{t^2}=\text{const.}-\frac{t_0^2}{t}$$
(25)
and it is convenient to set the constant to zero and choose
$$t^{}=-\frac{t_0^2}{t},\qquad t=-\frac{t_0^2}{t^{}}.$$
(26)
The initial singularity of the expansion in physical space occurs at $`t=0`$ and is mapped to $`t^{}=-\infty `$; the long-term behaviour as $`t\rightarrow +\infty `$ is mapped to $`t^{}=0`$. It is important to note that in the dual representation the time variable is bounded from above, $`t^{}<0`$, whereas in physical space it is bounded from below, $`t>0`$.
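With $`a=t/t_0`$ and this choice of integration constant, the transformation pair (6)-(18) can be checked numerically. The following is a minimal round-trip sketch (not from the paper), taking $`t_0=1`$ and arbitrary state values:

```python
import numpy as np

t0 = 1.0                                   # characteristic expansion time

def forward(t, x, rho, p, U):
    """Physical -> dual variables for a(t) = t/t0 (eqs. 6-11 with eq. 26)."""
    a, adot = t / t0, 1.0 / t0
    return (-t0**2 / t, x / a, a**3 * rho, a**5 * p, a * U - adot * x)

def backward(tp, xp, rhop, pp, Up):
    """Dual -> physical variables; a'(t') = 1/a = -t'/t0 (eqs. 13-18)."""
    ap, apdot = -tp / t0, -1.0 / t0
    return (-t0**2 / tp, xp / ap, ap**3 * rhop, ap**5 * pp, ap * Up - apdot * xp)

state = (2.0, 3.0, 1.0, 0.5, 0.7)          # (t, x, rho, p, U), arbitrary numbers
print(np.allclose(backward(*forward(*state)), state))   # -> True (involution)
```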
The remarkable result is that for an ideal gas of point particles with no internal structure (which is what the 5/3 polytrope is) hydrodynamics in a uniformly expanding system is exactly equivalent to hydrodynamics in a static system. This result, or special forms of it, appears to have been discovered a number of times by cosmologists (where the idea of factoring out the general expansion of the universe is very natural); a recent discussion is that of Martel and Shapiro (1998), where they propose the felicitous name of “supercomoving variables” for this transformation. What does not seem to have been generally noted is that this transformation can be used outside the cosmological context (however, Poyet and Spiegel, 1979, did use a variant in an analysis of stellar pulsations).
## 3 Interpretation
The fact that the transformation is exact for the gas of ideal point particles strongly hints that it is derived from a similar result for the free particle motion. In fact there is such a duality, although it is almost trivial. The freely moving point particle moves along a straight line trajectory,
$$𝐱=𝐱_0+𝐯_0t,$$
(27)
with starting point $`𝐱_0`$ and velocity $`𝐯_0`$. If we write this as
$$\frac{𝐱}{t}=𝐯_0+𝐱_0\frac{1}{t}$$
(28)
we see that there is a dual representation of the trajectory in which $`t`$ is replaced by $`1/t`$, lengths are scaled by a factor proportional to time, initial points and final velocities are interchanged, but the trajectory remains a straight line. If collisions are instantaneous, localised and elastic they look the same in either system, and thus in both systems one can write down a Boltzmann equation and then derive the hydrodynamic equations as limits of moments of the Boltzmann equation. This approach also shows that higher order effects, such as viscosity and heat conduction, can formally be treated in the same way; however the resulting transformed transport coefficients will in general have unphysical time dependencies (for an application see Drury and Stewart, 1976).
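The trajectory duality in equation (28) is easily verified numerically: rescaling a sampled straight-line trajectory and fitting shows that the dual motion is again a straight line, with the roles of initial position and velocity interchanged. A small sketch, with $`t_0=1`$ and arbitrary starting values:

```python
import numpy as np

x0, v0 = 2.0, -0.5                  # arbitrary initial position and velocity
t = np.linspace(0.5, 5.0, 20)
x = x0 + v0 * t                     # free streaming in physical time

tp = -1.0 / t                       # dual time (t0 = 1)
xp = x / t                          # dual position, x' = x / a(t)

# Fit x'(t'): from eq. (28), x' = v0 - x0 * t', so the slope is -x0
# and the intercept is v0, with the two roles interchanged.
slope, intercept = np.polyfit(tp, xp, 1)
print(slope, intercept)             # -> -2.0, -0.5  i.e. (-x0, v0)
```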
This analysis also shows that similar results will hold in different numbers of spatial dimensions, but the equation of state will have to correspond to the ideal gas in that number of dimensions. In $`d`$ spatial dimensions it is easy to verify that the “supercomoving” transformation takes the form
$`𝐱^{}`$ $`=`$ $`a(t)^{-1}𝐱,`$ (29)
$`t^{}`$ $`=`$ $`{\displaystyle \int a(t)^{-2}𝑑t},`$ (30)
$`\rho ^{}`$ $`=`$ $`a(t)^d\rho ,`$ (31)
$`p^{}`$ $`=`$ $`a(t)^{d+2}p,`$ (32)
$`𝐔^{}`$ $`=`$ $`a(t)𝐔-\dot{a}(t)𝐱,`$ (33)
$`\mathcal{E}^{}`$ $`=`$ $`a(t)^{d+2}\mathcal{E}`$ (34)
and that the Euler equations are invariant if the gas has a polytropic equation of state such that
$$d\,p=2\mathcal{E}$$
(35)
corresponding to an adiabatic exponent
$$\gamma =1+\frac{2}{d}.$$
(36)
An interesting way of looking at this transformation (for which we are indebted to our colleague Etienne Parizot) is that it provides an analogue in spherical geometry to the freedom that Galilei transformations allow in planar geometry. If we are looking at a planar shock, it is often convenient to transform to a reference frame where the upstream medium, or the downstream medium, or the shock itself, appears stationary. In spherical systems one cannot apply Galilei boosts because the origin is fixed, however this transformation, by allowing one to take out an arbitrary uniform expansion, gives one much the same freedom.
## 4 Application to a Supernova Remnant
Computational studies of the evolution of a Supernova Remnant commonly start with initial conditions of dense pressure-free ejecta expanding ballistically away from the site of the explosion, which it is convenient to locate at the coordinate origin, and interacting with a stationary, or slowly moving, ambient medium of much lower density and negligible pressure. To illustrate the application of the duality transformation let us consider the simple, if somewhat artificial, case of uniform density ejecta interacting with a uniform and stationary ambient medium in perfect spherical symmetry. Then the initial conditions correspond to
$`\rho (r,t)`$ $`=`$ $`\rho _0\left({\displaystyle \frac{t}{t_{\mathrm{SW}}}}\right)^{-3},`$ (37)
$`U(r,t)`$ $`=`$ $`{\displaystyle \frac{r}{t}},`$ (38)
$`p(r,t)`$ $`=`$ $`0`$ (39)
in the region $`r<V_0t`$ occupied by the ejecta ($`V_0`$ is the maximum expansion speed of the ejecta) and
$`\rho (r,t)`$ $`=`$ $`\rho _0`$ (40)
$`U(r,t)`$ $`=`$ $`0,`$ (41)
$`p(r,t)`$ $`=`$ $`0`$ (42)
in the external ($`rV_0t`$) medium of constant density $`\rho _0`$. The sweep-up time $`t_{\mathrm{SW}}`$ corresponds to the point where the ejecta, if expanding unimpeded, would have a density equal to the ambient medium.
This defines the physical problem of expanding ejecta interacting with a stationary environment. Let us now consider the dual problem obtained by applying the transformation with scale factor
$$a(t)=\frac{t}{t_{\mathrm{SW}}}.$$
(43)
Then the coordinates transform as
$`r^{}=t_{\mathrm{SW}}{\displaystyle \frac{r}{t}},`$ (44)
$`t^{}=-{\displaystyle \frac{t_{\mathrm{SW}}^2}{t}}`$ (45)
so that the explosion, which occurs at $`t=0`$ in the physical problem, occurs at $`t^{}=-\infty `$ in the dual problem. Conversely, the asymptotic evolution as $`t\rightarrow \infty `$ in the physical problem is mapped to the behaviour at $`t^{}=0`$ in the dual problem.
The ejecta density in the dual problem is constant,
$$\rho ^{}(r^{},t^{})=a^3\rho (r,t)=\rho _0$$
(46)
and the velocity is zero, $`U^{}=0`$, in $`r^{}<V_0t_{\mathrm{SW}}`$. However, the ambient medium is now time-dependent, with density, in the region $`r^{}\geq V_0t_{\mathrm{SW}}`$,
$$\rho ^{}(r^{},t^{})=\left(\frac{t}{t_{\mathrm{SW}}}\right)^3\rho _0=\left(-\frac{t^{}}{t_{\mathrm{SW}}}\right)^{-3}\rho _0$$
(47)
and velocity
$$U^{}(r^{},t^{})=\frac{r^{}}{t^{}}.$$
(48)
Thus in the dual problem we have stationary ejecta interacting with an imploding ambient medium whereas in the physical problem we have exploding ejecta interacting with a stationary ambient medium. Instead of the initial explosion at $`t=0`$ in the physical problem we have the final crunch at $`t^{}=0`$ in the dual problem.
The evolution in physical space of the supernova remnant structure has been often discussed and is well-known (e.g. Truelove and McKee, 1999; Dwarkadas and Chevalier, 1998). At early times, $`tt_{\mathrm{SW}}`$, the bulk of the ejecta expand ballistically except for a thin interaction region on the outside consisting of a forward shock running into the ambient medium, a zone of hot shocked ambient medium, a contact discontinuity, a zone of shocked ejecta and a reverse shock propagating slowly into the ejecta. At later times, when the mass of swept up ambient material becomes comparable to the ejecta mass, the reverse shock detaches itself from the contact discontinuity and implodes on the centre, and the outer forward shock approximates the self-similar Sedov solution for a strong point explosion in a cold gas.
In the dual system the interaction looks a little different, and in some ways is simpler. Initially we have the stationary sphere of high density material (which for convenience we continue to call the ejecta, although in the dual representation it has not been ejected but is simply sitting there) surrounded by a very low density converging flow. The inflowing gas has to decelerate at a shock which stands about 10% further out in radius than the edge of the ejecta. Writing for convenience $`\tau =-t^{}/t_{\mathrm{SW}}`$ (positive and decreasing towards the crunch), there is an exact similarity solution in which $`U^{}\propto \tau ^{-1}`$, $`\rho ^{}\propto \tau ^{-3}`$ and $`p^{}\propto \tau ^{-5}`$ in the region external to the sphere of ejecta. This steeply rising pressure ($`\propto \tau ^{-5}`$) drives the reverse shock into the ejecta and starts the implosion of the ejecta.
At later times, as the ejecta collapse, the shock in the imploding ambient medium also moves inwards, thereby reducing the rate of increase of the pressure. Transforming the Sedov solution to the dual system, we see that the shock radius scales as
$$r^{}\propto \tau ^{3/5}$$
(49)
and the postshock pressure as
$$p^{}\propto \tau ^{-19/5}.$$
(50)
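The scalings (49) and (50) follow mechanically from the physical-frame Sedov law $`r\propto t^{2/5}`$. A sketch of the bookkeeping with SymPy, keeping only proportionalities (numerical prefactors and $`\rho _0`$ dropped):

```python
import sympy as sp

t, t_sw, tau = sp.symbols('t t_SW tau', positive=True)
r = t ** sp.Rational(2, 5)          # Sedov shock radius, up to a constant
p = sp.diff(r, t) ** 2              # postshock pressure ~ rho0 * rdot^2
a = t / t_sw                        # scale factor of the transformation

# Dual variables, with tau = -t'/t_SW = t_SW/t, i.e. t = t_SW/tau
r_dual = sp.simplify((r / a).subs(t, t_sw / tau))
p_dual = sp.simplify((a**5 * p).subs(t, t_sw / tau))
print(r_dual)                       # carries tau**(3/5), eq. (49)
print(p_dual)                       # carries tau**(-19/5), eq. (50)
```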
Figure 1 attempts to show schematically the relation between the two representations. We note in passing that the dual representation is also useful for analytic and numerical studies; this aspect will be explored in a companion paper (Dwarkadas and Drury, in preparation).
## 5 Prospects for laboratory simulations
The perfectly symmetric explosion is neither realistic nor especially interesting; it is the easiest case to analyse numerically and there is no reason to suppose that a laboratory simulation would yield any additional information. However reality is more complicated. It is clear that the ejecta emerging from real supernova explosions are highly nonuniform on a wide range of scales and that to calculate the resulting remnant evolution in three dimensions is likely to remain a computationally challenging problem for some considerable time (cf Arnett, 1999).
The interesting implication of this work is that it should be possible with the new generation of implosion facilities to simulate precisely this problem: the interaction of highly structured ejecta with their surroundings, including all the effects of spherical geometry. One can easily imagine constructing a solid target whose density distribution models the density distribution of the expanding ejecta. If this target is then used in an implosion experiment, and if the momentum loading on the surface is tailored to rise in the same manner as the pressure behind the forward shock in the dual system (a steep initial rise as $`\tau ^{-5}`$ flattening to $`\tau ^{-3.8}`$), the evolution of the internal structures, including all the turbulent mixing, instabilities and shock formation, should be exactly replicated.
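To make the required drive concrete, the sketch below tabulates a pressure history with the two limiting slopes joined continuously at an assumed transition epoch; the normalisation and transition point are free parameters of the illustration, to be matched to a specific target.

```python
import numpy as np

p0, tau_c = 1.0, 0.3                 # assumed normalisation and transition epoch

def drive_pressure(tau):
    """Piecewise power-law drive: tau^-5 early, easing to tau^-19/5 late,
    matched continuously at tau_c (tau decreases towards the crunch)."""
    tau = np.asarray(tau, dtype=float)
    early = p0 * tau ** -5.0
    late  = p0 * tau_c ** (-5.0 + 19.0 / 5.0) * tau ** (-19.0 / 5.0)
    return np.where(tau > tau_c, early, late)

tau = np.linspace(1.0, 0.05, 200)    # dual time running towards the crunch
p_drive = drive_pressure(tau)
```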
We emphasise finally that the transformation discussed in this paper is additional to, and complements, the well-known linear scaling relations as excellently discussed by Ryutov et al. (1999) in the astrophysical context, or Connor and Taylor (1977) in the plasma physics context. Dimensional similarity and scaling arguments are obviously central to any attempt at laboratory-scale simulation of astrophysical systems; however, precisely because they are very general and linear, they cannot turn an explosion into an implosion. The remarkable nonlinear symmetry discussed in this paper is specific to the ideal gas equation of state, but subject to this constraint it gives a powerful new degree of freedom in simulation studies by allowing an arbitrary uniform expansion or contraction to be factored out, thereby transforming an explosion problem into an implosion one or vice versa.
## 6 Acknowledgments
This work was in part supported by the EU under the TMR programme, contract FMRX-CT98-0168. Some of it was carried out while LD was a visitor at the Research Centre for Theoretical Astrophysics of the University of Sydney.
## 7 References
Arnett, D., 1999, astro-ph/9909031
Connor, J. W. and Taylor, J. B., 1977, Nuclear Fusion, 17, 1047
Drury, L. O'C. and Stewart, J. M., 1976, MNRAS, 177, 377
Dwarkadas, V. V. and Chevalier, R. A., 1998, ApJ, 497, 807
Martel, H. and Shapiro, P. R., 1998, MNRAS, 297, 467
Poyet, J. P. and Spiegel, E. A., 1979, AJ, 84, 1918
Ryutov, D., et al., 1999, ApJ, 518, 821
Truelove, J. K. and McKee, C. F., 1999, ApJS, 120, 299