no-problem/9907/cond-mat9907311.html | ar5iv | text
# Posture Sway and the Transition Rate for a Fall
## I Introduction
A problem of considerable interest in human body motion dynamics is the prediction of the probability to fall in a given environment. Essential information is contained in the small amplitude displacement noise exhibited by the body during quiet standing. Our purpose is to explore how the quiet standing noise data may be used to compute the probability (more precisely the transition rate per unit time) of falling in various external environments.
In the previous work of other groups, the data on body displacement noise were taken using a multicomponent force plate upon which the subject quietly stands. In the present work, we employ a sound wave assessment (SWA) device which measures body displacements using sound wave echoes; e.g., by measuring the time taken for a sound wave pulse to travel from the source to the standing subject and/or the time taken for a sound wave pulse to travel from the standing subject to a receiver. The experimental details concerning the SWA device will be discussed in Sec.II.
Three fundamental movement strategies to maintain a standing balance are well known: (i) body movement from the ankles, (ii) body movement from the hips, and (iii) stepping motions. Stepping motions will not be considered any further in this work, which concerns quiet standing. As described in Sec.III, we find very different frequency scales for hip and ankle motions. The ankle motions proceed more slowly than the hip motions by about a factor of ten. Another feature of the ankle motions, also discussed in Sec.III, concerns the stability angle with respect to the vertical direction beyond which a subject (without bending or stepping) will fall. The falling instability will be modeled by an appropriate potential in Sec.IV. Further, the noise itself will be modeled using the Fokker-Planck stochastic equation with a noise temperature intimately related to the mean square velocity fluctuations of the standing subject.
Apart from the analytic work on stochastic equations, we have performed computer simulations of quiet standing subjects. In Sec.V we compare the computer simulations with data taken on subjects using the SWA device. The agreement between theory and experiment appears satisfactory.
In Sec.VI we make use of the Kramers potential well “escape” formula. This determines the transition rate per unit time for a subject to fall. The Kramers result for the transition rate $`\mathrm{\Gamma }_K`$ for a fall is given by
$$\mathrm{\Gamma }_K=\left(\frac{\omega _0Q}{\pi \sqrt{2}}\right)\mathrm{exp}\left(-\left(\frac{\omega _0a}{2V}\right)^2\right),$$
(1)
where $`\omega _0`$ is the frequency of the ankle oscillation mode, $`Q`$ is the quality factor for this mode, $`a`$ is the critical displacement amplitude beyond which the ankle mode is unstable, and $`V`$ is the root mean square velocity which is determined by the environmental noise temperature.
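As a numerical illustration, Eq. (1) can be evaluated directly. The parameter values below (mode frequency, quality factor, critical displacement, rms velocity) are illustrative assumptions of ours, not measurements from this work; a minimal sketch:

```python
import math

# Sketch of Eq. (1). All parameter values below are illustrative
# assumptions (a ~30 s ankle period, a moderate quality factor, a
# ~14 cm reach amplitude, a few mm/s of rms sway velocity), not
# measurements from this work.

def kramers_rate(omega0, Q, a, V):
    """Transition rate per unit time for a fall, Eq. (1)."""
    prefactor = omega0 * Q / (math.pi * math.sqrt(2.0))
    return prefactor * math.exp(-(omega0 * a / (2.0 * V)) ** 2)

omega0 = 2.0 * math.pi / 30.0   # rad/s, ankle-mode frequency (assumed)
Q = 5.0                         # quality factor (assumed)
a = 0.14                        # m, critical displacement (assumed)
V = 0.004                       # m/s, rms sway velocity (assumed)

rate = kramers_rate(omega0, Q, a, V)
print(f"Gamma_K = {rate:.3e} 1/s, mean time to fall ~ {1.0 / rate:.3e} s")
```

Note how the rate collapses as $`V`$ shrinks: the exponent grows quadratically in $`1/V`$, which is the essential singularity discussed below.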
In the discussion Sec.VII, we note that the transition rate $`\mathrm{\Gamma }_K`$ for a fall has an essential singularity, $`\mathrm{\Gamma }_K\to 0`$ as the root mean square velocity $`V\to 0`$. This implies that $`V`$ is the most sensitive parameter which may be used to determine the probability of falling. The implications will be discussed in the concluding Sec.VII of this work.
## II The Sound Wave Assessment Device
The SWA device employs two small ultrasonic transducers, each of which can send or receive ultrasonic signals. The first is positioned on a stable laboratory stand. The second is attached with a belt to the lower back of the quietly standing subject. Each transducer sends sixteen ultrasonic pulses every $`3.3\times 10^{-2}sec`$, which are later detected by the opposite transducer. The distance between the two transducers can then be obtained from the time required for the pulses to travel from the sender to the receiver.
A single transducer system has been used in industry, e.g. by Polaroid for photography applications, in order to determine the distance to an object. The single transducer is used to both emit and detect its own echo. Our use of two transducers improves the accuracy by having less air absorption as well as less spatial dispersion (fanning out) of the pulses. Our position measurements are quite precise; i.e. displacements are measured to within an instrumental accuracy of $`0.02cm`$ in a bandwidth of $`12kHz`$. This yields a displacement noise error of $`\delta x\approx 2(\mu m/\sqrt{Hz})`$. Thus, we can record the fine movements of a subject (both towards and away from a static transducer) as a function of time.
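The time-of-flight conversion behind the SWA measurement can be sketched as follows; the speed of sound and the sample travel time are illustrative assumptions, not instrument constants:

```python
# Sketch of the SWA time-of-flight conversion: the transducer
# separation follows from the one-way pulse travel time and the speed
# of sound. The speed of sound and the sample time below are
# illustrative assumptions, not instrument constants.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def distance_from_travel_time(t_seconds):
    """One-way sender-to-receiver distance for a measured travel time."""
    return SPEED_OF_SOUND * t_seconds

# A subject about 1 m from the static transducer corresponds to a
# travel time near 3 ms; the quoted 0.02 cm accuracy then corresponds
# to timing resolution below a microsecond.
print(f"distance = {distance_from_travel_time(2.9e-3) * 100:.1f} cm")
```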
## III Frequency Scales for Hip and Ankle Strategies
Two fundamental body movement strategies (to maintain a standing balance) are apparent in the SWA measurements discussed below. These are the quiet standing ankle movements and hip movements, which occur in widely different frequency bands. The hip motions take place on a time scale of $`\tau _{hip}\approx 2sec`$, while the ankle motions take place on a time scale of $`\tau _{ankle}\approx 30sec`$. In order to measure ankle motions, the durations of our measurements were rather long, i.e. $`3min`$, while the shorter runs of $`1min`$ duration were sufficient to study hip motions. The short time scales, of less than $`20sec`$, were sufficient to verify the fractal diffusion estimates of hip motions which have appeared in the pioneering studies using the force plate technology.
To illustrate the decomposition of movements into ankle motions and hip motions we plot, in FIG.1, the displacement as a function of time for a typical quiet standing subject. One notes the slowly varying drifts back and forth characteristic of ankle motions along with the more rapid (and in this case) much smaller hip motions.
Were the quiet standing subject to move without hip motions, i.e. move rigidly, then the stiff body would be held steady if and only if the angle $`\vartheta `$ formed with the vertical were less than some critical angle $`\vartheta _c`$. The geometrical cone $`\vartheta <\vartheta _c`$ is called the sway envelope or the cone of stability. A typical value for the critical (forward) angle is $`\vartheta _c\approx 8^o`$. For angles $`\vartheta >\vartheta _c`$, the ankles can no longer hold the subject upright. In the absence of a step or other supports (such as arm motions) a fall will take place. If one considers the displacement of the part of the body at a height $`h`$ above ground, then $`a=h\mathrm{tan}\vartheta _c`$ represents the critical displacement amplitude beyond which the ankle mode is unstable. Such critical displacements are normally measured using standard tests of the subject's forward or backward reach.
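The critical amplitude $`a=h\mathrm{tan}\vartheta _c`$ is a one-line computation; the height used below is an assumed illustrative value:

```python
import math

# The critical displacement a = h * tan(theta_c) of the text. The
# height h is an assumed illustrative value; theta_c ~ 8 degrees is
# the typical forward critical angle quoted above.

def critical_displacement(h_meters, theta_c_degrees):
    """Displacement at height h beyond which the ankle mode is unstable."""
    return h_meters * math.tan(math.radians(theta_c_degrees))

a = critical_displacement(1.0, 8.0)  # h = 1 m (assumed)
print(f"critical displacement a = {a * 100:.1f} cm")
```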
## IV Fokker-Planck-Langevin Theory
The Fokker-Planck-Langevin stochastic approach to quiet standing sway processes has been shown to be extremely useful. In what follows we apply this method to the ankle movements. As discussed in Sec.III, the ankle movements can be described by a slow oscillatory motion modulated by faster random noise.
To mathematically describe the cone of stability, we model the falling instability by the metastable potential
$$U(x)=-\left(\frac{m\omega _0^2a^2}{4}\right)\left[1-\left(\frac{x}{a}\right)^2\right]^2,$$
(2)
where $`\omega _0`$ is the frequency of the ankle oscillation mode, $`m`$ is the mass of the mode described by the displacement $`x`$, and $`a`$ is the critical displacement.
The ankle mode, as shown in the experimental FIG.1, is far from a perfect oscillation. There exists a random force $`f(t)`$ and a finite quality factor $`Q`$ for the mode which, in the Langevin theory, are intimately related. The equation of motion for the mode may be written as
$$m\left[\ddot{x}+\left(\frac{\omega _0}{Q}\right)\dot{x}+\omega _0^2x\right]+U^{}(x)=f(t)+F(t),$$
(3)
where $`F(t)`$ is an applied force, and $`f(t)`$ is a “white noise” random force obeying
$$\langle f(t)f(t^{\prime })\rangle _{noise}=\left(\frac{2m\omega _0k_BT_n}{Q}\right)\delta (t-t^{\prime }),$$
(4)
and where $`T_n`$ is the noise temperature. The noise temperature is by no means equal to the ambient temperature of the room in which the subject stands. Rather, the noise temperature describes all of those fluctuations which couple into the coordinate $`x`$. In particular, all of the environmental and internal fluctuations which couple into $`x`$ contribute to the root mean square velocity $`V`$
$$V^2=\langle \dot{x}^2\rangle $$
(5)
which can be obtained from experimental data such as that pictured in FIG.1. The noise temperature $`T_n`$ is here defined in terms of the root mean square velocity $`V`$ via the equipartition theorem
$$\left(\frac{mV^2}{2}\right)=\left(\frac{k_BT_n}{2}\right).$$
(6)
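Eq. (6) can be evaluated directly. The mode mass and rms velocity below are assumed values, chosen only to show how far $`T_n`$ is from any ambient temperature:

```python
# Direct evaluation of Eq. (6), k_B T_n = m V^2. The mode mass and rms
# velocity below are assumed values, chosen only for illustration.

K_B = 1.380649e-23  # J/K, Boltzmann constant

def noise_temperature(mass_kg, v_rms):
    """Effective noise temperature from the rms velocity."""
    return mass_kg * v_rms ** 2 / K_B

# ~50 kg effective mode mass, ~4 mm/s rms sway velocity: the result is
# astronomically larger than room temperature, which is the point --
# T_n measures physiological noise, not ambient thermal noise.
print(f"T_n ~ {noise_temperature(50.0, 0.004):.2e} K")
```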
Thus, for the case in which the applied force is written $`F(t)=m\beta (t)`$, the random force is written $`f(t)=m\alpha (t)`$ and the potential $`U(x)=m\varphi (x)`$, the Langevin equation reads
$$\ddot{x}+\left(\frac{\omega _0}{Q}\right)\dot{x}+\omega _0^2x+\varphi ^{}(x)=\alpha (t)+\beta (t),$$
(7)
where
$$\varphi (x)=-\left(\frac{\omega _0^2a^2}{4}\right)\left[1-\left(\frac{x}{a}\right)^2\right]^2,$$
(8)
and
$$\langle \alpha (t)\alpha (t^{\prime })\rangle _{noise}=\left(\frac{2\omega _0V^2}{Q}\right)\delta (t-t^{\prime }).$$
(9)
Eqs.(7), (8) and (9), with a hip modulation given by
$$\beta (t)=\beta _{max}\mathrm{cos}(\omega _{hip}t)$$
(10)
can be used to perform a computer simulation of the ankle mode of motion.
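A minimal sketch of such a simulation, using a simple Euler-Maruyama discretization of Eqs. (7)-(10), is given below. All parameter values are illustrative assumptions (not fitted to any subject), and $`\varphi ^{\prime }(x)`$ is taken from the metastable quartic potential of Eq. (8) with its minimum at $`x=0`$:

```python
import math
import random

# A minimal Euler-Maruyama sketch of Eqs. (7)-(10). All parameter
# values are illustrative assumptions, not fitted to any subject, and
# phi'(x) is taken from the metastable quartic potential of Eq. (8)
# with its minimum at x = 0.

def simulate(omega0=2.0 * math.pi / 30.0, Q=5.0, a=0.14, V=0.003,
             beta_max=1e-4, omega_hip=2.0 * math.pi / 2.0,
             dt=1e-3, n_steps=180_000, seed=1):
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * omega0 * V ** 2 / (Q * dt))  # white-noise strength, Eq. (9)
    x, v, xs = 0.0, 0.0, []
    for i in range(n_steps):
        t = i * dt
        phi_prime = omega0 ** 2 * x * (1.0 - (x / a) ** 2)
        beta = beta_max * math.cos(omega_hip * t)        # hip modulation, Eq. (10)
        acc = (-(omega0 / Q) * v - omega0 ** 2 * x - phi_prime
               + sigma * rng.gauss(0.0, 1.0) + beta)
        v += acc * dt
        x += v * dt                                      # semi-implicit Euler step
        xs.append(x)
    return xs

xs = simulate()
rms = math.sqrt(sum(x * x for x in xs) / len(xs))
print(f"rms displacement over 180 s: {rms:.4f} m")
```

With these assumed parameters the barrier factor $`(\omega _0a/2V)^2`$ is large, so the trajectory fluctuates inside the cone of stability rather than escaping.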
For the healthy quiet standing subject, shown in FIG.1, an output of the random motion simulation is shown in FIG.2. The qualitative similarity between a number of quiet standing subjects and simulations allows us to conclude that the Langevin theory of postural sway is reasonable.
The Langevin theory can also be analytically expressed as an equation for the probability $`P(x,v,t)dxdv`$ for the subject to have a velocity in the interval $`dv`$ and a displacement in the interval $`dx`$. For example, without the hip modulation, the Fokker-Planck equation reads
$$\left(\frac{\partial P}{\partial t}\right)+v\left(\frac{\partial P}{\partial x}\right)=$$
$$\frac{\partial }{\partial v}\left\{\left(\frac{\omega _0v}{Q}\right)P+\varphi ^{\prime }(x)P\right\}+\left(\frac{\omega _0V^2}{Q}\right)\left(\frac{\partial ^2P}{\partial v^2}\right).$$
(11)
The Fokker-Planck and the Langevin formulations of the problem are equivalent. The former is useful for analytical calculations while the latter is useful for computer simulations.
## V Postural Sway Data
Not all of the quiet standing subjects measured exhibited large undamped ankle mode oscillations. For some quiet standing subjects the ankle movements were strongly damped. Shown in FIG.3 is an example of a subject with suppressed ankle movements. The hip oscillations are still clearly visible.
The computer simulations for the overdamped case proceed exactly as previously described. However the quality factor of the mode is taken to be much smaller than for the underdamped case. For the quiet standing subject, shown in FIG.3, an output of the random motion simulation is shown in FIG.4. We again conclude, for the overdamped case, that the Langevin theory of postural sway is reasonable.
The ankle movement is more clearly seen in the mean square displacement fluctuation defined in mathematical Brownian motion theory as
$$\mathrm{\Delta }x(\tau )^2=\underset{T\to \mathrm{\infty }}{lim}\frac{1}{T}\int _{t_0-(T/2)}^{t_0+(T/2)}\left|x(t+\tau )-x(t)\right|^2dt.$$
(12)
In physical systems the time $`T`$ of a run is finite. In our case $`T=180sec`$. We can then plot $`\mathrm{\Delta }x(\tau )^2`$ on the interval $`0<\tau <140sec`$.
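For a finite record, the average of Eq. (12) becomes a sum over the available start times. A minimal sketch (not the authors' code), checked against a uniformly drifting record for which the answer is known analytically:

```python
# Finite-record version of Eq. (12): average the squared displacement
# over all available start times. A sketch, not the authors' code;
# checked against a uniformly drifting record for which
# Delta x(tau)^2 = (c*tau)^2 exactly.

def msd(x, lag):
    """Mean square displacement at an integer lag (in samples)."""
    n = len(x) - lag
    return sum((x[i + lag] - x[i]) ** 2 for i in range(n)) / n

dt, c = 0.1, 2.0
x = [c * i * dt for i in range(1000)]
print(msd(x, lag=50))  # analytically (c * 50 * dt)^2 = 100
```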
For the underdamped experimental data of FIG.1, we show in FIG.5 a plot of $`\mathrm{\Delta }x(\tau )^2`$. The ankle movements show an oscillation in a very clear form when the averaging procedure of Eq.(12) is performed. The hip movement modulations are largely smoothed away by this same averaging procedure.
The data produced by the computer simulations can also be averaged according to Eq.(12) and then compared with mean square fluctuations taken from the experimental data. The hip movement modulations are barely noticeable in $`\mathrm{\Delta }x(\tau )^2`$.
In FIG.6 we have plotted $`\mathrm{\Delta }x(\tau )^2`$ for the simulation in FIG.2. We note that the oscillations for underdamped ankle movements are clearly present, although the amplitude of the oscillations is high when compared with the experimental FIG.5. However, we still conclude that the Langevin model is qualitatively reasonable.
For the overdamped case, the data in FIG.3 gives rise to an experimental $`\mathrm{\Delta }x(\tau )^2`$ shown in FIG.7. Notice for the overdamped case the absence of oscillations in the ankle motion. The behavior of $`\mathrm{\Delta }x(\tau )^2`$ on the time scale shown is qualitatively not very far from simple Brownian diffusion.
This diffusion-like behavior is also present in the computer simulation shown in FIG.8. The simulated amplitudes are again large compared with the experimental amplitudes.
The Fokker-Planck-Langevin theory is mainly of qualitative significance when comparing simulations to experimental data.
## VI Kramers “Escape Rate” for a Fall
Although the form of the metastable potential in Eq.(2) is commonly used, the potential has not yet been shown to be unique. The qualitative features required are a minimum near $`x=0`$ and a barrier to falling near $`x=a`$. Eq.(2) is plotted in FIG.9.
A quiet standing subject exhibits oscillations about the potential minimum. The random force at a noise temperature $`T_n`$ will at some time push the displacement to values $`|x|>a`$ over a potential maximum, at which time there is a fall. The potential barrier protecting the subject from a fall is given by
$$\mathrm{\Delta }U=U(|a|)-U(0)=\left(\frac{m\omega _0^2a^2}{4}\right).$$
(13)
Neglecting hip motion modulations, the Kramers equation for the escape rate contains the Boltzmann factor for overcoming a barrier; it is
$$\mathrm{\Gamma }_K=\left(\frac{\omega _0Q}{\pi \sqrt{2}}\right)e^{-\mathrm{\Delta }U/k_BT_n}.$$
(14)
In terms of experimental parameters
$$\mathrm{\Gamma }_K=\left(\frac{\omega _0Q}{\pi \sqrt{2}}\right)\mathrm{exp}\left(-\left(\frac{\omega _0a}{2V}\right)^2\right).$$
(15)
We used the parameters in Eq.(15) to describe twenty healthy subjects (between sixteen and forty-five years of age) in street clothing with eyes wide open. The reach test length scale $`a`$ was calculated from the critical angle $`\vartheta _c`$ given in the literature. We find the Kramers predicted “times for a fall” to be anywhere from one week to three years. The time shortens considerably as the mean square velocity increases. Of course, this prediction by no means implies that these subjects will all experience a fall within the next three years! The fall transition rate of our model will be further discussed in the next and concluding section.
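The order-of-magnitude sensitivity of the predicted time for a fall, $`1/\mathrm{\Gamma }_K`$, to the rms velocity $`V`$ can be seen with assumed (not measured) values of the remaining parameters:

```python
import math

# Sensitivity of the predicted time for a fall to the rms velocity V.
# omega0, Q and a are held at assumed illustrative values; only V is
# varied. Small changes in V move 1/Gamma_K by orders of magnitude,
# which is the essential-singularity point made in the text.

def time_for_fall(omega0, Q, a, V):
    rate = (omega0 * Q / (math.pi * math.sqrt(2.0))) \
        * math.exp(-(omega0 * a / (2.0 * V)) ** 2)
    return 1.0 / rate

omega0, Q, a = 2.0 * math.pi / 30.0, 5.0, 0.14
for V in (0.003, 0.004, 0.005):
    days = time_for_fall(omega0, Q, a, V) / 86400.0
    print(f"V = {V * 1000:.0f} mm/s -> 1/Gamma_K ~ {days:.2e} days")
```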
## VII Discussion
We have exhibited measurements of displacement noise with quiet standing subjects along with computer simulations based on a Fokker-Planck-Langevin model. For all ranges of the quality factor $`Q`$, from underdamped $`Q\gg 1`$ to overdamped $`Q\ll 1`$, the Langevin equation computer simulations were in qualitative agreement with the data. In this regard, one should note that the same subject when measured on different days can exhibit both overdamped and/or underdamped behavior, as well as a somewhat different effective noise temperature $`T_n`$, i.e. a somewhat different root mean square velocity $`V`$.
Since the Kramers transition rate for a fall depends sensitively on the root mean square velocity $`V`$, it follows that a person's transition rate for a fall can vary from day to day depending on the “noise temperature”. For example, on a day when a person is tired we conjecture that a fall is more likely than on a day when the person is wide awake. However, we presently have no data with which to prove this conjecture.
While the above Fokker-Planck-Langevin theory seems to be qualitatively correct, the theory overestimates the displacement noise amplitudes and thereby also overestimates the transition rate for a fall. Apart from leaving out the step strategy for avoiding a fall, we believe that we have perhaps also underestimated the subtlety of the hip strategy. Thus far, the hip motions have been described by an added modulation force in our computer simulations, and this inclusion gives at least qualitative agreement with experimental data. Nevertheless, the hip modulations might be more strongly correlated with the center of mass body coordinate than is presently being included. Further improvements on the present model are presently being pursued on both the experimental and theoretical level.
no-problem/9907/nucl-th9907055.html | ar5iv | text
# Fractal Structure of Random Matrices
Supported in part by the CNPq - Brazil and FAPESP.
## Abstract
A multifractal analysis is performed on the universality classes of random matrices and on the ensembles interpolating between them. Our results indicate that the eigenvector probability distribution is a linear sum of two $`\chi ^2`$-distributions throughout the transition between the universality ensembles of random matrix theory and Poisson.
The Anderson localization is a wave phenomenon characterized by a destructive interference that gives rise to inaccessible regions in the configuration space of a physical system. It occurs in many situations; in condensed matter physics, in particular, it is responsible for the metal-insulator phase transition (MIT) caused by increasing disorder in a quantum disordered system, in which, as a consequence, the material undergoes a transformation from a metallic to an insulator phase. The eigenstates of the system extend over all the available space in the metallic phase while, on the insulator side, they are localized around the impurities. This situation implies, as the transformation proceeds, a modification of the fractal dimension of the wavefunctions. This aspect of the transition has been studied through the use of the multifractal analysis which was introduced some years ago.
Random matrix ensembles are another powerful theoretical tool to study transitions from extended to localized states. The phase of the extended states, i.e., the metallic phase in the MIT case, is approached by the universal ensembles of random matrix theory (RMT), namely, the Gaussian Orthogonal Ensemble (GOE), if the system has time-reversal invariance, and the Gaussian Unitary Ensemble (GUE), if not. On the other hand, the insulator phase, where the states become localized, can be simulated by a Poissonian ensemble. Accordingly, the energy levels of the strong mixing metallic states are expected to follow Wigner-Dyson statistics of RMT, while the levels of the uncorrelated localized states of the insulator phase have fluctuations that follow the Poisson statistics. The Maximum Entropy Principle (MEP) has been used to generate matrix ensembles that make the connection between two universal limiting situations, e.g., RMT and Poisson. The joint probability distribution of the matrix elements of the interpolating ensembles obtained is given by
$$P(H,\beta ,\alpha )=K_N\mathrm{exp}[-\alpha _0Tr(H^2)]\mathrm{exp}\{-\beta Tr[(H-H_0)^2]\}$$
(1)
where $`N`$ is the dimension of the matrix, $`K_N`$ is the normalization constant, $`\alpha _0`$ is fixed by a choice of units and $`\beta `$ is the parameter that controls the transition. When $`\beta `$ varies from zero to infinity, the ensemble undergoes a transition from the RMT ensemble to the ensemble of $`H_0`$. By choosing $`H_0`$ to be diagonal, we have the desired transition from RMT to Poisson. This connection between RMT and Poisson is not expected to be universal and, in fact, other possible interpolating ensembles have already been proposed, e.g., band matrices and U(N) invariant ensembles. However, the above formalism has the advantage that the chaoticity parameter $`\beta `$ is easily expressed in terms of the coupling constant. In this letter, we extend the multifractal analysis to the abstract space of random matrices to investigate statistical properties of the eigenstates of this interpolating ensemble.
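Since Eq. (1) is Gaussian in the matrix elements once the square is completed about $`M=\beta H_0/(\alpha _0+\beta )`$, the ensemble can be sampled element by element. The sketch below is our reading of Eq. (1), not code from this work:

```python
import math
import random

# Sampling sketch for Eq. (1): completing the square shows that H is
# Gaussian about M = beta*H0/(alpha0+beta), with diagonal variance
# 1/(2(alpha0+beta)) and off-diagonal variance 1/(4(alpha0+beta))
# from the Tr H^2 weight of a real symmetric matrix. This
# parametrization is our reading of Eq. (1), not code from the paper.

def sample_matrix(h0_diag, alpha0, beta, rng):
    n = len(h0_diag)
    s = alpha0 + beta
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        H[i][i] = rng.gauss(beta * h0_diag[i] / s, math.sqrt(1.0 / (2.0 * s)))
        for j in range(i + 1, n):
            H[i][j] = H[j][i] = rng.gauss(0.0, math.sqrt(1.0 / (4.0 * s)))
    return H

rng = random.Random(0)
h0 = [float(k) for k in range(4)]             # diagonal H0: the Poisson limit
H_goe = sample_matrix(h0, 1.0, 0.0, rng)      # beta = 0 recovers pure GOE
H_poisson = sample_matrix(h0, 1.0, 1e6, rng)  # large beta pins H near H0
print([round(H_poisson[i][i], 2) for i in range(4)])
```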
A multifractal is a mathematical object that is not characterized by a unique dimension but by an infinite spectrum of dimensions. For example, in a classical dynamical system, the probability given by the frequency with which the different “cells” of a partition of the strange attractor are visited, in the time evolution of a chaotic system, is a multifractal. In the quantum case, chaoticity may be defined as the situation in which the eigenfunctions spread uniformly over all components with respect to any basis. Of course, this reflects the fact that the system does not have any good quantum numbers other than the energy, and it is the quantum mechanical equivalent of the absence of integrals of motion in the classical case. This property may be considered as a statement about the dimension of the wave function. On the other hand, when transitions towards regularity occur, one should expect that the presence of conserved quantities implies some localization that should be followed by modifications in the respective dimensions of the eigenstates. This can be seen in Fig. 1, where the logarithm of the components of a random eigenstate of the above ensemble is plotted as a function of its label, calculated at a critical value of the parameter $`\beta `$. A Gaussian fit that makes the localization clear is also shown in the figure. It can be seen that only a small fraction of the components contribute to the normalization. In this sense the state does not occupy all the available space. Here the support is given by the states of the basis and the probability distribution by the way these basis states are populated by the eigenstates of the Hamiltonian.
The critical behavior of the above transition ensemble has been defined in Ref. by applying Shannon’s concept of entropy to the eigenstate components treated as probabilities. As a starting point to introduce the multifractal formalism, we observe that this entropy is a particular case of the Tsallis generalized entropy of a probability distribution $`p_i`$, with $`i=1,\dots ,N_p`$, which in terms of the partition function
$$\chi (q)=\sum _{i=1}^{N_p}p_i^q$$
(2)
is defined by
$$S_q=\frac{1-\chi (q)}{q-1},$$
(3)
for any real $`q.`$ Associated with $`S_q`$, a spectrum of dimension functions, $`D_q,`$ can then be introduced as
$$D_q=\underset{l\to 0}{lim}\frac{\mathrm{ln}\left[1-\left(q-1\right)S_q\right]}{(q-1)\mathrm{ln}l},$$
(4)
where $`l`$ is a characteristic size associated with the partition.
Some positive integer values of $`q`$ have an immediate interpretation. Thus, $`D_0=-\frac{\mathrm{ln}N_c}{\mathrm{ln}l}`$, where $`N_c`$ is the number of occupied cells, i.e., those with probability different from zero, gives the fractal dimension of the support. For $`D_1`$ we obtain
$$D_1=-\frac{S_1}{\mathrm{ln}l}$$
(5)
where
$$S_1=-\sum _ip_i\mathrm{ln}p_i$$
(6)
is, by definition, Shannon’s information entropy, and $`D_1`$ is the information dimension. $`D_2`$ is the correlation dimension.
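The definitions in Eqs. (2)-(5) can be checked numerically. For a distribution that is uniform over $`N_c`$ of $`N_p`$ cells, every $`D_q`$ should reduce to $`\mathrm{ln}N_c/\mathrm{ln}N_p`$; the sketch below (an illustration, not the paper's code) verifies this at finite $`l`$:

```python
import math

# Direct evaluation of Eqs. (2)-(4). For a distribution that is
# uniform over N_c of N_p cells, chi(q) = N_c^(1-q), so every D_q
# reduces to ln(N_c)/ln(N_p) -- a finite-l sanity check of the
# definitions. An illustration, not the paper's code.

def chi(p, q):
    return sum(pi ** q for pi in p if pi > 0.0)

def tsallis_entropy(p, q):
    return (1.0 - chi(p, q)) / (q - 1.0)

def dimension(p, q, l):
    # ln[1 - (q-1) S_q] = ln chi(q), evaluated at a finite cell size l
    return math.log(chi(p, q)) / ((q - 1.0) * math.log(l))

N_p, N_c = 1000, 100
p = [1.0 / N_c] * N_c + [0.0] * (N_p - N_c)
l = 1.0 / N_p
print(dimension(p, 2.0, l), math.log(N_c) / math.log(N_p))
```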
We now assume the scaling $`p_i\sim l^{\alpha ^{\prime }}`$ and, also, that the exponents $`\alpha ^{\prime }`$ vary continuously, as $`i`$ is varied, with a certain distribution $`\rho \left(\alpha ^{\prime }\right)`$ that scales as $`l^{-f\left(\alpha ^{\prime }\right)}`$. Thus the partition function can be transformed into an integral as
$$\chi (q)=\int d\alpha ^{\prime }\rho \left(\alpha ^{\prime }\right)l^{q\alpha ^{\prime }-f\left(\alpha ^{\prime }\right)}$$
(7)
Since $`l`$ is a small quantity, this integral will be determined asymptotically by the value $`\alpha ^{\prime }=\alpha `$ that minimizes the exponent of $`l`$ in the integrand. This yields
$$D_q=\frac{1}{q-1}\left[q\alpha -f\left(\alpha \right)\right]$$
(8)
from which we can deduce
$$\alpha =\frac{d}{dq}\left[\left(q-1\right)D_q\right]$$
(9)
These two equations give $`\alpha `$ and $`f\left(\alpha \right)`$ in terms of $`D_q`$ or, alternatively, $`D_q`$ if we know $`\alpha `$ and $`f\left(\alpha \right)`$. Some universal properties follow from these definitions. Thus $`f\left(\alpha \right)`$ is a convex function whose maximum is located at $`\alpha \left(0\right)`$ with $`f\left[\alpha \left(0\right)\right]=D_0`$, and $`\alpha `$ varies in the interval $`[D_{\infty },D_{-\infty }]`$. $`f\left(\alpha \right)`$ gives the dimension of the support of those $`p_i`$ that scale with $`\alpha `$.
As a first application of the above formalism, we consider the special case of the transition from GOE to Poisson for matrices of dimension $`N=2`$. In this case, the probability distribution of a given component $`C`$ can be worked out analytically. It has been shown in Ref. that it is given by
$$P(y)=\frac{\alpha _0}{2\pi \beta }\sqrt{1+\frac{\beta }{\alpha _0}}\frac{1}{\sqrt{y\left(1-y\right)}}\frac{1}{\frac{\alpha _0}{4\beta }+y\left(1-y\right)}$$
(10)
where $`0<y=C^2<1`$. When $`\beta =0`$, the GOE limit, the distribution is that of the component of a two dimensional unit vector that can point evenly in any direction. In the other limit, $`\beta \to \mathrm{\infty }`$, the distribution goes to a sum of delta functions and the vector is completely localized; $`y`$ can then have only the values $`0`$ or $`1`$. This means that for small values of the ratio $`\beta /\alpha _0`$, the distribution is dominated by the two power law singularities located at the extremities of the segment. On the other hand, as $`\beta `$ increases, the two poles $`\frac{1}{2}\left(1\mp \sqrt{1+\frac{\alpha _0}{\beta }}\right)`$ of the denominator approach the interval from the left $`(-)`$ and the right $`(+)`$, respectively, and deform the power law behavior. In order to perform a more detailed analysis, we partition the interval $`[0,1]`$ into $`N_p`$ equal subintervals of size $`l=1/N_p`$. We then have two kinds of contributions to the partition function: those that come from the power law singularities at $`y=0`$ and $`y=1`$ and those from the rest of the segment. To calculate the contributions of the first kind we integrate from $`0`$ to $`l`$ and from $`1-l`$ to $`1`$, and for the others we just approximate them as $`\rho \left(y\right)l`$. The partition function is then given by
$$\chi \left(q\right)\sim \left[\mathrm{arctan}\left(\sqrt{\frac{\beta l}{\alpha _0}}\right)\right]^q+l^{q-1}.$$
(11)
For small values of the parameter $`\beta `$, we can assume $`\beta l/\alpha _0\ll 1`$; the $`\mathrm{arctan}`$ can then be replaced by its argument, and the first term becomes $`l^{q/2}`$. We then deduce
$$D_q=\{\begin{array}{c}1,q<2\\ \frac{q}{2\left(q-1\right)},q>2\end{array}$$
(12)
from which we derive an $`f(\alpha )`$-spectrum with only two points, $`\alpha =1`$ with $`f\left(\alpha \right)=1`$ and $`\alpha =\frac{1}{2}`$ with $`f\left(\alpha \right)=0`$. These values mean that the extremity points have fractal dimension zero while the others have the dimension of the support. Incidentally, we remark that we have here exactly the same spectrum as that provided by the iteration of the logistic map.
In the second situation, we assume that although $`l`$ is very small, $`\beta `$ is so big that we cannot linearize the $`\mathrm{arctan}`$ anymore. The first term then scales as $`l^0`$ and we deduce
$$D_q=\{\begin{array}{c}1,q<1\\ 0,q>1\end{array}$$
(13)
and the two points $`\alpha =1`$ with $`f\left(\alpha \right)=1`$ and $`\alpha =0`$ with $`f\left(\alpha \right)=0`$ for the $`f(\alpha )`$-spectrum. We recall that we are here getting close to the situation in which the distribution becomes a sum of delta functions concentrated at the limits of the interval. The components then approach the two values $`1`$ and $`0`$. That is why the exponent $`\alpha `$ vanishes.
We see, in both cases discussed above, that the dimension function exhibits a discontinuity, in its first derivative in one case and in the function itself in the other. In the thermodynamic picture of the multifractal analysis, these discontinuities are interpreted as phase transitions, and what we learn from the above results is that the chaos-order transition, here obtained by the variation of the parameter $`\beta `$, is followed by a change in the qualitative behavior of the dimension function. This modification may also be considered as a phase transition with respect to the variation of the external parameter $`\beta `$. This is exhibited in the figure, where we can see that the information dimension $`D_1`$, plotted as a function of $`\mathrm{ln}\frac{\beta }{\alpha _0}`$, shows a typical first order phase transition pattern. One should expect that what we are observing in this case of an ensemble of $`2\times 2`$ matrices are universal features of the RMT-Poisson transition. We pass now to the discussion of ensembles of matrices of arbitrarily large size.
We start by discussing the GOE limit. It is known that, in this case, the probability distribution of the components is that of the components of a unit vector on the hypersphere in the space of $`N`$ dimensions. It can be proved that this is given by
$$P\left(y\right)=\frac{2\mathrm{\Gamma }\left(\frac{N}{2}\right)}{\mathrm{\Gamma }\left(\frac{1}{2}\right)\mathrm{\Gamma }\left(\frac{N-1}{2}\right)}y^{-\frac{1}{2}}\left(1-y\right)^{\frac{N-3}{2}}$$
(14)
The distribution is again characterized by the power singularities at the extremities $`y=0`$ and $`y=1`$. The partition function associated with a division of the interval into $`N_p`$ cells of equal size $`l=1/N_p`$ approaches, for small $`l`$, the behavior
$$\chi \left(q\right)\sim l^{\frac{q}{2}}+l^{q-1}+l^{\frac{q\left(N-1\right)}{2}}.$$
(15)
and, in the limit $`l\to 0`$, we find the dimension function
$$D_q=\{\begin{array}{c}\frac{N-1}{2}\frac{q}{q-1},q<-\frac{2}{N-3}\\ 1,-\frac{2}{N-3}<q<2\\ \frac{1}{2}\frac{q}{q-1},q>2\end{array}$$
(16)
We have therefore a double phase transition separating three states; two of them are defined by the equation of state $`\frac{q-1}{q}D_q=const.`$, which is generated by power law singularities, and, in the middle of them, there is the state whose equation is given by $`D_q=const.`$ For the $`f\left(\alpha \right)`$-spectrum we deduce, making use of Eqs. (8) and (9),
$$f\left(\alpha \right)=\{\begin{array}{c}0,\alpha =\frac{1}{2}=D_{\infty }\\ 1,q=1\\ 0,\alpha =\frac{N-1}{2}=D_{-\infty }\end{array}.$$
(17)
which means that the two singular contributions from the extremities of the interval have zero fractal dimension while the rest have the dimension of the support.
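The GOE component distribution can also be checked by direct Monte Carlo sampling: normalized Gaussian vectors are uniform on the hypersphere, so the moments of $`y=C^2`$ should match the Beta-distribution values $`\langle y\rangle =1/N`$ and $`\langle y^2\rangle =3/N(N+2)`$. A sketch (sample size and seed are arbitrary choices of ours):

```python
import math
import random

# Monte Carlo check of the GOE component distribution: a normalized
# vector of N independent Gaussians is uniform on the hypersphere, so
# y = C^2 should follow a Beta(1/2, (N-1)/2) law with moments
# E[y] = 1/N and E[y^2] = 3/(N(N+2)). Sample size and seed are
# arbitrary choices for this sketch.

def sample_y(N, n_samples, seed=0):
    rng = random.Random(seed)
    ys = []
    for _ in range(n_samples):
        c = [rng.gauss(0.0, 1.0) for _ in range(N)]
        norm2 = sum(ci * ci for ci in c)
        ys.append(c[0] * c[0] / norm2)
    return ys

N = 10
ys = sample_y(N, 20000)
mean = sum(ys) / len(ys)
second = sum(y * y for y in ys) / len(ys)
print(mean, 1.0 / N, second, 3.0 / (N * (N + 2)))
```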
The more general case, with $`\beta \ne 0`$ and $`N>2`$, has been investigated by numerical simulation of ensembles of matrices. The dimension function in the limit $`l\to 0`$ was extrapolated from the dimensions obtained with two small values of $`l`$ by the relation
$$D_q\left(0\right)=\frac{D_q\left(l_1\right)\mathrm{ln}l_1-D_q\left(l_2\right)\mathrm{ln}l_2}{\mathrm{ln}l_1-\mathrm{ln}l_2},$$
which follows from the assumption that, for sufficiently small $`l`$, the partition function behaves as
$$\chi _q\sim A\left(0\right)l^{(q-1)D_q\left(0\right)}.$$
The results obtained in this way for matrices of dimension $`N=100`$ are shown in Fig. 2. We see that the structure with two phase transitions and three states of the GOE case evolves to the picture given by Eq. (13), typical of the limiting distribution with two $`\delta `$-functions at the extrema of the interval. The corresponding $`f\left(\alpha \right)`$-spectrum is shown in the next figure, Fig. 3, for three values of the chaoticity parameter and matrices of dimension $`N=10`$. It is seen that the singular behavior of the GOE limit, Eq. (17), has been smoothed out by the numerical simulation. Finally, in the subsequent figure, Fig. 4, the first-order phase transition exhibited by the information dimension $`D_1`$ is seen. In the same figure, the Shannon entropy of the eigenstates, given by Eq. (6) with $`p_i=\left|C_i^k\right|^2`$ averaged over the $`k`$ states, is also shown. In Ref. , the inflection point of this entropy has been taken as a definition of the critical value of the chaoticity parameter which separates the phases of localized and extended states. We can conclude from this figure that this definition is consistent with the results obtained for the information dimension.
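The two-scale extrapolation above can be sketched and checked on synthetic data: if the finite-$`l`$ estimates obey $`D_q(l)=D_q(0)+const/\mathrm{ln}l`$, as implied by the assumed scaling of $`\chi _q`$, the formula recovers $`D_q(0)`$ exactly. The numbers below are fabricated purely for the check; they are not results of this work.

```python
import math

# The two-scale extrapolation for D_q(0): if chi_q ~ A * l^{(q-1)D_q(0)},
# then D_q(l) = D_q(0) + const/ln(l), and two cell sizes remove the
# finite-size offset exactly. The numbers below are fabricated purely
# to check the formula; they are not results of this work.

def extrapolate(Dq_l1, l1, Dq_l2, l2):
    return (Dq_l1 * math.log(l1) - Dq_l2 * math.log(l2)) \
        / (math.log(l1) - math.log(l2))

D0, k = 0.7, 0.3                  # assumed limit and offset
l1, l2 = 1e-2, 1e-3
D_l1 = D0 + k / math.log(l1)      # synthetic finite-l estimates
D_l2 = D0 + k / math.log(l2)
print(extrapolate(D_l1, l1, D_l2, l2))
```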
One important point to remark here concerns the consequences of the present analysis for the question of the probability distribution of wavefunction components and, also, of matrix elements of an operator (the strength function) in the intermediate regime between RMT and Poisson or, in more general terms, between chaos and order. It has been proposed that a $`\chi ^2`$-distribution with $`\nu `$ degrees of freedom would fit this distribution. However, numerical simulations seem to suggest that a combination of two $`\chi ^2`$ distributions is necessary in order to have a good description through all the intermediate steps of the transition. The above results of the multifractal analysis seem to give theoretical support to this empirical observation. Indeed, what we have shown is that the chaoticity parameter acts like an external thermodynamic variable that induces a first-order phase transition. Therefore, one should expect a modification of the nature of the probability distribution as the transition proceeds. In the $`N=2`$ case, we have seen that this change in the structure of the function, Eq. (10), is provided by the coming into action of two poles that lie outside the physical domain. For large matrix sizes it is the appearance of an extra $`\chi ^2`$-distribution that takes care of the modification of the distribution in the passage from chaos to order.
Figure Captions:
Fig. 1 Logarithm of the components of a random eigenstate for a case intermediate between GOE and Poisson. The calculations were done with matrices of dimension $`N=100`$. A Gaussian fit is also shown.
Fig. 2 The dimension function $`D_q`$ for matrices of size $`N=100`$ for four values of the parameter $`\beta (\alpha _0=1).`$
Fig. 3 The $`f\left(\alpha \right)`$-spectrum for matrices of size $`N=10`$ for four values of the parameter $`\beta (\alpha _0=1).`$
Fig. 4 Information dimension $`D_1`$ for matrices of size $`N=100`$, showing a first-order phase transition as a function of the logarithm of the chaoticity parameter. Also shown is Shannon’s entropy of the eigenstates.
no-problem/9907/nucl-th9907063.html
# Distribution of Matrix Elements of Random Operators
*Supported in part by the CNPq and FAPESP (Brazil)
## Abstract
It is shown that an operator can be defined in the abstract space of a random matrix ensemble whose matrix-element statistical distribution simulates the behavior of the distribution found in real physical systems. It is found that the key quantity that determines these distributions is the commutator of the operator with the Hamiltonian. Application to symmetry breaking in quantum many-body systems is discussed.
Ensembles of random matrix theory (RMT) have had a wide application as models to describe statistical properties of eigenvalues and eigenfunctions of chaotic many-body systems. More general ensembles have also been considered, in order to cover situations that depart from the conditions of applicability of RMT. One such class of ensembles is the so-called deformed Gaussian orthogonal ensemble (DGOE). These "intermediate" ensembles are particularly useful when one wants to study the breaking of a given symmetry in a many-body system such as the atomic nucleus. Further, different variants of the deformed random matrix ensembles can interpolate between any one of the three universal RMT ensembles, i.e., the GOE, the GUE and the GSE, and the Poisson ensemble, which represents the transition between a fully chaotic situation, with no conserved quantity but energy, and a regular one, i.e., one with a complete set of operators that commute with the Hamiltonian. When discrete symmetries such as isospin, parity and time reversal are violated in a complex many-body environment, one relies on a description based on the transition from one GOE into two coupled GOE's in the first two cases and from a GOE into a GUE in the last case. This latter case has recently been studied by us in the context of the disordered metal-insulator transition.
The above mentioned statistical properties refer to fluctuations, around average values, of quantities connected to the eigenvalues and the eigenfunctions. These mean values are specific to the physical system being considered and, in fact, semiclassical estimates of them can be derived in terms of the underlying Hamiltonian. The fluctuations, on the other hand, have a quantum origin and are, in principle, universal, in the sense that the only information they carry about the system is the class of underlying symmetry to which it belongs. In order to compare the statistics generated by these fluctuations with the predictions of the ensembles, it is therefore necessary, for both eigenvalues and eigenfunctions, to perform a convenient rescaling of variables that eliminates their average behavior. In the case of the eigenvalues, they are first unfolded, which means that they are mapped onto new levels with a constant density equal to one. For the eigenfunctions, the most convenient quantities to be statistically analyzed are not directly the components of the eigenfunctions, taken with respect to some particular basis, but rather matrix elements of a given operator. These matrix elements also show statistical fluctuations around their mean values, and they have to be subjected to some local averaging process that extracts secular variations as a function of the energy. However, what is still lacking is a direct comparison of matrix element distributions with ensemble calculations, the difficulty being that, except in the limiting case of the fully chaotic regime, nothing has so far been done in random matrix ensemble studies about these distributions. In the chaotic situation, with no conserved quantities and, as a consequence, without any active selection rule, the matrix elements behave like components of an isotropic random vector, and one expects them to follow the same distribution as the eigenstate components, i.e., the Porter-Thomas law.
Nevertheless, this argument does not hold in intermediate situations between chaos and order.
Of course, the reason why matrix element distributions have not yet been investigated in the context of matrix ensembles is simple: it is not clear how one can define an operator associated with an observable in this abstract space. This is exactly the question being addressed in this letter. We want to show that it is possible to introduce a random operator whose matrix elements simulate, in some way, the behavior of observables found in calculations and measurements performed on real physical systems. In the construction of this operator, we will be guided by the idea that when a system undergoes a chaos-order transition, a quantity that has a key role in determining the statistical behavior of the matrix elements of an operator is its commutator with the Hamiltonian. This is implied by the equation
$$\left(E_k-E_l\right)<E_l|O|E_k>=<E_l|[O,H]|E_k>=i\mathrm{\hbar }\frac{d}{dt}<E_l|O|E_k>$$
(1)
where the last equality was obtained using the Schrödinger equation and assuming an operator with no explicit time dependence. Eq. $`\left(\text{1}\right)`$ clearly shows that the commutator supplies the connection between the matrix elements of an observable and its behavior as a function of time.
To see how this is reflected in the statistical distribution, suppose that we choose to look at the matrix elements of an observable $`O`$ which becomes a conserved quantity in the regular regime. The distribution of these elements will undergo a transition from the Porter-Thomas law, on the chaotic side, to the singular distribution $`<E_l|O|E_k>\propto \delta _{kl}`$ on the regular side, since the last term in $`\left(\text{1}\right)`$ is zero in this case. In a more general situation, as occurs when reduced transition strengths are measured, some transitions may become forbidden in the less chaotic regime. Then, as the selection rules become operative, the first term in $`\left(\text{1}\right)`$ vanishes for some pairs $`(l,k)`$ and, as a consequence, we may say that we have a partial conservation of the observable which is causing the transitions. Again one should expect the missing transitions to cause a deviation of the statistical distributions from the Porter-Thomas law.
To define the ensembles of random matrices we are going to work with, we follow the construction based on the Maximum Entropy Principle, that leads to a random Hamiltonian which can be cast in the form
$$H=H_0+\lambda H_1,$$
(2)
where $`\lambda `$ is the parameter that controls the chaoticity of the ensemble. We will assume it to be defined in the domain $`0\lambda 1`$, in such a way that for $`\lambda =1,`$ $`H=H^{GOE},`$ and for $`\lambda =0,`$ we have some reduced chaotic situation defined by the choice of $`H_0`$. Since we are specifically interested in the transitions from GOE to Poisson and from GOE to two coupled GOE’s, the above requirements are sufficient to determine $`H_0`$ and $`H_1.`$
First, we consider the GOE$``$Poisson transition, in which case we write
$$H_0=\underset{i=1}{\overset{N}{\sum }}P_iH^{GOE}P_i$$
(3)
and
$$H_1=\underset{i\ne j}{\overset{N}{\sum }}P_iH^{GOE}P_j$$
(4)
where $`H^{GOE}`$ is an $`N`$-dimensional random matrix taken from the Gaussian Orthogonal Ensemble, and we have introduced the projection operators $`P_i=|i><i|`$, $`i=1,\mathrm{},N`$. It is straightforward to verify, from the usual properties of projectors, that $`H=H^{GOE}`$ for $`\lambda =1`$ and, on the other hand, when $`\lambda =0`$, $`H`$ becomes a diagonal matrix whose eigenvalues are known to follow the Poisson distribution.
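For readers who want to reproduce the ensemble, the construction of Eqs. (2)-(4) amounts to rescaling the off-diagonal part of a single GOE matrix. A minimal numpy sketch (the variance convention of the GOE draw is simplified here and only illustrative):

```python
import numpy as np

def goe(n, rng):
    """Draw an n x n GOE-like matrix: real symmetric with Gaussian entries."""
    a = rng.normal(size=(n, n))
    return (a + a.T) / 2.0

def deformed_hamiltonian(h_goe, lam):
    """H = H0 + lambda*H1 of Eqs. (2)-(4): H0 is the diagonal part
    (sum of P_i H P_i), H1 the off-diagonal part (i != j)."""
    h0 = np.diag(np.diag(h_goe))
    return h0 + lam * (h_goe - h0)
```

For `lam=1` the GOE matrix is recovered; for `lam=0` only the diagonal survives, whose level statistics are Poissonian.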
Considering now the GOE$``$2GOE’s transition, we write
$$H_0=PH^{GOE}P+QH^{GOE}Q$$
(5)
and
$$H_1=PH^{GOE}Q+QH^{GOE}P$$
(6)
where $`P=\sum_{i=1}^{M}P_i`$ and $`Q=1-P`$. Here $`H_0`$ is a block-diagonal matrix with two blocks of dimensions $`M`$ and $`N-M`$, and each block is by construction a GOE random matrix. Again, it is easily verified that $`H=H^{GOE}`$ for $`\lambda =1`$.
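The two-block deformation of Eqs. (5)-(6) can be sketched the same way: keep the diagonal blocks and scale the coupling blocks by $`\lambda `$ (matrix size and block split below are illustrative):

```python
import numpy as np

def deformed_two_goe(h_goe, m, lam):
    """H = H0 + lambda*H1 of Eqs. (5)-(6): the diagonal blocks PHP (size M)
    and QHQ (size N-M) are kept, the off-diagonal coupling blocks PHQ and
    QHP are multiplied by lambda."""
    h = h_goe.copy()
    h[:m, m:] *= lam
    h[m:, :m] *= lam
    return h
```

For `lam=1` the full GOE matrix is recovered; for `lam=0` the two blocks decouple completely.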
Turning now to the operator, we choose it to have the form
$$O=\underset{i=1}{\overset{N}{\sum }}P_iH^{GOE}P_i.$$
(7)
In the case of the transition towards Poisson, since $`O=H_0`$, we immediately derive the commutator relation
$`[H,O]=\lambda [H_1,O]`$, where the commutator on the RHS has matrix elements given, up to a sign, by
$$[H_0,H_1]_{ij}=\left(H_{ii}^{GOE}-H_{jj}^{GOE}\right)H_{ij}^{GOE}$$
(8)
which obviously is a nonvanishing antisymmetric random matrix. Therefore, the statistical behavior of the elements of $`O`$ is controlled by $`\lambda `$ and it evolves from the Porter-Thomas distribution to a singular delta distribution as $`\lambda `$ goes from $`1`$ to $`0.`$
On the other hand, for the transition to two GOE's, it is convenient to separate the sum in Eq. $`\left(\text{7}\right)`$ into two parts, in which the first $`M`$ terms define the operator $`O_P`$ and the other $`N-M`$ define the operator $`O_Q`$; by construction $`O=O_P+O_Q`$. The commutator with the Hamiltonian can then be written as
$$[O_P,PHP]+[O_Q,QHQ]+\lambda \left(O_PPHQ+O_QQHP-QHPO_P-PHQO_Q\right)$$
(9)
These terms have a simple interpretation. The first and second ones are responsible for transitions between states inside the blocks $`PHP`$ and $`QHQ`$, respectively. On the other hand, the four terms inside the parentheses cause transitions among states located in different blocks. Since the latter terms in (9) are all multiplied by the parameter $`\lambda `$, when $`\lambda \to 0`$ these transitions become forbidden. This property shows that the operator we have introduced is very convenient for studying the transition towards two coupled GOE's, a scenario appropriate for investigating discrete symmetry violation in complex quantum systems.
To make our model more flexible we are going to consider, in this case of the transition to two GOE’s, matrix elements of the generic operator
$`O^{}=\left(1q\right)H_0+qO`$where $`H_0`$ is given by Eq. (5) and $`q`$ varies between $`0`$ and $`1`$. With this form $`O^{}`$ represents an operator which has a conserved part. We have now a model with two parameters, the parameter $`\lambda `$ of the Hamiltonian which may be fixed by the fitting the eigenvalues distribution and the parameter $`q`$ that selects the operator. For $`q=1`$, we have just a selection rule, as $`q`$ decreases we are introducing a localization in the matrix elements inside the selection rule.
Following the standard procedure, we first construct, with the operator $`O,`$ the normalized vector
$`\alpha _k>={\displaystyle \frac{OE_k>}{<E_kO^2E_k>}}`$where $`E_k>`$ with $`k=1,\mathrm{},N`$ is an eigenvector of the Hamiltonian. From these $`N`$ vectors we define the matrix elements
$`T_{kl}=<E_l|\alpha _k>`$, which are the quantities to be statistically analyzed. It is convenient to work with $`T_{kl}^2`$ and, as mentioned above, to perform a local average that extracts secular variation with the energies. Thus we introduce the quantities
$`y_{kl}={\displaystyle \frac{T_{kl}^2}{<T_{kl}^2>}}`$, where the average is done using a Gaussian filter of variance equal to 2. It has become standard in the analysis of these quantities to histogram their logarithm. The numerical results obtained for the two transitions are shown in Figs. 1 and 2.
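The whole pipeline — eigenvectors, normalized vectors $`|\alpha _k>`$, matrix elements $`T_{kl}`$ and reduced strengths $`y_{kl}`$ — can be sketched in a few lines of numpy. This is our own illustrative implementation, not the authors' code; the Gaussian smoothing over the level index stands in for the "Gaussian filter of variance 2":

```python
import numpy as np

def reduced_strengths(h, o, var=2.0):
    """T_kl = <E_l|alpha_k> with |alpha_k> = O|E_k>/sqrt(<E_k|O^2|E_k>),
    then y_kl = T_kl^2 / <T_kl^2>, the local average <.> taken with a
    Gaussian filter (variance var) over the final-state index l."""
    _, vecs = np.linalg.eigh(h)                # columns of vecs are |E_k>
    t = vecs.T @ o @ vecs                      # t[l, k] = <E_l|O|E_k>
    t = t / np.sqrt(np.sum(t**2, axis=0))      # divide by sqrt(<E_k|O^2|E_k>)
    t2 = t**2
    idx = np.arange(t2.shape[0])
    w = np.exp(-(idx[:, None] - idx[None, :])**2 / (2.0 * var))
    local_mean = (w @ t2) / w.sum(axis=1, keepdims=True)
    return t2 / local_mean
```

Feeding this the deformed Hamiltonians and the operator defined above, and histogramming `np.log(y)`, reproduces the kind of analysis shown in the figures.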
In Fig. 1, calculations performed for the GOE-Poisson transition are presented. The histograms of the logarithm of the matrix elements for four different values of the chaoticity parameter $`\lambda `$ are plotted together with three theoretical distributions: the Porter-Thomas distribution, corresponding to a $`\chi ^2\left(\nu \right)`$ distribution with $`\nu =1`$ degree of freedom; a $`\chi ^2\left(\nu \right)`$ where the number of degrees of freedom $`\nu `$ is derived directly from the "data"; and finally a distribution in which the histograms were fitted with a superposition of two $`\chi ^2`$ distributions. Whereas one $`\chi ^2`$ distribution of the type suggested in Ref. is constrained to peak always around zero, our calculations suggest the need for a linear combination of two distributions, as we have already proposed in Ref. . This kind of behavior seems to be typical of transitions in which the eigenstates become more and more localized. It can be understood as a signature of the multifractal nature of the states.
In Fig. 2, the results for the transition towards two GOE's are shown. The chaoticity parameter was fixed at the value $`\lambda =0.032`$ and the parameter $`q`$, which measures the localization inside the selection rule, is varied. Again, the same theoretical distributions as in Fig. 1 are also shown. At this value of $`\lambda `$, we expect to be near the case in which the two GOE's are completely decoupled. We see from the figure that the distributions depend strongly on the parameter $`q`$.
We observe that in the extreme situation, when we have two diagonal uncoupled blocks, the probability distribution can be written down explicitly. In fact, since inside each block we have Porter-Thomas distributions and outside of them the matrix elements vanish, we have, for a generic reduced strength $`y=T_{kl}^2`$, the distribution
$`P\left(y\right)=\left({\displaystyle \frac{M_1}{N}}\right)^2\sqrt{{\displaystyle \frac{M_1}{2\pi y}}}\mathrm{exp}\left(-{\displaystyle \frac{M_1y}{2}}\right)+\left({\displaystyle \frac{M_2}{N}}\right)^2\sqrt{{\displaystyle \frac{M_2}{2\pi y}}}\mathrm{exp}\left(-{\displaystyle \frac{M_2y}{2}}\right)+2{\displaystyle \frac{M_1M_2}{N^2}}\delta \left(y\right)`$, where $`M_1+M_2=N`$. The case $`q=0`$ in Fig. 2 corresponds, for the above value of the chaoticity parameter, to an almost uncoupled situation. We see that we have only a slight deviation from Porter-Thomas. This means that the delta function in the above expression is practically not observed. This can be explained by the fact that we are plotting the logarithm of the intensities: in this kind of plot the zero elements are not observed and, more than that, they acquire a vanishing statistical weight that comes from the Jacobian of the transformation $`y\to \mathrm{ln}y`$. Physically this means that the selection rule is hardly detected in this kind of analysis if what is being considered are statistics of matrix elements of an operator without a conserving part.
In conclusion, we have extended in this paper the Maximum Entropy theory to the case of the distribution of matrix elements. Contrary to what has been suggested recently in the literature, we find that in the intermediate situation described by the deformed Gaussian ensembles, the distribution is a sum of two $`\chi ^2`$ ones. Our theory is well suited to address the question of symmetry breaking in complex many-body systems. The application to isospin symmetry violation in light nuclei is underway.
Figure Captions:
Fig. 1 Four histograms of the logarithm of the matrix elements distributions of the random operator O, see text, in the case of the transition GOE $``$ Poisson, for the indicated values of the chaoticity parameter $`\lambda .`$ The calculations were done with matrices of dimension $`N=100.`$
Fig. 2 Four histograms of the logarithm of the matrix elements distributions of the random operator $`O^{}`$, see text, in the case of the transition GOE $``$ 2GOE’s, for the indicated values of the parameter $`q.`$ The calculations were done with matrices of dimension $`N=120`$ and block sizes $`M_1=40`$ and $`M_2=80.`$
no-problem/9907/astro-ph9907320.html
## 1 Introduction
The Visual and Near Infrared Multi Object Spectrographs (VIMOS and NIRMOS, respectively) are two spectrographs being developed by the Franco-Italian Consortium VIRMOS for the VLT (LeFevre et al, 1998). They are due in operation in Spring 2000 and Spring 2001, respectively. Both will have imaging capabilities and an Integral Field Unit, allowing for full field spectroscopy over 1 arcmin<sup>2</sup>. The VIMOS field of view will be composed of four quadrants of 7x8 arcmin, for a total of 224 arcmin<sup>2</sup>. Spectroscopy will be done using user-defined slits cut into INVAR masks, one mask per quadrant. Multi-object spectroscopy will be possible in either high resolution mode ($`R\sim 2500`$) or low resolution mode ($`R\sim 200`$). In the low resolution mode, the design of the instrument has been done in such a way as to exploit spectra stacking, so that the maximum number of spectra per exposure can be obtained. Given the CCD size (2048x4096 pixels) and the grism resolution, up to 200 spectra per quadrant can be stacked in one image (a total of 800 spectra per exposure).
To exploit these capabilities, tools must be provided to the astronomer to automate the object selection procedure and to produce masks for the spectroscopic observations. In the case of the VLT, ESO has provided a common scenario (the so-called "VLT Data Flow System") within which we had to accommodate the particular needs of VIMOS. In this contribution we outline the software we are going to provide for handling VIMOS (and NIRMOS) spectroscopic observations. In part b), the choice of the mask manufacturing unit and its handling are outlined.
## 2 Operations Overview
VIMOS observations will be typically performed in two phases: an observation in imaging mode of the field, in order to get the field object positions, and the spectroscopic observation of an appropriate number of the previously selected objects. After the imaging observation, but before the spectroscopic one, 4 masks, one for each quadrant, shall be manufactured. During the manufacturing process, the slits corresponding to the objects to be spectroscopically observed will be cut.
Before the spectroscopic observations start, all masks to be used during the night will be loaded into the instrument by putting them into an appropriate container (Instrument Cabinet). The Instrument Cabinets will be brought back and forth from the instrument to the mask manufacturing building, where the Laser Cutting Machine will be installed, together with its own control devices.
The Mask Preparation Software (MPS) shall perform the selection of the objects to be spectroscopically observed, the slit positioning and the transformation from astronomical to laser machine coordinates, and will interact with the MMU control software.
MPS is divided into two parts: MPS-P2PP (Phase 2 Proposal preparation), as it is tied with P2PP, and MPS-IWS (Instrument WorkStation), as it shall run exclusively on the IWS in Paranal.
MPS-P2PP provides the astronomer with tools for the selection of the objects to be spectroscopically observed and an algorithm for the slit positioning, in such a way as to get the most effective solution in terms of number of spectra per field. Slit dimensions and positions will be stored in Aperture Definition Files (ADFs). ADFs will be included in the appropriate Observation Block during the spectroscopic P2PP phase. When the ADFs arrive in Paranal, the ESO Observation Handling System (OHS) will extract them from the Observation Blocks and pass them to MPS-IWS for mask manufacturing.
MPS-IWS is responsible for converting slit coordinates from astronomical (RA & Dec) to manufacturing machine coordinates, assigning an identification code to each mask to be manufactured, sending such files to the Mask Manufacturing Control Unit (MMCU) as a Mask Manufacturing Order, and receiving an acknowledgement when the masks have been completed and stored (Mask Manufacturing Report). This information is passed back to OHS, and from that moment the observation can be scheduled for the following nights.
When a spectroscopic observation is scheduled, OHS sends to MPS-IWS a Mask Insertion Order, i.e. the list of masks to be put into the Instrument Cabinet for subsequent nights. This order is reformatted and passed on to MMCU. The acknowledgement from MMCU is passed back to OHS in the form of a Mask Insertion Report. Only observations for which the corresponding masks exist and are in the instrument cabinet can be performed.
At the moment of observation, mask alignment onto the focal plane is performed by a procedure implemented at Observation Template level. The mask is put in the focal plane, then a short exposure without dispersion element is taken. The 4 masks have some holes, corresponding to "reference objects", of pre-defined dimension (at the moment 5x5 arcsec are foreseen). Given the VLT pointing accuracy, in the resulting image the reference objects should fall within the holes, though not necessarily centered. An iterative procedure computes the pointing shift needed to center the reference objects in the reference holes. This procedure can be either manual or fully automatic. When the reference objects are centered in their respective holes, within a predefined accuracy, the dispersion element is introduced and the observation can start.
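The centering step lends itself to a small numerical sketch. The function below computes the mean pointing offset between the reference-hole centres and the measured reference-object positions; all names and the tolerance value are our own illustration, not the actual Observation Template code:

```python
def alignment_shift(holes, measured, tol=0.1):
    """Mean (dx, dy) offset between reference-hole centres and the measured
    reference-object positions, plus a convergence flag.  One call per
    iteration of the centering loop; units are those of the input
    coordinates (e.g. arcsec on the focal plane)."""
    n = len(holes)
    dx = sum(h[0] - m[0] for h, m in zip(holes, measured)) / n
    dy = sum(h[1] - m[1] for h, m in zip(holes, measured)) / n
    return dx, dy, (abs(dx) < tol and abs(dy) < tol)
```

The telescope would be offset by `(dx, dy)`, a new undispersed exposure taken, and the call repeated until the flag is true.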
As a last step, when an observation has been successfully carried out (the ESO Quality Control Pipeline is in charge of assuring that), OHS sends to MPS-IWS a Mask Discarding Order, which is forwarded to MMCU. Once a mask has been discarded, the OB can also be deleted.
In figure 1, the Data flow in the case of Manufacturing is schematized.
Here, we will not deal further with MPS-IWS, as it basically acts as a link between the ESO OHS and the PC-based machine control computers.
## 3 User’s requirements and observational constraints
In designing MPS-P2PP, we have taken into consideration the following user requirements:
- To exploit multiplexing capabilities, an automated way of choosing objects must be provided
- Manual choice of some particularly interesting objects must be possible
- Manual exclusion of particular objects must be possible
- Manual definition of arbitrarily shaped slits (i.e. curved slits) must be implemented
- Interactions must rely on a user-friendly graphical user interface
- Choice of objects starting from a user-provided catalogue (including at least Right Ascension and Declination for each object) must be possible
On top of these, the observation procedure outlined above also requires:
- Manual choice of reference objects to be used for mask alignment onto the focal plane
The need for graphical interaction has led us to adopt one of the available display systems. Our first choice was MIDAS, which has the advantage of supporting table handling. A working MIDAS-based version of MPS-P2PP is already available. To keep similarity with the FORS FIMS package, though, we are now implementing a second version of MPS-P2PP, based on Skycat for the image display and catalogue overlay features.
The algorithm for automatic object choice has been totally developed in house.
## 4 Slit Positioning Optimization Code: SPOC
SPOC is a C program which, taking into account the initial list of the user's preferred objects, the instrument constraints and the constraints imposed by the requirement of well-reduced data, finds the best solution in terms of number of slits. Instrument constraints are mainly given by the spectrum length, coupled with the CCD size. Slits must be positioned so that the dispersed spectrum falls entirely within the detector. To ease data reduction, we have to allow for a certain amount of sky on each side of the object falling in the slit, we must avoid spectra superposition, and we must take into account higher-order spectra superpositions. In Fig. 2 we illustrate different configurations.
Panel a) is a representation of a possible "strip" of 4 slits. Slit length is tuned so that the largest object is well contained within the slit and a sky region on each side is allowed for. In Figure 2b, we show how to accommodate spectra in the FOV. Each slit (black thick line) produces a first order spectrum centred in Y around the slit, a zero order spectrum of a few pixels right below, and a second order spectrum above. The zero order consists of a very high, narrow (in the Y direction) peak. Its contamination of underlying spectra cannot be removed but, being not larger than a few pixels, good reduction can be achieved by simply masking out the affected pixels. The contribution of the second order is only around 10%, but it is spread over about 600 pixels along the dispersion direction. Therefore contamination by the second order must be handled carefully (see below). Making use of the fact that the FOV is 2048x2340 pixels, while the CCDs are 2048x4096 pixels, we can place slits over the full 8 arcminutes in Y. In Figures 2c,d,e,f we illustrate the constraints to be taken into account when placing slits. In 2c, two slits of the same width are placed, not aligned in the Y direction. In this case the second order of the first slit (starting from the bottom) partially covers the first order of the following slit. Such a situation makes background subtraction very difficult. In the second case (2d), we have two slits aligned in Y, but the first shorter than the second. Also in this case, good quality background subtraction from the second spectrum is impossible. In 2e we show two slits of the same length, aligned, but with the first order of the first one partially superimposed on the first order of the second one. It is obvious that the data will be of bad quality. Finally, in 2f, we put two slits of the same length, aligned one to the other, and positioned so that the second order of the first slit totally covers the first order of the second slit.
In this case background subtraction is still possible with reasonable results. To summarize, SPOC must take into account the following constraints: along a vertical ”strip” slits must be aligned and with same length. Their distance in Y must be so that the first order spectra do not fall one onto the other.
When designing SPOC, great attention must be paid to the possible biases an automated algorithm can introduce in the observations. A first bias is intrinsic to any MOS instrument: to allow for multiplexing, slits will tend to align themselves in horizontal bands. But there are two more biases which can easily be introduced: if the algorithm always starts its work from one point (e.g. top left), then more slits will be placed there with respect to other parts of the FOV; and, secondly, if the only criterion is to maximize the number of objects, and the slit length is tuned on object size, smaller (i.e. fainter) objects will be privileged.
Given all of the above, we can now choose the algorithm. If we divide each quadrant into consecutive vertical "strips", each strip being as wide as the longest possible slit, then this becomes a purely combinatorial problem. Taking into account all possibilities sums up to $`10^{73}`$ different combinations to be computed to find the best one. A simplification can be introduced by applying the same criterion as in the well-known travelling salesman problem, i.e. considering only the most probable solutions. For each X (spatial) position, we can vary the strip width from a minimum (given by the space we allow for sky) to a reasonable maximum (given by the largest object in the field we are interested in). It is easy to show that the function (number of slits)/(slit width) vs. the slit width has a maximum for a particular slit width. We can compute such a maximum for each X position, and finally find the best combination of strip widths which maximizes the total number of slits.
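The search over strip widths can be phrased as a small dynamic programme over the X axis. The sketch below is ours (SPOC itself is a C program with additional constraints); `slits_in_strip(x0, w)` stands for the count of objects that can be slitted in a strip starting at `x0` with width `w`:

```python
def best_partition(field_width, slits_in_strip, w_min, w_max):
    """best[x] = maximum number of slits obtainable in the first x units
    of the field, with strips of width w_min..w_max and unused gaps allowed."""
    best = [0] * (field_width + 1)
    for x in range(1, field_width + 1):
        best[x] = best[x - 1]                      # leave position x uncovered
        for w in range(w_min, min(w_max, x) + 1):  # or end a strip at x
            cand = best[x - w] + slits_in_strip(x - w, w)
            if cand > best[x]:
                best[x] = cand
    return best[field_width]
```

With integer X positions this visits O(field_width x (w_max - w_min)) states, which is consistent with the negligible run times reported in the Results section.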
## 5 Results
We have applied SPOC to both real and artificial data sets. Computational time is negligible, being around 2 seconds on a Sun Ultra Sparc 5 for one quadrant. Apart from the intrinsic MOS bias, we have found that there is still a tendency to privilege smaller objects over bigger ones. To overcome this, we have introduced the possibility of ignoring the object size when placing slits in a strip, and introducing it only after the maximum number of objects has been placed in the strip. The optimization in this case decreases the maximum number of slits by about 10%, with the advantage that the diameter and magnitude distributions of the input catalogue and of the observed objects are now statistically indistinguishable. Both possibilities (full optimization, slightly biased, and unbiased lower optimization) will be offered to the user.
In Fig. 3 we show the results obtained by applying SPOC to a simulated catalogue. The black line represents the input catalogue, and the grey line the distributions obtained with SPOC (fully optimized in the left panels, unbiased in the right panels). Panels a and b clearly show that the revised version is unbiased with respect to object size. The difference in terms of number of objects is 5%.
no-problem/9907/quant-ph9907053.html
# Determination of Atom-Surface van der Waals Potentials from Transmission-Grating Diffraction Intensities
## Abstract
Molecular beams of rare gas atoms and D<sub>2</sub> have been diffracted from 100 nm period SiN<sub>x</sub> transmission gratings. The relative intensities of the diffraction peaks out to the 8th order depend on the diffracting particle and are interpreted in terms of effective slit widths. These differences have been analyzed with a new theory which accounts for the long-range van der Waals $`-C_3/l^3`$ interaction of the particles with the walls of the grating bars. The values of the $`C_3`$ constant for two different gratings are in good agreement and the results exhibit the expected linear dependence on the dipole polarizability.
Already in 1932 Lennard-Jones predicted that the van der Waals interaction of atoms and molecules with solid surfaces is given by
$$V=-\frac{C_3}{l^3},\text{ }l\gtrsim 10\text{ Å}$$
(1)
where $`l`$ is the distance from the surface. This potential plays an important role in understanding virtually all static (thermodynamical) and dynamical aspects of gas adsorption phenomena. Despite its importance, very few experimental determinations of $`C_3`$ have so far been reported, and most of our present knowledge is based on theoretical estimates. The pioneering experiments by Raskin and Kusch on the deflection of Cs atoms from a conducting metal surface have recently been extended to alkali atoms in high Rydberg states by measuring the transmission through $`8\text{mm}`$ long narrow ($`29\mu \text{m}`$) channels as a function of their principal quantum number $`n`$. Similar techniques have also been applied to the interaction of alkali atoms in their ground state or in low excited states. Although the scattering of many different atoms and molecules from solid single crystal surfaces has been extensively studied, the reflection coefficients are relatively insensitive to the weak long-range attractive forces, since the collisions are largely determined by the reflection from the hard repulsive wall close to the surface.
Here, a new atom optical technique using transmission grating diffraction of molecular beams is employed. The van der Waals force causes a change in the diffraction intensities just as a smaller slit width would. A newly developed theory makes it possible to interpret measurements over a range of different beam energies in terms of the potential constant $`C_3`$. For an incident plane wave the diffraction peak heights depend on the number of illuminated slits $`N`$, as $`N^2`$. With $`N=100`$ slits the gain in sensitivity is about four orders of magnitude over previous experiments.
The measurements were made with a previously described molecular beam diffraction apparatus. The beams are produced by a free jet expansion of the purified gas through a $`5\mu \text{m}`$ diameter, $`2\mu \text{m}`$ long orifice from a source chamber at a temperature $`T_0`$, into a vacuum of about $`2\times 10^{-4}\text{ mbar}`$. At $`T_0=300\text{K}`$ the source pressure $`P_0`$ was $`140\text{bar}`$ for He, Ne, Ar and D<sub>2</sub> and $`50\text{bar}`$ for Kr. At lower source temperatures $`P_0`$ was reduced to avoid cluster formation. The atomic beams are characterized by narrow velocity distributions with relative widths $`\mathrm{\Delta }v/v`$ of 2.1 % (He), 5 % (Ne), 7.6 % (D<sub>2</sub>), 7.7 % (Ar), and 10 % (Kr) at $`T_0=300`$ K, where $`\mathrm{\Delta }v`$ and $`v`$ denote the full width at half maximum and the mean value, respectively. After passing through the 0.39 mm diameter skimmer the beam is collimated by two $`10\mu `$m wide and $`5\text{mm}`$ tall slits $`6\text{cm}`$ and $`48\text{cm}`$ downstream from the source before it impinges on the silicon nitride (SiN<sub>x</sub>) transmission grating with a grating period of $`d=100\text{nm}`$ and $`5\text{mm}`$ high slits with nominal widths of $`s_{\mathrm{nom}}=50\text{nm}`$, placed $`2.5\text{cm}`$ behind the second collimating slit. The diffraction pattern is measured by rotating the electron impact ionization mass spectrometer detector around an axis parallel to the grating slits. A third, $`25\mu \text{m}`$ wide slit, $`52\text{cm}`$ downstream from the grating, provides a measured angular resolution of $`70\mu \text{rad}`$ (FWHM).
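For orientation, the de Broglie wavelength implied by these source conditions can be estimated in a few lines. The terminal-velocity formula $`v\approx \sqrt{5k_BT_0/m}`$ for a monatomic free-jet expansion and all numerical values below are standard-estimate assumptions, not numbers quoted in the text:

```python
import numpy as np

H_PLANCK = 6.62607e-34   # J s
K_B = 1.380649e-23       # J / K

def beam_velocity(T0, mass):
    """Terminal velocity of an ideal monatomic free-jet expansion (standard estimate)."""
    return np.sqrt(5.0 * K_B * T0 / mass)

def de_broglie(T0, mass):
    """De Broglie wavelength h / (m v) of the beam particles."""
    return H_PLANCK / (mass * beam_velocity(T0, mass))

m_he = 4.0026 * 1.66054e-27        # He mass in kg
lam = de_broglie(300.0, m_he)      # roughly half an Angstrom
theta_1 = lam / 100e-9             # first-order angle for the d = 100 nm grating
```

The resulting first-order diffraction angle $`\lambda /d`$ of roughly 0.6 mrad is large compared with the quoted $`70\mu \text{rad}`$ resolution, so neighbouring diffraction orders are well separated.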
Transmission measurements with He and Kr atomic beams indicate that the grating bars have a truncated trapezoidal profile (thickness in the beam direction $`t`$) with the narrow face towards the incident beam. The measured wedge angles $`\beta `$ and geometrical slit widths $`s_0`$ (see below) are listed in Table I.
The diffraction measurements are illustrated in Fig. 1 for four inert gases as a function of the perpendicular wave vector transfer $`\kappa =k\mathrm{sin}\vartheta `$, where $`\vartheta `$ is the diffraction angle. The area under the $`n`$-th order diffraction peak, $`I_n`$, is proportional to the grating slit function evaluated at the diffraction angle of the maximum position, $`\vartheta _n`$. For this grating (grating I), which has equally wide bars and slits, the zeros of the slit function coincide with the even diffraction orders , which are therefore expected to vanish. Whereas for He this is almost the case, an increasing deviation is observed for the heavier rare gases. For example, the intensity ratio of the second to the third order peak, which is small for He, is slightly larger for Ne, almost unity for Ar and, finally, greater than one for Kr. Similar trends are observed for the ratio of the sixth and fifth order peaks and in the ratio of the most intense zeroth and first orders, which increases significantly from about 0.39 for He to about 0.52 for Kr.
These differences are attributed to the interaction of the atoms with the bar walls, Eq. (1), which so far has not been accounted for in the theory of atom/molecule diffraction. For a plane wave $`e^{ikz}`$ incident on a transmission grating with perfectly reflecting grating bars and with an additional (attractive) potential at the bar sides, the diffracted wave function is, for large $`r`$,
$$\psi (𝐫)\underset{r\to \mathrm{\infty }}{\sim }f(\vartheta )\frac{e^{i(kr-\pi /4)}}{\sqrt{r}},$$
(2)
where $`r^2=x^2+z^2`$ is in the scattering plane normal to the height of the slits. The scattering amplitude $`f(\vartheta )`$ is determined by the grating transmission function $`\psi (x,0)`$, i. e. by the wave function at the far side slit boundaries ($`z=0`$), which depends on the attractive potential. Huygens’ principle yields
$$f(\vartheta )=\frac{\mathrm{cos}\vartheta }{\sqrt{\lambda }}\int _{\mathrm{slits}}dx\,\psi (x,0)e^{ikx\mathrm{sin}\vartheta }.$$
(3)
If the slit and the bar widths are much larger than the de Broglie wave length $`\lambda `$, the intensity $`I(\vartheta )=|f(\vartheta )|^2`$ can be written as a product
$$I(\vartheta )=\left(\frac{\mathrm{sin}\left(\frac{1}{2}Nkd\mathrm{sin}\vartheta \right)}{\mathrm{sin}\left(\frac{1}{2}kd\mathrm{sin}\vartheta \right)}\right)^2\left|f_{\mathrm{slit}}(\vartheta )\right|^2,$$
(4)
where $`N`$ denotes the number of slits and $`|f_{\mathrm{slit}}|^2`$ is the slit function. Thus, the atomic diffraction pattern consists of principal maxima at the diffraction angles $`\mathrm{sin}\vartheta _n=n\lambda /d`$, $`n=0,\pm 1,\pm 2,\mathrm{}`$ while $`\left|f_{\mathrm{slit}}(\vartheta )\right|^2`$ plays the role of an envelope function. Eq. (3) gives, after a change of variable from $`x`$ to a variable with the origin at the edge of a slit, $`\zeta \equiv s_0/2-x`$,
$$f_{\mathrm{slit}}(\vartheta )=\frac{\mathrm{cos}\vartheta }{\sqrt{\lambda }}2\int _0^{\frac{s_0}{2}}d\zeta \,\mathrm{cos}\left[\kappa \left(\frac{s_0}{2}-\zeta \right)\right]\tau (\zeta ),$$
(5)
where $`\tau (\zeta )=\psi (s_0/2-\zeta ,0)`$, $`0\le \zeta \le s_0/2`$, is the single-slit transmission function.
It is instructive to first deduce the general structural form of $`f_{\mathrm{slit}}(\vartheta )`$. Since the grating bars reflect those atoms which touch the bar walls, the wave function in the slit vanishes at the walls, i. e. $`\tau (0)=0`$. Taking this into account and after a partial integration Eq. (5) becomes
$$f_{\mathrm{slit}}(\vartheta )=\frac{\mathrm{cos}\vartheta }{\sqrt{\lambda }}\tau \left(\frac{s_0}{2}\right)\frac{e^{i\kappa \frac{s_0}{2}}\mathrm{\Phi }(-\kappa )-e^{-i\kappa \frac{s_0}{2}}\mathrm{\Phi }(\kappa )}{i\kappa },$$
(6)
where
$$\mathrm{\Phi }(\pm \kappa )\equiv \int _0^{\frac{s_0}{2}}d\zeta \,e^{\pm i\kappa \zeta }\frac{\tau ^{\prime }(\zeta )}{\tau \left(\frac{s_0}{2}\right)},$$
(7)
with $`\mathrm{\Phi }(0)=1`$. The logarithm of $`\mathrm{\Phi }`$ can be expanded as
$$\mathrm{log}\mathrm{\Phi }(\pm \kappa )=\underset{n=1}{\overset{\mathrm{\infty }}{\sum }}\frac{(\pm i\kappa )^n}{n!}R_n,$$
(8)
where the complex $`R_n`$ are known as cumulants ,
$`R_1`$ $`=`$ $`\int _0^{\frac{s_0}{2}}d\zeta \,\zeta \frac{\tau ^{\prime }(\zeta )}{\tau \left(\frac{s_0}{2}\right)}=\frac{s_0}{2}-\int _0^{\frac{s_0}{2}}d\zeta \,\frac{\tau (\zeta )}{\tau \left(\frac{s_0}{2}\right)},`$ (9)
etc. For the small wave-vector transfer $`\kappa `$ of interest here, only the first two terms are needed in the series Eq. (8). The single-slit amplitude Eq. (6) then becomes
$$f_{\mathrm{slit}}(\vartheta )=2\frac{\mathrm{cos}\vartheta }{\sqrt{\lambda }}\tau \left(\frac{s_0}{2}\right)e^{-\frac{\kappa ^2}{2}R_2}\frac{\mathrm{sin}\left[\kappa \left(\frac{s_0}{2}-R_1\right)\right]}{\kappa }.$$
(10)
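The accuracy of this two-term truncation can be probed numerically. The sketch below uses a toy transmission function $`\tau (\zeta )=\mathrm{sin}(\pi \zeta /s_0)`$, an illustrative choice with $`\tau (0)=0`$ rather than the eikonal form introduced below, and compares $`\mathrm{\Phi }(\kappa )`$ computed from its defining integral with the two-cumulant approximation:

```python
import numpy as np

def trap(y, x):
    # simple trapezoidal rule (avoids NumPy version differences)
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

s0 = 1.0
zeta = np.linspace(0.0, s0 / 2.0, 20001)
tau = np.sin(np.pi * zeta / s0)          # toy transmission function, tau(0) = 0
w = np.gradient(tau, zeta) / tau[-1]     # tau'(zeta)/tau(s0/2), the weight in Eq. (7)

R1 = trap(zeta * w, zeta)                # first cumulant, Eq. (9)
R2 = trap(zeta**2 * w, zeta) - R1**2     # second cumulant (variance of the weight)

kappa = 0.5
phi_exact = trap(np.exp(1j * kappa * zeta) * w, zeta)       # Eq. (7) directly
phi_cum = np.exp(1j * kappa * R1 - 0.5 * kappa**2 * R2)     # Eq. (8) truncated at n = 2
```

For this toy choice $`R_1=1/2-1/\pi \approx 0.18`$, and at $`\kappa s_0\lesssim 1`$ the truncated expansion reproduces the exact $`\mathrm{\Phi }`$ to well below the percent level.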
For a comparison with experiment the surface roughness of the grating bars must be accounted for. In a first approximation roughness has been included by rigid shifts of the individual bar sides (see also Ref. ), which are randomly Gaussian distributed. In the case of a weak surface potential, this results in an additional Debye-Waller-like damping factor $`\mathrm{exp}(-k^2\sigma _0^2\mathrm{sin}^2\vartheta _n)`$ in the intensity ratio of the principal maxima, $`I_n/I_0`$, where $`\sigma _0^2`$ is the variance of the geometrical slit width . Taking this into account, Eq. (4) with Eq. (10) yields
$$\frac{I_n}{I_0}=\frac{e^{-\left(\frac{2\pi n\sigma }{d}\right)^2}}{\left(\frac{\pi n\sqrt{s_{\mathrm{eff}}^2+\delta ^2}}{d}\right)^2}\left[\mathrm{sin}^2\left(\frac{\pi ns_{\mathrm{eff}}}{d}\right)+\mathrm{sinh}^2\left(\frac{\pi n\delta }{d}\right)\right],$$
(11)
where $`\sigma ^2\equiv \sigma _0^2+\mathrm{Re}(R_2)`$, $`s_{\mathrm{eff}}\equiv s_0-2\mathrm{Re}(R_1)`$ and $`\delta \equiv 2\mathrm{Im}(R_1)`$. The first term in the brackets of Eq. (11) leads to a Kirchhoff-like slit function (see e. g. Ref. ) with a Debye-Waller term and an effective reduced slit width $`s_{\mathrm{eff}}`$, while the second term suppresses the zeros of the Kirchhoff pattern, as can be seen in the insets of Fig. 1.
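Equation (11) is straightforward to evaluate. The sketch below uses illustrative parameters, not the fitted values of this work, and reproduces the qualitative behaviour described above: for $`\delta =0`$ and $`s_{\mathrm{eff}}=d/2`$ all even orders vanish, a finite $`\delta `$ lifts these zeros, and the ideal ratio $`I_1/I_0=4/\pi ^2\approx 0.41`$ is close to the value of about 0.39 quoted for He:

```python
import numpy as np

def intensity_ratio(n, d, s_eff, delta, sigma):
    """I_n/I_0 from Eq. (11); all lengths in the same units, n a positive order."""
    n = np.asarray(n, dtype=float)
    debye_waller = np.exp(-(2.0 * np.pi * n * sigma / d) ** 2)
    prefactor = (np.pi * n * np.sqrt(s_eff**2 + delta**2) / d) ** 2
    bracket = (np.sin(np.pi * n * s_eff / d) ** 2
               + np.sinh(np.pi * n * delta / d) ** 2)
    return debye_waller * bracket / prefactor

orders = np.arange(1, 9)
ideal = intensity_ratio(orders, d=100.0, s_eff=50.0, delta=0.0, sigma=0.0)
real  = intensity_ratio(orders, d=100.0, s_eff=46.0, delta=1.0, sigma=0.3)
```

With the illustrative `s_eff = 46 nm` the even orders no longer vanish, which is exactly the effect used below to extract the van der Waals interaction.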
The effective variance $`\sigma ^2`$ as well as $`s_{\mathrm{eff}}`$ and $`\delta `$ in Eq. (11) can be calculated for the potential Eq. (1). The standard eikonal approximation is used to determine the grating transmission function, given by $`\psi (x,0)=e^{i\phi (x)}`$ in the slits and zero elsewhere. The phase shift reads
$$\phi (x)=-\frac{1}{\hbar v}\int dz\,V(x,z),$$
(12)
where $`v=\mathrm{}k/m`$ is the particle velocity. Taking the trapezoidal bar profile into account, after some algebra the single-slit transmission function becomes
$$\tau (\zeta )=\mathrm{exp}\left[i\frac{t\mathrm{cos}\beta }{\mathrm{}v}\frac{C_3}{\zeta ^3}\frac{1+\frac{t}{2\zeta }\mathrm{tan}\beta }{\left(1+\frac{t}{\zeta }\mathrm{tan}\beta \right)^2}\right].$$
(13)
An analysis of Eqs. (13) and (9) reveals that $`\mathrm{Re}(R_1)`$ and hence $`s_{\mathrm{eff}}`$ is especially sensitive to the potential.
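This sensitivity can be illustrated by evaluating $`R_1`$ from Eqs. (9) and (13) numerically. All parameter values below (the $`C_3`$ strengths, the bar thickness $`t`$, the wedge angle $`\beta `$ and the beam velocities) are assumptions for illustration only, not the fitted numbers of this work; the integral is cut off at a small $`\zeta _{\mathrm{min}}`$, which introduces an error of at most $`\zeta _{\mathrm{min}}`$ in $`R_1`$:

```python
import numpy as np

HBAR = 1.0545718e-34  # J s

def trap(y, x):
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def tau(zeta, C3, t, beta, v):
    """Eikonal single-slit transmission function, Eq. (13); lengths in m, C3 in J m^3."""
    tb = t * np.tan(beta)
    phase = ((t * np.cos(beta) / (HBAR * v)) * (C3 / zeta**3)
             * (1.0 + tb / (2.0 * zeta)) / (1.0 + tb / zeta) ** 2)
    return np.exp(1j * phase)

def s_eff(s0, C3, t, beta, v, zeta_min=2e-11, npts=400_000):
    """Effective slit width s_eff = s0 - 2 Re(R1), with R1 from Eq. (9)."""
    zeta = np.linspace(zeta_min, s0 / 2.0, npts)
    ratio = tau(zeta, C3, t, beta, v) / tau(s0 / 2.0, C3, t, beta, v)
    R1 = s0 / 2.0 - trap(ratio, zeta)
    return s0 - 2.0 * R1.real

s0, t, beta = 50e-9, 120e-9, np.radians(10.0)   # assumed grating geometry
w_he = s_eff(s0, C3=1.6e-50, t=t, beta=beta, v=1770.0)  # He-like: weak C3, fast beam
w_kr = s_eff(s0, C3=8.0e-50, t=t, beta=beta, v=390.0)   # Kr-like: strong C3, slow beam
```

The weakly interacting, fast He-like case loses about a nanometre of slit width, while the strongly interacting, slow Kr-like case loses several nanometres, in line with the trend shown in Fig. 2.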
The effective slit width $`s_{\mathrm{eff}}`$ as well as $`\delta `$ and $`\sigma `$ were determined from the experiment by fitting the relative experimental diffraction intensities $`I_n/I_1`$ as depicted in the insets of Fig. 1 to the corresponding ratios determined from Eq. (11). These ratios, and not $`I_n/I_0`$, are compared with theory since small concentrations of clusters in the beams can falsify the $`I_0`$ intensities. The effective slit widths are plotted versus the particle velocity in Fig. 2 (points) for two different gratings. The difference between the effective slit widths for $`T_0=300\text{K}`$ beams and the geometrical slit width $`s_0`$ increases from $`1\text{nm}`$ (He) to more than $`6\text{nm}`$ for Kr, as expected from the increasing interaction strength of the van der Waals potential. With increasing $`C_3`$ the slope of the curves also increases. The solid lines in Fig. 2 represent least squares fits of the theoretical expression $`s_{\mathrm{eff}}=s_0-2\mathrm{Re}(R_1)`$, with $`R_1`$ given by Eqs. (9) and (13), to the experimentally determined effective slit widths, which allow for the determination of $`C_3`$ and $`s_0`$. Since He has the smallest polarizability and measurements over the largest range of velocities were possible, the He data were used to determine the values of $`s_0`$ in Table I for each of the gratings. Identical values for $`s_0`$ were obtained from D<sub>2</sub> measurements. This value of $`s_0`$ was then used for Ne, Ar and Kr, with $`C_3`$ the only remaining fit parameter; hence, for these systems measurements at various velocities are not necessary.
The $`C_3`$ parameters are plotted versus the static electric dipole polarizabilities $`\alpha `$ in Fig. 3. The error bars were determined by assuming a realistic uncertainty in the bar geometry by varying $`\beta `$ by $`\pm 2^{\circ }`$ in Eq. (13). This uncertainty seems to be the only systematic source of error in the present $`C_3`$ determination and leads to errors of about 20 %. Figure 2 indicates that the influence of the surface potential is restricted to distances much smaller than the slit width and therefore, by Ref. , corrections due to the finite bar width should be negligible.
Within the error bars the data from both gratings fall on a straight line in agreement with Hoinkes’ empirical rule . Accordingly the slope provides information on the optical dielectric constant of the grating material. An approximation to the theoretical expression for $`C_3`$ predicts that D<sub>2</sub> should in fact have a slightly smaller ratio of $`C_3/\alpha `$ than the rare gas atoms, while among them Ne is expected to have the largest ratio. It is satisfying to see that the small deviations from the straight line in Fig. 3 agree with this expected trend.
The main advantages of the present method are its high sensitivity, as can be seen from Fig. 2, and its universality. In principle all atoms and molecules are accessible for study. The only restrictions are the need to produce gratings of different solids and molecular beams with sufficiently narrow velocity distributions, and to reduce the corresponding background in the mass spectrometer detector to assure an adequate signal to noise ratio. The present work also allows for a quantitative understanding of diffraction intensities in atom optics and atom interferometry experiments using transmission structures as optical elements.
We are greatly indebted to Tim Savas and Henry I. Smith of MIT for providing the transmission gratings to us. Further, we thank Dick Manson and G. Schmahl for fruitful discussions.
# Quantization via hopping amplitudes: Schrödinger equation and free QED
## 1 Introduction
For at least two standard quantum systems, canonical quantization (or other classical-to-quantum substitution rules) can be avoided; it can be replaced by an intrinsic quantum mechanical consideration of “hopping” in a discrete configuration space. This only requires interpreting a familiar tool from model building—hopping amplitudes—as a first-principle concept.
Hopping amplitudes have a long tradition particularly in solid-state theory . On a fundamental level they have been used in lattice gauge theory for discretizing (not avoiding) path-integral actions. More recently, in the field of quantum computation, hopping parameters are being used as collision constants in unitary cellular automata designed for efficient simulation of the Schrödinger equation or 1-photon and Weyl equation . These latter applications differ in a crucial way from the viewpoint taken here, by assuming locality in conjunction with a finite, irreducible time step. It has proven to be a major challenge to design algorithms satisfying that computational requirement. Apart from technical complications, however, unitary cellular automata in some cases require configuration spaces larger than the physical ones. For example, local hopping rules in $`d`$ spatial dimensions are found to require $`2d`$-component wave functions . Consequently, a real spinless particle (as opposed to its computer simulation) can have a unitary and local equation of motion only with respect to continuous time.
Hopping amplitudes can do more than approximate or discretize processes originally defined otherwise. They necessarily emerge as coefficients of a superposition when a particle is prepared in a position eigenstate. The crucial axiom here is that the state of a quantum particle is completely specified by a position at one instant of time. To illustrate the idea, consider a particle confined to a 1-dimensional array of discrete positions at a spacing $`a`$. Let us work in the Heisenberg picture and denote by $`|n,t\rangle `$ the eigenstate of position $`x=na`$ at time $`t`$.
To prepare a position $`n`$ at time $`t`$ means to prepare a state with an uncertain position at time $`t+\mathrm{d}t`$, because any motional information is lacking from $`|n,t\rangle `$. For $`\mathrm{d}t`$ small enough, the uncertainty only relates to positions $`n`$, $`n+1`$, and $`n-1`$. Furthermore, $`n+1`$ and $`n-1`$ will occur symmetrically if we assume the symmetries of a free particle. Thus
$$|n,t\rangle =\alpha |n,t+\mathrm{d}t\rangle +\beta |n+1,t+\mathrm{d}t\rangle +\beta |n-1,t+\mathrm{d}t\rangle $$
(1)
where $`\alpha `$ and $`\beta `$ are some numbers dependent on the size of the time step. For $`\mathrm{d}t\to 0`$ we must have $`\alpha \to 1`$ and $`\beta \to 0`$, hence
$$\alpha =1+\alpha _1\mathrm{d}t+𝒪(\mathrm{d}t^2),\qquad \beta =\beta _1\mathrm{d}t+𝒪(\mathrm{d}t^2)$$
Thus the basic hopping equation (1) converges to the differential equation
$$\frac{\mathrm{d}}{\mathrm{d}t}|n,t\rangle =-\alpha _1|n,t\rangle -\beta _1|n+1,t\rangle -\beta _1|n-1,t\rangle $$
(2)
We now use the statistical interpretation of the scalar product. From
$$\langle n,t|n^{\prime },t\rangle =\delta _{n,n^{\prime }}$$
we find by differentiating with respect to $`t`$ and using (2) that the coefficients $`\alpha _1`$ and $`\beta _1`$ must be purely imaginary. Finally, we consider a general state vector in the Heisenberg picture,
$$|\psi \rangle =\underset{n}{\sum }\psi (n,t)|n,t\rangle $$
(3)
We take $`\mathrm{d}/\mathrm{d}t`$, use (2), put $`x=na`$, and reexpress $`\alpha _1`$ and $`\beta _1`$ by
$$U=i\hbar (\alpha _1+2\beta _1),\qquad \frac{1}{2m}=\frac{a^2\beta _1}{i\hbar }$$
Thus we find
$$i\hbar \frac{\mathrm{d}}{\mathrm{d}t}\psi (x,t)=U\psi (x,t)-\frac{\hbar ^2}{2m}\frac{\psi (x+a,t)+\psi (x-a,t)-2\psi (x,t)}{a^2}$$
This equation converges to the free Schrödinger equation in the continuum limit $`a\to 0`$.
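The convergence can be made quantitative: a plane wave $`e^{ikx}`$ is an exact eigenstate of the discrete operator, with eigenvalue $`U+(\hbar ^2/ma^2)(1-\mathrm{cos}\,ka)`$, which tends to the free-particle energy $`\hbar ^2k^2/2m`$ as $`a\to 0`$. A minimal numerical check (units with $`\hbar =m=1`$):

```python
import numpy as np

def hopping_energy(k, a, U=0.0, hbar=1.0, m=1.0):
    """Eigenvalue of the plane wave exp(ikx) under the discrete operator
    H psi(x) = U psi(x) - (hbar^2/2m) [psi(x+a) + psi(x-a) - 2 psi(x)] / a^2."""
    return U + (hbar**2 / (m * a**2)) * (1.0 - np.cos(k * a))

k = 0.7
exact = 0.5 * k**2                            # hbar^2 k^2 / 2m with hbar = m = 1
err_coarse = abs(hopping_energy(k, 0.1) - exact)
err_fine = abs(hopping_energy(k, 0.01) - exact)
```

The leading correction is $`-\hbar ^2k^4a^2/24m`$, so the error shrinks by two orders of magnitude when the spacing shrinks by one.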
In Section 2, the hopping-parameter description of a Schrödinger particle is discussed in full generality. Hopping amplitudes will not be restricted to next neighbours, and it will only be assumed that the hopping amplitudes realise the full translational and cubical symmetries of the lattice in $`𝒪(1/a^2)`$ while any inhomogeneities in the hopping process are at most of $`𝒪(1/a)`$. Then a (trivial) renormalization scheme exists for the continuum limit $`a\to 0`$ which leads to the standard nonrelativistic Schrödinger equation, with a vector potential and a scalar potential.
In Section 3, the hopping-parameter approach is applied to quantum electrodynamics without charges and currents. This requires the discretization of both the values of a field $`u(x)`$ and its spatial variable $`x`$. The reader of Section 3 is assumed to be somewhat familiar with lattice gauge theory . In fact, the model considered in this section is a Hamiltonian version of the intensively studied $`Z(N)`$ lattice gauge theory . The Hamilton operator of the electromagnetic field is recovered in the twofold limit of $`N\to \mathrm{\infty }`$ and zero lattice spacing. Section 4 contains some concluding remarks.
## 2 Schrödinger particle in 3 dimensions
Consider a simple cubic lattice where $`\stackrel{}{x}=a\stackrel{}{n}`$ is the position vector of a site, $`a`$ is the lattice spacing, and $`\stackrel{}{n}`$ an integer vector. The most general hopping equation for a single-component wave function as defined in (3) is
$$i\hbar \frac{\mathrm{d}}{\mathrm{d}t}\psi (\stackrel{}{x},t)=\underset{\stackrel{}{n}}{\sum }\kappa (\stackrel{}{x},\stackrel{}{n},t)\psi (\stackrel{}{x}+a\stackrel{}{n},t)$$
(4)
The factor of $`i\mathrm{}`$ is only cosmetic, since the hopping parameters $`\kappa (\stackrel{}{x},\stackrel{}{n})`$ can be any complex numbers, so far. Conservation of probability requires
$$\kappa (\stackrel{}{x}-a\stackrel{}{n},\stackrel{}{n},t)=\overline{\kappa (\stackrel{}{x},-\stackrel{}{n},t)}$$
(5)
An important case of reference is that of a free particle, characterized by hopping parameters with the full symmetry of the lattice. Then $`\kappa (\stackrel{}{x},\stackrel{}{n},t)=\kappa _0(\stackrel{}{n})`$ because of translational invariances. Cubic symmetry implies
$$\kappa _0(-\stackrel{}{n})=\kappa _0(\stackrel{}{n})$$
(6)
so that all $`\kappa _0(\stackrel{}{n})`$ are real because of (5). Most importantly, the symmetry also implies $`\underset{\stackrel{}{n}}{\sum }\kappa _0(\stackrel{}{n})n_in_j\propto \delta _{ij}`$. A convenient parametrization is
$$\underset{\stackrel{}{n}}{\sum }\kappa _0(\stackrel{}{n})n_in_j=-\frac{\hbar ^2}{ma^2}\delta _{ij}$$
(7)
The reduced parameter $`m`$ will be identified as the particle mass later on; the sign of $`m`$ is discussed in the Conclusions. In general, the sum in equation (7) need not converge. Assuming convergence here is the basis for the nonrelativistic physics as it emerges in the form of the Schrödinger equation in the continuum limit.
To recover the Schrödinger equation, we Taylor-expand the displaced wave functions on the rhs of (4),
$$\psi (\stackrel{}{x}+a\stackrel{}{n},t)=\psi (\stackrel{}{x},t)+an_i\partial _i\psi (\stackrel{}{x},t)+\frac{1}{2}a^2n_in_j\partial _i\partial _j\psi (\stackrel{}{x},t)+𝒪(a^3)$$
(8)
Again, let us consider a free particle first. Inserting $`\kappa (\stackrel{}{x},\stackrel{}{n},t)=\kappa _0(\stackrel{}{n})`$ in (4) and using (8), (6), and (7) we find
$$i\hbar \frac{\mathrm{d}}{\mathrm{d}t}\psi (\stackrel{}{x},t)=E_0\psi (\stackrel{}{x},t)-\frac{\hbar ^2}{2m}\stackrel{}{\nabla }\cdot \stackrel{}{\nabla }\psi (\stackrel{}{x},t)+𝒪(a)$$
(9)
where $`E_0=\underset{\stackrel{}{n}}{\sum }\kappa _0(\stackrel{}{n})`$ is certainly infinite but does not affect the shape of the wavefunctions. In contrast, the parameter $`m`$ determines the particle mass and must be finite, as anticipated in definition (7).
Now we “turn on” deviations of the hopping parameters from $`\kappa _0(\stackrel{}{n})`$. Let us put
$$\kappa (\stackrel{}{x},\stackrel{}{n},t)=\kappa _0(\stackrel{}{n})+\kappa _1(\stackrel{}{x},\stackrel{}{n},t)$$
(10)
Again, we insert (8) in (4). The multiplicative terms on the rhs of (4) now are $`E_0\psi (\stackrel{}{x},t)+\underset{\stackrel{}{n}}{\sum }\kappa _1(\stackrel{}{x},\stackrel{}{n},t)\psi (\stackrel{}{x},t)`$. The inhomogeneous term can be rewritten as
$$\frac{1}{2}\underset{\stackrel{}{n}}{\sum }\left(\kappa _1(\stackrel{}{x},\stackrel{}{n},t)+\kappa _1(\stackrel{}{x},-\stackrel{}{n},t)\right)\psi (\stackrel{}{x},t)$$
Using (5) and expanding the ensuing displaced argument, we obtain the following form of the multiplication operator:
$$\frac{1}{2}\underset{\stackrel{}{n}}{\sum }\left(\kappa _1(\stackrel{}{x},\stackrel{}{n},t)+\overline{\kappa _1(\stackrel{}{x},\stackrel{}{n},t)}\right)-\frac{1}{2}\underset{\stackrel{}{n}}{\sum }a\stackrel{}{n}\cdot \stackrel{}{\nabla }\,\overline{\kappa _1(\stackrel{}{x},\stackrel{}{n},t)}+𝒪(a^2\kappa _1)$$
This shows that for a finite $`\stackrel{}{x}`$-dependent contribution, the real part of $`\kappa _1`$ must be of $`𝒪(1)`$ while the imaginary part can be of $`𝒪(1/a)`$. Hence, if we define a vector potential
$$\stackrel{}{A}(\stackrel{}{x},t)=\frac{ma}{e\hbar }\underset{\stackrel{}{n}}{\sum }\stackrel{}{n}\,\mathrm{Im}\,\kappa _1(\stackrel{}{x},\stackrel{}{n},t)$$
(11)
then the multiplicative terms of (4) take the form
$$\left(E_0+\underset{\stackrel{}{n}}{\sum }\mathrm{Re}\,\kappa _1(\stackrel{}{x},\stackrel{}{n},t)\right)\psi (\stackrel{}{x},t)+i\frac{e\hbar }{2m}\left(\stackrel{}{\nabla }\cdot \stackrel{}{A}(\stackrel{}{x},t)\right)\psi (\stackrel{}{x},t)$$
(12)
The gradient terms on the rhs of (4) can be written as
$$\frac{a}{2}\stackrel{}{\nabla }\psi (\stackrel{}{x},t)\cdot \underset{\stackrel{}{n}}{\sum }\stackrel{}{n}\left(\kappa (\stackrel{}{x},\stackrel{}{n},t)-\kappa (\stackrel{}{x},-\stackrel{}{n},t)\right)$$
By (10) and (5) this is equal to
$$\frac{a}{2}\stackrel{}{\nabla }\psi (\stackrel{}{x},t)\cdot \underset{\stackrel{}{n}}{\sum }\stackrel{}{n}\left(\kappa _1(\stackrel{}{x},\stackrel{}{n},t)-\overline{\kappa _1(\stackrel{}{x}-a\stackrel{}{n},\stackrel{}{n},t)}\right)$$
The displacement of $`\stackrel{}{x}`$ in $`\overline{\kappa _1(\stackrel{}{x}a\stackrel{}{n},\stackrel{}{n},t)}`$ produces a term of higher order in $`a`$ which can be neglected in the limit $`a0`$. Thus the only relevant contribution to the gradient terms comes from the imaginary part of $`\kappa _1(\stackrel{}{x},\stackrel{}{n},t)`$ and is of the form
$$i\frac{e\hbar }{m}\left(\stackrel{}{\nabla }\psi (\stackrel{}{x},t)\right)\cdot \stackrel{}{A}(\stackrel{}{x},t)$$
(13)
where $`\stackrel{}{A}(\stackrel{}{x},t)`$ is the same as in (11).
With inhomogeneities of $`𝒪(1)`$ in the real part, and of $`𝒪(1/a)`$ in the imaginary part, it is clear that the double-gradient terms of equation (4) are the same as in the free-particle case (9). Collecting all the terms discussed above, we recover from (4) the general, nonrelativistic Schrödinger equation
$$i\hbar \frac{\partial }{\partial t}\psi (\stackrel{}{x},t)=\frac{1}{2m}\left(\frac{\hbar }{i}\stackrel{}{\nabla }-e\stackrel{}{A}(\stackrel{}{x},t)\right)^2\psi (\stackrel{}{x},t)+U(\stackrel{}{x},t)\psi (\stackrel{}{x},t)$$
(14)
with the vector potential of equation (11) and the scalar potential
$$U(\stackrel{}{x},t)=E_0+\underset{\stackrel{}{n}}{\sum }\mathrm{Re}\,\kappa _1(\stackrel{}{x},\stackrel{}{n},t)-\frac{e^2}{2m}\stackrel{}{A}(\stackrel{}{x},t)^2$$
(15)
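As a numerical consistency check of definition (11), consider a one-dimensional Peierls-type ansatz in which the next-neighbour amplitude $`\kappa _0=-\hbar ^2/2ma^2`$ picks up a phase $`e^{\mp iaeA/\hbar }`$ for hops in the $`\pm x`$ direction. This specific form of $`\kappa _1`$ is an assumption made here for illustration, not something derived in the text; with it, Eq. (11) recovers the vector potential up to corrections of $`𝒪(a^2)`$:

```python
import numpy as np

hbar = m = e = 1.0
a = 1e-3
kappa0 = -hbar**2 / (2.0 * m * a**2)   # next-neighbour amplitude; sign chosen so that
                                       # the free equation has the standard kinetic term

A_true = 0.37                          # constant vector potential (illustrative value)
# assumed Peierls-type inhomogeneity: kappa(x, +-1) = kappa0 * exp(-+ i a e A / hbar)
kappa1 = {+1: kappa0 * np.exp(-1j * a * e * A_true / hbar) - kappa0,
          -1: kappa0 * np.exp(+1j * a * e * A_true / hbar) - kappa0}

# Eq. (11) in one dimension: A = (m a / e hbar) * sum_n n * Im kappa_1(n)
A_rec = (m * a / (e * hbar)) * sum(n * kappa1[n].imag for n in (+1, -1))
```

Note that $`\mathrm{Im}\,\kappa _1`$ is of $`𝒪(1/a)`$ here, exactly as required by the order-of-magnitude argument above, and the recovered value is $`A_{\mathrm{rec}}=(\hbar /ea)\,\mathrm{sin}(aeA/\hbar )=A\,(1+𝒪(a^2))`$.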
In canonical quantization, the prescription is to identify $`U(\stackrel{}{x},t)`$ and $`\stackrel{}{A}(\stackrel{}{x},t)`$ with the corresponding functions of the classical Hamiltonian. This amounts to an extrapolation into the microscopic domain. The corresponding procedure in the present context is as follows. By Ehrenfest’s theorem, eq. (14) will reproduce the classical equations of motion for the centre of a wave packet in the limit $`\hbar \to 0`$. The classical $`U(\stackrel{}{x},t)`$ and $`\stackrel{}{A}(\stackrel{}{x},t)`$ then coincide with those in the Schrödinger equation. Thus, if desired, $`U(\stackrel{}{x},t)`$ and $`\stackrel{}{A}(\stackrel{}{x},t)`$ can be extrapolated as with canonical quantization.
In concluding the section, it should be noted that the order-of-magnitude assumptions for the hopping parameters depend on the further assumption that no dramatic cancellations occur between $`\kappa (\stackrel{}{x},\stackrel{}{n},t)`$ for different $`\stackrel{}{n}`$. Of course, those cancellations would require some extra reason for a fine-tuning. In the absence of a reason, the assumptions describe the most general and, hence, the most likely set of parameters consistent with the constraints.
## 3 Free electromagnetic field
This section is to demonstrate that “unitary hopping” can be a useful concept also for quantum field theories. We here consider source-free $`U(1)`$ gauge theory. Its Hamilton operator in the temporal gauge is an infinite-dimensional version of (14). A “hopping” scenario requires the configuration space to be discrete. Thus local gauge invariance will have to be discretized, too. In case of $`U(1)`$ this can be done in a way that preserves an exact local gauge group, namely $`Z(N)`$, whose limit $`N\to \mathrm{\infty }`$ reproduces $`U(1)`$.
In lattice gauge theory, a gauge field lives on the links between next-neighbour lattice sites. A link can be specified by the site $`\stackrel{}{s}=(n_x,n_y,n_z)`$ from which it emanates in a positive direction, and by the corresponding $`k=1,2,3`$. In $`Z(N)`$ gauge theory the link variables are phase factors of the form
$$e^{2\pi il/N},\qquad l=0,1,\mathrm{\dots },N-1$$
(16)
They are related to the electromagnetic vector potential $`A(\stackrel{}{s},k)`$, integrated along the link, by
$$\mathrm{exp}\left(2\pi il/N\right)=\mathrm{exp}\left(iaeA/\mathrm{}\right)$$
(17)
Thus a $`Z(N)`$ gauge field configuration is determined by the numbers
$$l(\stackrel{}{s},k)\equiv l(n_x,n_y,n_z,k),\qquad n_i=0,\pm 1,\pm 2,\mathrm{\dots },\qquad k=1,2,3$$
(18)
We shall indicate by omitting the arguments $`\stackrel{}{s}`$ and $`k`$ that we mean the configuration as a whole.
The Hamiltonian will be postulated below to be invariant under charge conjugation $`𝒞`$, and under space inversion $`𝒫`$ about any point $`\stackrel{}{s}_0`$. As it follows from the relation (17) to the vector potentials (see also ), $`𝒞`$ and $`𝒫_{\stackrel{}{s}_0}`$ are characterized by their action on the link variables,
$`𝒞l(\stackrel{}{s},k)`$ $`=`$ $`-l(\stackrel{}{s},k)`$ (19)
$`𝒫_{\stackrel{}{s}_0}l(\stackrel{}{s},k)`$ $`=`$ $`-l(2\stackrel{}{s}_0-\stackrel{}{s}-\widehat{k},k)`$ (20)
We also postulate invariance under local $`Z(N)`$ gauge transformations. These are characterized by a number $`g(\stackrel{}{s})=0,1,\mathrm{},N1`$ on each lattice site. The link field configuration transforms according to
$$l^{\prime }(\stackrel{}{s},k)=l(\stackrel{}{s},k)+g(\stackrel{}{s}+\widehat{k})-g(\stackrel{}{s})$$
The elementary gauge-invariant construct on a time slice is the plaquette variable
$$p(\stackrel{}{s},i,k)=l(\stackrel{}{s},i)+l(\stackrel{}{s}+\widehat{i},k)-l(\stackrel{}{s}+\widehat{k},i)-l(\stackrel{}{s},k)$$
(21)
Gauge-invariant, too, is any shift of a link variable; in particular,
$$l(\stackrel{}{s},k)\to l(\stackrel{}{s},k)\pm 1\text{ if and only if }l^{\prime }(\stackrel{}{s},k)\to l^{\prime }(\stackrel{}{s},k)\pm 1$$
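The gauge invariance of the plaquette variable (21) is easy to verify for a small periodic lattice. The sketch below is a minimal check; the lattice size and the value of $`N`$ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 8, 4                                   # Z(N) group, L^3 periodic lattice
link = rng.integers(0, N, size=(L, L, L, 3))  # l(s, k): integers mod N on each link

def plaquette(link, s, i, k):
    """p(s,i,k) = l(s,i) + l(s+i^,k) - l(s+k^,i) - l(s,k)  mod N, Eq. (21)."""
    s = np.asarray(s)
    step = np.eye(3, dtype=int)
    si, sk = tuple((s + step[i]) % L), tuple((s + step[k]) % L)
    s = tuple(s)
    return (link[s][i] + link[si][k] - link[sk][i] - link[s][k]) % N

def gauge_transform(link, g):
    """l'(s,k) = l(s,k) + g(s+k^) - g(s)  mod N."""
    out = link.copy()
    for k in range(3):
        out[..., k] = (link[..., k] + np.roll(g, -1, axis=k) - g) % N
    return out

g = rng.integers(0, N, size=(L, L, L))        # random Z(N) gauge function g(s)
gauged = gauge_transform(link, g)
# every plaquette of `gauged` equals the corresponding plaquette of `link`
```

The $`g`$ contributions cancel pairwise around the plaquette, so the check succeeds for any random gauge function.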
The gauge field is quantized by assigning a probability amplitude $`\psi (l,t)`$ to each link-field configuration $`l`$. For this “wavefunction” the general form of a unitary-hopping equation is
$$i\hbar \frac{\mathrm{d}}{\mathrm{d}t}\psi (l,t)=\underset{\mathrm{\Delta }l}{\sum }\kappa (l,\mathrm{\Delta }l)\psi (l+\mathrm{\Delta }l,t)$$
(22)
Gauge invariance of the process requires, in the notation of (21),
$$\kappa (l,\mathrm{\Delta }l)=\kappa (p,\mathrm{\Delta }l)$$
Locality of link interactions is not as uniquely defined—a fact being utilized with the “improved actions” of numerical lattice gauge theories . We shall only consider the simplest realization of locality, assuming
* Link-changing processes are independent on different links.
* A plaquette can influence a change on its own links, at most.
These assumptions correspond to a pre-relativistic, purely spatial notion of locality—no reference whatsoever is made to the phenomenon of light. By the assumption of independence, a change on $`k`$ links within the same time interval $`\mathrm{d}t`$ will come with a factor of $`(\mathrm{d}t)^k`$ and will contribute to the time derivative in equation (22) only for $`k=1`$. Thus the sum over all link-changes $`\mathrm{\Delta }l`$ reduces to a sum over one-link changes. For further simplification, we only consider a change by one unit, corresponding to nearest-neighbour hopping in configuration space. Thus (22) takes the form
$$i\hbar \frac{\mathrm{d}}{\mathrm{d}t}\psi (l,t)=\underset{\genfrac{}{}{0pt}{}{\mathrm{links}}{\stackrel{}{s},i}}{\sum }\underset{\pm }{\sum }\kappa _\pm (p;\stackrel{}{s},i)\psi (l\pm u_{\stackrel{}{s},i},t)\stackrel{\mathrm{def}}{=}H\psi (l,t)$$
(23)
where
$$u_{\stackrel{}{s},i}=\{\begin{array}{cc}1& \text{ on link }\stackrel{}{s},i\hfill \\ 0& \text{ elsewhere}\hfill \end{array}$$
We intend to Taylor-expand the wavefunction. Instead of the derivative $`\partial /\partial l`$ on each link we prefer to use the lattice version of the functional derivative $`\delta /\delta A`$ with respect to the vector potential. $`l`$ and $`A`$ are related through equation (17). Hence, $`\partial /\partial l`$ equals the partial derivative $`(2\pi \hbar /eNa)(\partial /\partial A)`$. Now $`\partial /\partial A`$ can be expressed by the functional derivative $`\delta /\delta A`$ essentially by introducing factors so that in the characteristic relation $`\partial A(\stackrel{}{s},i)/\partial A(\stackrel{}{s}^{\prime },i^{\prime })=\delta _{\stackrel{}{s}\stackrel{}{s}^{\prime }}\delta _{ii^{\prime }}`$ the $`\delta _{\stackrel{}{s}\stackrel{}{s}^{\prime }}`$ is changed into the lattice delta function $`a^{-3}\delta _{\stackrel{}{s}\stackrel{}{s}^{\prime }}`$. Thus,
$$\frac{\partial }{\partial l(\stackrel{}{s},i)}=\frac{2\pi \hbar a^2}{eN}\frac{\delta }{\delta A(\stackrel{}{s},i)}$$
Expanding the wavefunction up to order $`a^4`$ we have
$$\psi (l\pm u_{\stackrel{}{s},i},t)=\psi (l,t)\pm \frac{2\pi \mathrm{}a^2}{eN}\frac{\delta \psi (l,t)}{\delta A(\stackrel{}{s},i)}+\frac{2\pi ^2\mathrm{}^2a^4}{e^2N^2}\frac{\delta ^2\psi (l,t)}{\delta A(\stackrel{}{s},i)^2}$$
(24)
The first-derivative term is immediately discarded if we postulate that the Hamiltonian be invariant under space inversion $`𝒫`$ (cf. (20)). This is because the plaquette variables in the hopping amplitudes $`\kappa ^\pm (p;\stackrel{}{s},i)`$ are invariant under $`𝒫`$ whereas $`l`$ and hence $`\partial /\partial A`$ changes sign.
It remains to discuss the multiplicative terms of (23). To expand the hopping amplitudes in a power series in $`a`$, we note that the magnetic flux density $`B_i=\frac{1}{2}ϵ_{ijk}F_{jk}`$ is related to the plaquette variable by
$$\mathrm{exp}\left(ia^2eF_{jk}(\stackrel{}{s})/\mathrm{}\right)=\mathrm{exp}\left(2\pi ip(\stackrel{}{s},j,k)/N\right)$$
Thus, at a given flux density of $`𝒪(1)`$, the plaquette phase factor deviates from $`1`$ only in $`𝒪(a^2)`$, while the plaquette variable $`p`$ is of $`𝒪(a^2N)`$. To be on the safe side, we therefore expand the hopping amplitude as a function of $`a^2F_{ij}`$ instead of $`p`$. Furthermore, we invoke our locality postulates to restrict plaquettes with an influence on link $`(\stackrel{}{s},i)`$ to the four cases $`p(\stackrel{}{s},i,j)`$ and $`p(\stackrel{}{s}-\widehat{j},i,j)`$ with $`j\ne i`$. Thus, expanding $`\kappa _\pm (p;\stackrel{}{s},i)`$ to $`𝒪(a^4)`$ we obtain
$$\kappa _\pm ^{(0)}(\stackrel{}{s},i)+\frac{ea^2}{\hbar }\underset{j\ne i}{\sum }\left(\kappa _\pm ^{(1)}(\stackrel{}{s},i,j)F_{ij}(\stackrel{}{s})+\kappa _\pm ^{(1)}(\stackrel{}{s},i,j)F_{ij}(\stackrel{}{s}-\widehat{j})\right)+$$
(25)
$$+\frac{e^2a^4}{\hbar ^2}\underset{j,j^{}\ne i}{\sum }\kappa _\pm ^{(2)}(\stackrel{}{s},i,j,j^{})F_{ij}(\stackrel{}{s})F_{ij^{}}(\stackrel{}{s})$$
where in the last term we have discarded any shift of $`\stackrel{}{s}`$ by $`\widehat{j}`$ or $`\widehat{j}^{}`$ as this would lead to an $`𝒪(a^5)`$ contribution.
The $`a^2`$ terms of expression (25) must vanish if the Hamiltonian is to be invariant under charge conjugation. This is because $`𝒞`$ (cf. (19)) reverses the values of both links and plaquettes, hence reverses the sign of the $`a^2`$ term in (25), while all remaining terms of (25) and also of (24) are $`𝒞`$-invariant.
By translation invariance of the hopping process, all $`\kappa `$’s must be independent of the site vector $`\stackrel{}{s}`$. By invariance under reflections about a coordinate plane, $`\kappa _\pm ^{(2)}(\stackrel{}{s},i,j,j^{})`$ in the $`F^2`$ term of (25) must be proportional to $`\delta _{jj^{}}`$. Hence, by cubic rotational invariance, it must be independent of $`i`$. For the same reason, $`\kappa _\pm ^{(0)}(i)`$ as the relevant coefficient of $`\delta ^2\psi (l,t)/\delta A(\stackrel{}{s},i)^2`$ must be independent of $`i`$.
Inserting in (23) the remaining terms of (24) and (25) we identify the Hamiltonian of free QED as
$$H=v+\frac{1}{2}\underset{\stackrel{}{s}}{\sum }a^3\underset{i}{\sum }\left(-\frac{\hbar ^2}{ϵ_0}\frac{\delta ^2}{\delta A(\stackrel{}{s},i)^2}+\frac{1}{\mu _0}B_i^2(\stackrel{}{s})\right)+𝒪(a^5)$$
where $`v=\sum _{\stackrel{}{s},i}(\kappa _+^{(0)}+\kappa _{-}^{(0)})`$ is the vacuum energy and where
$$\frac{1}{ϵ_0}=\frac{4\pi ^2a}{e^2N^2}(\kappa _+^{(0)}+\kappa _{-}^{(0)}),\qquad \frac{1}{\mu _0}=\frac{4e^2a}{\hbar ^2}(\kappa _+^{(2)}(1,1)+\kappa _{-}^{(2)}(1,1))$$
(26)
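The identification of $`1/ϵ_0`$ in (26) can be checked against the quadratic Taylor coefficient of (24): multiplying that coefficient by $`\kappa _+^{(0)}+\kappa _{-}^{(0)}`$ must reproduce the electric-term coefficient $`\frac{1}{2}a^3\hbar ^2/ϵ_0`$ of the Hamiltonian. A symbolic bookkeeping sketch, ignoring the overall signs which (as the Conclusions note) require extra arguments:

```python
import sympy as sp

a, e, N, hbar, kp, km = sp.symbols('a e N hbar kappa_p kappa_m', positive=True)

inv_eps0 = 4*sp.pi**2*a/(e**2*N**2)*(kp + km)            # eq. (26)
taylor = 2*sp.pi**2*hbar**2*a**4/(e**2*N**2)*(kp + km)   # coefficient from eq. (24)
hamiltonian = sp.Rational(1, 2)*a**3*hbar**2*inv_eps0    # electric-term coefficient

assert sp.simplify(taylor - hamiltonian) == 0
```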
In the limit $`a\to 0`$ we put $`\stackrel{}{x}=a\stackrel{}{s}`$ and $`\sum _{\stackrel{}{s}}a^3\to \int \mathrm{d}^3x`$ to obtain the familiar form
$$H=v+\frac{ϵ_0}{2}\int \stackrel{}{E}^2(\stackrel{}{x})\mathrm{d}^3x+\frac{1}{2\mu _0}\int \stackrel{}{B}^2(\stackrel{}{x})\mathrm{d}^3x$$
(27)
where
$$E_i(\stackrel{}{s})=\frac{i\hbar }{ϵ_0}\frac{\delta }{\delta A(\stackrel{}{s},i)}$$
## 4 Conclusions
We have derived the Schrödinger equation for a nonrelativistic scalar particle and for the free electromagnetic field, starting out from the superposition principle for state vectors, using the statistical interpretation, and exploiting spatial symmetries to a large extent. The ambition was to avoid any use of the distinctly non-quantal concept of trajectories, even in the path-integral sense.
In the case of a free particle, which has all the exploitable symmetries, the approach taken here should be compared with the general, group-theoretical approach to quantum mechanics as presented, for example, in . The main difference is that we found it unnecessary to consider any classical space-time symmetries (Galilei or Lorentz transformations). Rather, the structure of the dynamics follows from spatial symmetries together with the absence of motional information from states such as $`|\stackrel{}{x},t\rangle `$. That absence induces symmetries of the time evolution which, however, can be realized only by way of a superposition.
As we have seen, Taylor expansions led to second-order derivatives and, in the case of QED, to the $`B^2`$ magnetic energy in the Hamiltonian. The sign of the Taylor coefficients, though, must be determined by extra arguments. For the mass parameter $`m`$ in equation (7), it is a matter of convention whether kinetic energies are always taken as positive or always negative, so both signs of $`m`$ would seem to make physical sense. A similar remark applies to the case of free QED, except for the relative sign of the parameters $`ϵ_0`$ and $`\mu _0`$ in (26). Here an additional assumption is required, such as the existence of a ground state, to recover the positive phenomenological sign.
For the definition of the mass in (7) it was essential that a free particle find identical hopping conditions on every site of the lattice. But this is also what characterizes the lattice as a Cartesian coordinate system. In the case of QED, a Cartesian structure is embodied in the local $`Z(N)`$ gauge invariance. Thus the unitary-hopping scenario may explain why Cartesian coordinates play such a preferred role in a wide range of quantum systems .
Within the “physical” subspace of locally gauge-invariant states, the Hamiltonian dynamics of the electromagnetic field as described by (27) is automatically Lorentz invariant. This is quite remarkable since we derived the dynamics from quantum-mechanical principles in which the roles of space and time are initially very different. A similar observation was made by Bialynicki-Birula with respect to the Weyl equation.
# Electric Nusselt number characterization of electroconvection in nematic liquid crystals
## Abstract
We develop a characterization method for electroconvection structures in a planar nematic liquid crystal layer through a study of the electric current transport. Because the applied potential difference has a sinusoidal time dependence, we define two electric Nusselt numbers corresponding to the in-phase and out-of-phase components of the current. These Nusselt numbers are predicted theoretically using a weakly nonlinear analysis of the standard model. Our measurements of the electric current confirm that both numbers vary linearly with the distance from onset until the occurrence of secondary instabilities; these instabilities also have a distinct Nusselt number signature. A systematic comparison between our theoretical and experimental results, using no adjustable parameters, demonstrates reasonable agreement. This represents a quantitative test of the standard model completely independent from traditional, optical techniques of studying electroconvection.
Although spontaneous pattern formation within structureless environments pervades nature , a comprehensive understanding of this complex behavior remains elusive. Therefore, well-controlled experimental systems exhibiting pattern formation are extensively studied; among these, thermoconvection of a layer of fluid heated from below and electroconvection of a nematic liquid crystal layer are particularly interesting since they allow very large aspect ratio geometries. In both of these systems, convection structures form spontaneously when the applied stress, i.e. the gradient of either the temperature or the electric potential, exceeds a critical value. These inherently non-equilibrium structures can persist only when there is an energy source to overcome the dissipation associated with the flow. Therefore, energy transport studies represent a particularly valuable technique for elucidating the essential nature of the instabilities that lead to these patterns. For example, the first accurate determination of the stress necessary to induce thermoconvection was made by measuring the heating power required to sustain a desired temperature difference across a thin layer of water . This power is customarily expressed as the Nusselt number, defined as the heat flow across a fluid layer relative to the heat flow required in the absence of fluid flow. Nusselt number measurements remain a method of choice for studies not only of the structured states that occur during thermoconvection when the stress is only slightly above its critical value but also of the turbulent flow that occurs when the stress is enormous . By contrast, the electroconvection of a planar nematic liquid crystal layer, which represents a similar but fully anisotropic model pattern-forming system, has previously been studied only with qualitative or semi-quantitative optical techniques.
Reports of energy flow measurement during electroconvection are rare , and no theoretical studies of the energy transport exist for this system. The aim of this Letter is to fill this gap.
Electroconvection is obtained when an a.c. electric potential, $`\sqrt{2}V\mathrm{cos}(\omega t)`$, is applied to two horizontal ($`\widehat{𝐳}`$) electrodes separated by a distance $`d`$ confining a nematic liquid crystal. Here we focus on the planar anchoring case where the director field $`𝐧`$ is fixed to $`\widehat{𝐱}`$ at the confining electrode plates. The instability relies on a coupling between $`𝐧`$, the velocity field $`𝐯`$ and the induced charge density $`\rho _e`$, or equivalently the induced electric potential $`\varphi `$, such that the full electric field reads $`𝐄=\sqrt{2}V/d[\mathrm{cos}(\omega t)\widehat{𝐳}-d\mathbf{\nabla }\varphi ]`$. At moderate frequencies $`\omega `$, when $`V`$ exceeds a critical value $`V_c`$ , the instability sets in in the form of normal conduction rolls of wavevector $`𝐪=q\widehat{𝐱}`$ ; at large frequencies dielectric rolls are observed, but we do not consider this regime in this work. These phenomena are well explained via the standard model (SM) for electroconvection, where the charge conduction in the liquid crystal is assumed Ohmic. In addition to linear properties (values of $`V_c`$ and $`q`$ as a function of $`\omega `$), the SM explains several secondary instabilities that are experimentally observed, such as the transitions to zig-zag rolls, stationary and oscillatory bimodal patterns and abnormal rolls . The one phenomenon which the SM has been unable to predict is the traveling roll state. For this, a new approach, the weak electrolyte model, was developed, in which the electrical conductivity is assumed to be due to two species of dissociated ions having different mobilities . With this model a semi-quantitative agreement with experimental results on traveling rolls was demonstrated, but fitting parameters were necessary. Because traveling rolls are not encountered at low frequencies where we performed our experiments, and because the SM is much less complicated, we did not use the weak electrolyte model.
The total current $`I`$ through the nematic cell enclosed by the horizontal electrodes of area $`S`$ can be calculated as the circulation of the magnetic induction $`𝐇`$. From the Maxwell-Ampère equation, $`\mathbf{\nabla }\times 𝐇=𝐣+\partial _t𝐃`$, $`I`$ is the sum of the conduction and displacement currents:
$$I=\int _S\left(j_z+\partial _tD_z\right)𝑑x𝑑y$$
(1)
where, within the SM, $`𝐣=\sigma _{\perp }𝐄+\sigma _a(𝐧\cdot 𝐄)𝐧+\rho _e𝐯`$ ; $`𝐃=ϵ_{\perp }𝐄+ϵ_a(𝐧\cdot 𝐄)𝐧`$ ; $`\sigma _{\perp }`$ ($`\sigma _{\parallel }`$) are the conductivities perpendicular (parallel) to $`𝐧`$ , and $`ϵ_{\perp }`$ ($`ϵ_{\parallel }`$) are the dielectric permittivities perpendicular (parallel) to $`𝐧`$ ; $`\sigma _a=\sigma _{\parallel }-\sigma _{\perp }`$ and $`ϵ_a=ϵ_{\parallel }-ϵ_{\perp }`$ are the corresponding anisotropies. Note that the surface integral in eq. (1) does not depend on the $`z`$-value ($`-d/2\le z\le d/2`$) chosen, because of the Maxwell-Ampère equation. In the quiescent (no convection) state $`𝐧=\widehat{𝐱}`$, $`𝐯=\mathrm{𝟎}`$ and $`\varphi =0`$, therefore
$$I=I^0=I_r^0\mathrm{cos}(\omega t)-I_i^0\mathrm{sin}(\omega t)=\frac{\sqrt{2}VS}{d}[\sigma _{\perp }\mathrm{cos}(\omega t)-ϵ_{\perp }\omega \mathrm{sin}(\omega t)].$$
(2)
In the convecting state all fields $`𝐧,𝐯`$ and $`\varphi `$ are modified, as is $`I`$. Within the SM, for homogeneous stationary roll solutions
$$I=I_r\mathrm{cos}(\omega t)-I_i\mathrm{sin}(\omega t)+\text{higher temporal harmonics}$$
(3)
where the amplitudes of the higher temporal harmonics are expected to be much smaller than $`I_r`$ and $`I_i`$ , at least at intermediate frequencies $`\omega \sim 1/\tau _0`$ where $`\tau _0=ϵ_{\perp }/\sigma _{\perp }`$ is the charge-diffusion time. We define the real and imaginary reduced Nusselt numbers, $`𝒩_r`$ and $`𝒩_i`$ , as $`I_r/I_r^0-1`$ and $`I_i/I_i^0-1`$ , respectively. Thus $`𝒩_r=𝒩_i=0`$ in the quiescent state, while in the convecting state $`𝒩_r`$ measures the excess energy dissipation due to convection of the nematic liquid crystal, that is the time average $`\langle \sqrt{2}V\mathrm{cos}(\omega t)I(t)\rangle _t=(1+𝒩_r)V^2\sigma _{\perp }S/d`$. In heuristic terms the effective resistance of the nematic layer is changed by convection from $`R_0=d/(\sigma _{\perp }S)`$ to $`R=R_0/(1+𝒩_r)`$ ; equivalently the imaginary Nusselt number measures the change in the effective capacitance of the nematic layer, $`C_0=ϵ_{\perp }S/d`$ in the quiescent state, $`C=C_0(1+𝒩_i)`$ in the convecting state. When the reduced distance from onset $`ϵ\equiv V^2/V_c^2-1`$ is small, the electric Nusselt numbers can be calculated for homogeneous rolls using weakly nonlinear methods. Assuming that the leading convection amplitude, $`A`$, associated with the linear roll mode, remains small, a systematic expansion in powers of $`A`$ is performed. After adiabatic elimination of the slave modes and calculation of the resonant saturating cubic terms, approximate roll solutions are obtained together with the relation $`A(ϵ)=a_0\sqrt{ϵ}`$ . The current can then be calculated from eq. (1). For symmetry reasons the first contribution from the convection modes comes at order $`A^2`$, and therefore one expects $`𝒩_r\propto A^2\propto ϵ`$, $`𝒩_i\propto A^2\propto ϵ`$ in the weakly nonlinear regime. That is, Nusselt numbers allow a direct measurement of the convection amplitude $`A`$, and therefore a test of the supercritical law $`A(ϵ)=a_0\sqrt{ϵ}`$ .
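In practice $`I_r`$ and $`I_i`$ are the two lock-in quadratures of the current at the drive frequency. A minimal numerical sketch of this extraction, applied to a synthetic quiescent-state current of the form of eq. (2) with purely illustrative amplitudes:

```python
import numpy as np

def fundamental_quadratures(t, current, omega):
    """Project a measured current onto cos(wt) and -sin(wt), lock-in style.
    Assumes t spans an integer number of periods with uniform sampling."""
    I_r = 2.0*np.mean(current*np.cos(omega*t))
    I_i = -2.0*np.mean(current*np.sin(omega*t))
    return I_r, I_i   # I = I_r cos(wt) - I_i sin(wt) + higher harmonics

# synthetic quiescent-state current; amplitudes are illustrative, not measured
omega = 2.0*np.pi*60.0
Ir0, Ii0 = 3.0e-7, 1.2e-7
t = np.linspace(0.0, 10*2.0*np.pi/omega, 20000, endpoint=False)
I = Ir0*np.cos(omega*t) - Ii0*np.sin(omega*t)

Ir, Ii = fundamental_quadratures(t, I, omega)
Nr, Ni = Ir/Ir0 - 1.0, Ii/Ii0 - 1.0   # reduced Nusselt numbers: zero below onset
```

Below onset both reduced Nusselt numbers vanish, as the synthetic case recovers.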
In order to make this clear and to obtain approximate analytic formulae, one can first use the quasi-unidimensional approximation, where all fields are considered at the middle of the layer ($`z=0`$) and only their $`x`$-dependence is kept. The linear normal roll mode then assumes the form $`n_z=AN_z\mathrm{sin}(qx),v_z=A/(q\tau _0)V_z\mathrm{cos}(qx),\varphi =A/(qd)[\mathrm{\Phi }_c\mathrm{cos}(\omega t)+\mathrm{\Phi }_s\mathrm{sin}(\omega t)]\mathrm{cos}(qx)`$ where, as in the rest of our theoretical calculations, we only keep the lowest nontrivial time-mode for each field. We also choose as a normalization condition $`N_z=1`$ ; then $`V_z,\mathrm{\Phi }_c`$ and $`\mathrm{\Phi }_s`$ are calculated at fixed frequency by solving the linear neutral eigenproblem. With $`\rho _e=\mathbf{\nabla }\cdot 𝐃`$, $`j_z+\partial _tD_z`$ can be easily calculated at $`z=0`$. Keeping only the horizontally homogeneous terms because of the surface integral in eq. (1), one obtains to lowest order in $`A`$
$`𝒩_r`$ $`=`$ $`{\displaystyle \frac{A^2}{2}}\left[\sigma _a^{\prime }N_z(N_z-\mathrm{\Phi }_c)+ϵ_{\parallel }^{\prime }\mathrm{\Phi }_cV_z-ϵ_a^{\prime }N_z(V_z+\omega \tau _0\mathrm{\Phi }_s)\right]`$ (4)
$`𝒩_i`$ $`=`$ $`{\displaystyle \frac{A^2}{2}}\left[ϵ_a^{\prime }N_z(N_z-\mathrm{\Phi }_c)+{\displaystyle \frac{\mathrm{\Phi }_s}{\omega \tau _0}}(\sigma _a^{\prime }N_z-ϵ_{\parallel }^{\prime }V_z)\right]`$ (5)
with $`\sigma _a^{\prime }=\sigma _a/\sigma _{\perp }`$, $`ϵ_a^{\prime }=ϵ_a/ϵ_{\perp }`$ and $`ϵ_{\parallel }^{\prime }=ϵ_{\parallel }/ϵ_{\perp }`$ . For standard nematic materials with large positive $`\sigma _a^{\prime }`$ (see e.g. eq. (7)), the leading term in $`𝒩_r`$ eq. (4) is the anisotropic conduction term in $`\sigma _a^{\prime }N_z^2`$, which imposes a positive value of $`𝒩_r`$ . One thus expects that the tilt of the director out of the plane in roll structures will enhance the electrical conduction of the layer and hence the in-phase current. Concerning $`𝒩_i`$ eq. (5), it should be noted that $`\mathrm{\Phi }_s/\omega `$ tends to a finite positive value when $`\omega \to 0`$. Two terms of eq. (5) control the sign of $`𝒩_i`$ . The first term in $`ϵ_a^{\prime }N_z^2`$ reveals a reduction of the effective capacitance of the cell due to the director tilt, since the dielectric anisotropy $`ϵ_a^{\prime }`$ of the nematic materials used in electroconvection is usually negative. The other important contribution is the positive term in $`\sigma _a^{\prime }N_z\mathrm{\Phi }_s/\omega `$ , which expresses the fact that the potential modulation induced by the convection creates, by coupling with the director tilt, an out-of-phase current $`I_i>0`$ (see eq. (3)). Since $`\mathrm{\Phi }_s/\omega `$ decreases with $`\omega `$, this positive term can compensate the negative term in $`ϵ_a^{\prime }N_z^2`$ only at low frequencies, and only for nematics with large $`\sigma _a^{\prime }`$ . A numerical calculation, using a standard Galerkin technique to expand the $`z`$-dependence of all fields in test functions, can provide a more accurate evaluation of $`𝒩_r`$ and $`𝒩_i`$ . For this purpose we have modified the code developed in to use Chebyshev polynomials as the test functions in order to accelerate the convergence (typically 4 $`z`$-modes were sufficient), and inserted a procedure to calculate the Nusselt numbers. For convenience the current eq. (1) is evaluated at the lower plate $`z=-d/2`$ where, because of the boundary conditions, $`j_z+\partial _tD_z`$ reduces to $`(\sigma _{\perp }+ϵ_{\perp }\partial _t)E_z`$.
Thus, since the convection-induced potential is even under $`z\to -z`$ at linear order, but odd at quadratic order, one sees that the leading contribution to $`I`$ comes from the potential part of the homogeneous quadratic slave mode noted $`A^2V_2(𝐪,-𝐪)`$ in eq. (27) of . Of course the saturation at cubic order also needs to be calculated in order to provide the law $`A=a_0\sqrt{ϵ}`$. We will return to the numerical results (Fig. 3), which confirm the trends found from the analytic formulae eqs. (4), (5), after presenting our experimental results.
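The quasi-one-dimensional formulae (4)-(5), with the signs as read here, are straightforward to evaluate once the eigenmode amplitudes are known. In the sketch below the input values are purely illustrative placeholders for the output of the linear neutral eigenproblem; only $`N_z=1`$ reflects the normalization chosen above:

```python
def nusselt_quasi1d(A, Nz, Vz, Phic, Phis, omega_tau0, sig_a, eps_a, eps_par):
    """Quasi-1D Nusselt numbers, eqs. (4)-(5); inputs are the primed material
    ratios (sig_a = sigma_a', etc.) and the linear-mode amplitudes."""
    Nr = 0.5*A**2*(sig_a*Nz*(Nz - Phic) + eps_par*Phic*Vz
                   - eps_a*Nz*(Vz + omega_tau0*Phis))
    Ni = 0.5*A**2*(eps_a*Nz*(Nz - Phic)
                   + (Phis/omega_tau0)*(sig_a*Nz - eps_par*Vz))
    return Nr, Ni

# illustrative inputs only; Nz is normalized to 1 as in the text
Nr, Ni = nusselt_quasi1d(A=0.1, Nz=1.0, Vz=0.5, Phic=0.3, Phis=0.2,
                         omega_tau0=0.5, sig_a=0.5, eps_a=-0.08, eps_par=0.92)
```

With a large positive $`\sigma _a^{\prime }`$ the anisotropic conduction term dominates and $`𝒩_r>0`$, as the sample evaluation confirms.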
We use the “classical” arrangement based on a pre-fabricated liquid crystal cell. The glass plates have no spacers, glue, etc. within the active area; they are separated by $`d=23.4\pm 0.5\mu `$m. With air only between the plates, we measure, using an auto-balancing 1 kHz bridge, the capacitance of the cell in order to determine accurately (within 8 ppm) the ratio $`S/d`$ (nominally $`S=5\mathrm{m}\mathrm{m}\times 5\mathrm{m}\mathrm{m}`$). After this measurement the nematic liquid crystal methoxy-benzylidene butyl-aniline (MBBA) , doped with 0.0005% tetrabutyl ammonium bromide, is introduced between the transparent conducting electrodes. The filled cell is placed in a temperature controlled housing, and then introduced between the pole faces of a large electromagnet. As the nematic liquid crystal undergoes the magnetically induced splay Frederiks transition, the capacitance and conductance of the cell are monitored. From these measurements we obtain both electric conductivities and both dielectric constants . Hence, we measure in situ all the electrical transport properties of the specific nematic liquid crystal used. For the experiments reported here, all at $`28^{}`$C, we find
$`\sigma _{\perp }=(8.5\pm 0.8)\times 10^{-8}(\mathrm{\Omega }\mathrm{m})^{-1}`$ $`,`$ $`\sigma _a^{\prime }=\sigma _a/\sigma _{\perp }=0.35\pm 0.04,`$ (6)
$`ϵ_{\perp }=(4.65\pm 0.03)ϵ_0`$ $`,`$ $`ϵ_a^{\prime }=ϵ_a/ϵ_{\perp }=-0.080\pm 0.001,`$ (7)
with $`ϵ_0`$ the vacuum dielectric permittivity. After these measurements, the nematic cell is transferred to the stage of a polarizing microscope so that shadowgraph images can be obtained concomitantly with the electric current measurements. A function generator is used to produce a sinusoidal voltage signal which is in turn amplified and applied to the cell. The path to ground for the current traversing the cell is through a current-to-voltage converter. The output signal from this converter is measured by a lock-in amplifier, whose reference signal is supplied by the original function generator. Before any measurements are taken, the nematic cell is replaced by a purely resistive load and the phase setting on the lock-in is adjusted to zero the out-of-phase current component. The nematic cell is then re-inserted. Then, at a selected frequency, $`V`$ is raised in small steps. At each step, after waiting several seconds, $`I_r`$ and $`I_i`$ are recorded . This proceeds until a maximum desired $`V`$ (well above the threshold value $`V_c`$) is reached. Then the process is reversed, and the currents recorded as $`V`$ is decreased. The difference in current for increasing vs decreasing $`V`$ is less than 2%. When $`V`$ is raised above $`V_c`$ , the electric current traversing the liquid crystal measurably deviates from its value in the quiescent state, $`I^0`$. In order to determine $`V_c`$ from either the in-phase or out-of-phase current data, we first determine a baseline for $`I_r^0`$ ($`I_i^0`$) by fitting a straight line to $`I_r`$ ($`I_i`$) vs $`V`$ for $`V`$ much smaller than $`V_c`$ ; see Fig. 1. These values of $`I_r^0/V`$ and $`I_i^0/V`$ provide independent measurements of $`\sigma _{\perp }`$ and $`ϵ_{\perp }`$ (see eq. (2)) that agree within 5% with the direct measurement of these parameters using the Frederiks transition. The Nusselt numbers as functions of $`V`$ are then calculated by subtracting unity from the ratios $`I_r/I_r^0`$ and $`I_i/I_i^0`$ .
By fitting another straight line to $`𝒩_r`$ ($`𝒩_i`$) in the region where it deviates from zero, we define $`V_c`$ as where this line crosses zero (see the insets in Figs. 1 and 2). For one ramp of $`V`$, the three values of $`V_c`$ determined from the Nusselt numbers and the traditional shadowgraph technique agree with each other within 0.01%.
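A sketch of this threshold determination on synthetic data (baseline fit well below onset, a straight line fitted to the Nusselt number where it deviates from zero, and $`V_c`$ taken as the zero crossing; all numbers illustrative):

```python
import numpy as np

def onset_voltage(V, I, V_base_max, N_min=0.005):
    """Fit baseline I0(V) for V << V_c, then take the zero crossing of a
    straight line fitted to the reduced Nusselt number above onset."""
    base = np.polyfit(V[V < V_base_max], I[V < V_base_max], 1)
    N = I/np.polyval(base, V) - 1.0            # reduced Nusselt number
    slope, intercept = np.polyfit(V[N > N_min], N[N > N_min], 1)
    return -intercept/slope

# synthetic ramp: linear baseline with onset at V_c = 10 V (illustrative)
Vc_true = 10.0
V = np.linspace(1.0, 12.0, 200)
I = 2e-8*V*(1.0 + 0.05*np.clip(V - Vc_true, 0.0, None))
Vc_est = onset_voltage(V, I, V_base_max=8.0)
```

On real data the fit region would of course be restricted to below the first “knee” marking a secondary instability.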
Our apparatus did not reach sufficiently large $`V`$ to measure the crossover to the dielectric regime (see e.g. ); we therefore estimated a characteristic “cutoff” frequency $`\omega _c`$ by fitting $`V_c(\omega )`$ to the function $`A/(\omega -\omega _c)`$. We found typically $`\omega _c/(2\pi )=645\mathrm{H}\mathrm{z}`$, but during the course of taking the measurements (2-3 months) this quantity varied by $`\pm 10\%`$, probably in connection with variations of the electrical parameters, especially the conductivities.
In Fig. 2 we plot both the real and imaginary Nusselt numbers vs $`ϵ`$. While $`𝒩_r`$ is always observed to be positive, for the data set shown $`𝒩_i`$ is negative. In some cases (discussed subsequently) it becomes positive. Note also that $`𝒩_r`$ is at least ten times larger in magnitude than $`𝒩_i`$.
Close to threshold, i.e. for $`0\le ϵ\lesssim 0.1`$, both Nusselt numbers are proportional to $`ϵ`$ as shown in the inset for the real Nusselt number: this confirms the supercritical law $`A\propto \sqrt{ϵ}`$. The variations of the corresponding slopes $`𝒩_r/ϵ`$ and $`𝒩_i/ϵ`$ vs $`\omega \tau _0`$ are given in Fig. 3, which represents the results of several ramps in $`ϵ`$ at each frequency. For $`ϵ\gtrsim 0.1`$ the curves deviate from straight lines; the “knees” in the curves clearly indicate the onset of secondary instabilities. Specifically, the two arrows shown in Fig. 2 correspond to the onset potential differences for the zig-zag instability and the spontaneous generation of dislocations associated with the so-called “defect chaos” .
To compare our experimental results with the SM calculations, we used the elastic constants and the viscosities measured for MBBA at $`28^{}\mathrm{C}`$ in , and the electric parameters that we determined independently (eq. (7)). Varying those parameters within the stated uncertainties, we calculate (with the fitting procedure defined above) for the “cutoff” frequency $`\omega _c/(2\pi )=730\pm 120\mathrm{H}\mathrm{z}`$, in agreement with the measured value. Systematically varying the electrical parameters within the experimental error bars, we also calculate, with the weakly nonlinear numerical code introduced above, the bands of possible values of $`𝒩_r/ϵ`$ and $`𝒩_i/ϵ`$. These bands are drawn in gray on Fig. 3. The extremal values turn out to be obtained by variation of $`\sigma _a^{\prime }`$ only, with the upper (lower) curves for both $`𝒩_r/ϵ`$ and $`𝒩_i/ϵ`$ obtained for the largest (smallest) value of $`\sigma _a^{\prime }`$ . This is consistent with the fact that the leading positive terms controlling the Nusselt numbers, as seen in the approximate formulae eqs. (4), (5), are proportional to $`\sigma _a^{\prime }`$ . Note also that for large $`\sigma _a^{\prime }`$ we expect $`𝒩_i`$ to be positive at low frequencies, while $`𝒩_i`$ is always negative for small $`\sigma _a^{\prime }`$ . There is good agreement between experiment and theory concerning the imaginary Nusselt number $`𝒩_i`$ ; on the other hand the real Nusselt number $`𝒩_r`$ decreases more abruptly in the experiments than in the theory. However, the agreement obtained for $`𝒩_i`$ at all frequencies and for $`𝒩_r`$ at small frequencies is particularly significant since (contrary to the standard approach in nematic electroconvection, where usually $`\sigma _{\perp }`$ is fitted) no adjustable parameters have been used.
In conclusion, electric Nusselt number measurements are validated as a new and powerful method of characterizing electroconvection. This wholly quantitative technique stands in contrast to traditional optical methods, which become quantitative only in certain limiting cases. The technique affords a precise determination of the threshold voltage for the onset of electroconvection as well as of secondary instabilities. Nusselt number measurements also represent an important quantitative tool for testing competing theoretical descriptions of electroconvection. Here, with only the limitation of relying on tabulated values of some material parameters, we have shown that the standard model for electroconvection gives satisfactory predictions of the Nusselt numbers near onset. One conspicuous explanation for the remaining discrepancies may be that nematic liquid crystals are quite clearly electrolytic conductors, and thus the Ohmic conduction assumed in the standard model introduces an important approximation. It is therefore clearly of interest to extend the calculations presented here within the so-called weak-electrolyte model . Future directions of this work also include employing liquid crystal materials for which the applicability of the weak electrolyte model has been established, and experiments in the highly nonlinear, dynamical scattering regimes that occur at very large $`ϵ`$ .
ACKNOWLEDGMENTS
J. T. G. and N. G. acknowledge technical assistance from A. R. Baldwin. Their work was supported in part by Kent State University and the Ohio Board of Regents. E. P. thanks B. Dressel and W. Pesch for very fruitful discussions and comparisons with their code.
# High 𝑝_𝑇 hadron spectra in high-energy heavy-ion collisions
## 1 Introduction
Large-$`p_T`$ partons or jets are good probes of the dense matter formed in ultra-relativistic heavy-ion collisions . The study of parton energy loss can shed light on the properties of the dense matter in the early stage of heavy-ion collisions. Large $`p_T`$ single-inclusive particle spectra in nuclear collisions are sensitive to parton energy loss. They also provide a crucial test of whether any thermalization takes place in the initial stage of heavy-ion collisions.
At low $`p_T`$ the pQCD parton model becomes invalid and alternative approaches like thermal fire-ball models have to be used, from which one can extract the freeze-out temperature, collective radial flow velocity and chemical potential . Apparently, these thermal fire-ball models cannot be applied to describe hadron spectra at large $`p_T`$. Therefore, it is very important to investigate how well a pQCD parton model can describe hadron spectra in $`pp`$ collisions and their modification in $`pA`$ and $`AA`$ collisions, and where the transition between hard and soft hadron production happens. In particular, the impact-parameter or $`A`$ dependence of the spectra may be unique in distinguishing the parton model from thermal fire-ball or hydrodynamic models. One can then at least draw a quantitative conclusion about the validity of different models in different $`p_T`$ ranges. The values of temperature and flow velocity extracted from a fire-ball model analysis, for example, will then have to be viewed with caution and an awareness of their limitations.
## 2 Hadron Spectra in $`pp`$ Collisions
In a pQCD parton model, the inclusive particle production cross section in $`pp`$ collisions is given by
$`{\displaystyle \frac{d\sigma _{pp}^h}{dyd^2p_T}}`$ $`=`$ $`K{\displaystyle \underset{abcd}{\sum }}{\displaystyle \int 𝑑x_a𝑑x_bd^2k_{aT}d^2k_{bT}g_p(k_{aT},Q^2)g_p(k_{bT},Q^2)}`$ (1)
$`f_{a/p}(x_a,Q^2)f_{b/p}(x_b,Q^2){\displaystyle \frac{D_{h/c}^0(z_c,Q^2)}{\pi z_c}}{\displaystyle \frac{d\sigma }{d\widehat{t}}}(ab\to cd),`$
where $`D_{h/c}^0(z_c,Q^2)`$ is the fragmentation function of parton $`c`$ into hadron $`h`$ as parameterized in from $`e^+e^{-}`$ data, and $`z_c`$ is the momentum fraction of a parton jet carried by a produced hadron. We choose the momentum scale as the transverse momentum of the produced parton jet, $`Q=p_T/z_c`$. We also use a factor $`K\approx 2`$ (unless otherwise specified) to account for higher-order QCD corrections to the jet production cross section.
One normally assumes the initial $`k_T`$ distribution $`g_N(k_T)`$ to have a Gaussian form. In this study we relax the Gaussian form by assuming a variance which depends on $`p_T`$, leading effectively to a non-Gaussian distribution,
$$\langle k_T^2\rangle _N(Q)=1(\mathrm{GeV}^2)+0.2\alpha _s(Q^2)Q^2.$$
(2)
The parameters are chosen to reproduce the experimental data at around SPS energies.
Shown in Fig. 2 are our calculated spectra for charged pions as compared to the experimental data for $`p+p`$ collisions at $`E_{\mathrm{lab}}=`$200 GeV. Without the initial $`k_T`$ smearing the calculations significantly underestimate the experimental data. This is because the QCD spectra are very steep at low energies, and even a small amount of initial $`k_T`$ can substantially increase the final spectra. As the energy increases, the QCD spectra become flatter and a small amount of initial $`k_T`$ does not change the spectra much. Such parton model calculations fit very well at energies from $`\sqrt{s}=20`$ to $`1800`$ GeV for $`pp`$ and $`p\overline{p}`$ collisions . The $`p_T`$ dependence of the $`\pi ^{-}/\pi ^+`$ ratio is another indication of the dominance of valence quark scattering in large $`p_T`$ hadron spectra at SPS energies.
## 3 $`pA`$ and $`AA`$ Collisions
We assume that the inclusive differential cross section for large $`p_T`$ particle production is still given by hard parton-parton scattering, except that the initial transverse momentum $`k_T`$ of the beam partons is broadened. Assuming that each scattering provides a $`k_T`$ kick which also has a Gaussian distribution, we can in effect just change the width of the initial $`k_T`$ distribution,
$$\langle k_T^2\rangle _A(Q^2)=\langle k_T^2\rangle _N(Q^2)+\delta ^2(Q^2)(\nu _A(b)-1).$$
(3)
The broadening is assumed to be proportional to the number of scatterings $`\nu _A(b)`$ the projectile suffers inside the nucleus. We will use the following $`k_T`$ broadening per nucleon-nucleon collision,
$$\delta ^2(Q^2)=0.225\frac{\mathrm{ln}^2(Q/\mathrm{GeV})}{1+\mathrm{ln}(Q/\mathrm{GeV})}\mathrm{GeV}^2/c^2.$$
(4)
The $`p_T`$ dependence of the broadening reflects the fact that the distribution of the soft $`k_T`$ kick in each scattering does not necessarily have a Gaussian form.
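Equations (2)-(4) can be combined into a single broadened-width routine. A sketch, where the one-loop running coupling with $`n_f=4`$ and $`\mathrm{\Lambda }^2=0.04`$ GeV² is our illustrative choice, not taken from the paper:

```python
import math

def alpha_s(Q2, nf=4, lambda2=0.04):
    """One-loop running coupling; nf and Lambda^2 (GeV^2) are illustrative choices."""
    return 12.0*math.pi/((33.0 - 2.0*nf)*math.log(Q2/lambda2))

def kT2_N(Q):
    """Eq. (2): intrinsic <kT^2> in a nucleon; Q in GeV, result in GeV^2."""
    return 1.0 + 0.2*alpha_s(Q*Q)*Q*Q

def delta2(Q):
    """Eq. (4): kT broadening per nucleon-nucleon collision in GeV^2 (Q > 1 GeV)."""
    lnQ = math.log(Q)
    return 0.225*lnQ**2/(1.0 + lnQ)

def kT2_A(Q, nu):
    """Eq. (3): broadened width after nu scatterings inside the nucleus."""
    return kT2_N(Q) + delta2(Q)*(nu - 1.0)
```

For a single scattering ($`\nu _A=1`$) the nucleon width is recovered, and the width grows monotonically with the number of scatterings.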
The above calculation has been shown to reproduce the nuclear modification of the hadron spectra in $`pA`$ data very well . Shown in Fig. 2 are the calculated inclusive spectra for produced $`\pi ^0`$ in $`A+B`$ collisions. The pQCD parton model calculations with the $`k_T`$ broadening due to initial multiple scattering (solid lines) agree very well with the experimental data (WA80 and WA98) at $`p_T`$ above 2 GeV/$`c`$. No parton energy loss has been assumed in the calculations. The dashed lines are the spectra in $`pp`$ collisions at the same energy multiplied by the averaged number of binary $`NN`$ collisions given by the nuclear geometrical factor. The difference between the solid and dashed lines is simply caused by the effects of $`k_T`$ broadening and the nuclear modification of parton distributions inside nuclei.
## 4 $`A`$ Scaling of Hadron Spectra
According to the pQCD parton model, the hadron spectra at large $`p_T`$ should scale with the number of binary nucleon-nucleon collisions if no nuclear effect is included. So if one defines a ratio,
$$R_{AB}(p_T)\frac{d\sigma _{AB}^h/dyd^2p_T}{N_{\mathrm{binary}}d\sigma _{pp}^h/dyd^2p_T}$$
(5)
between spectra in $`AB`$ and $`NN`$ collisions normalized by the averaged number of binary collisions $`N_{\mathrm{binary}}`$, the ratio will be approximately one for spectra from hard parton collisions. Because of absorptive processes, low $`p_T`$ particle production, which can be considered coherent over the dimension of the nuclear size, has a much weaker $`A`$-dependence. In the wounded-nucleon model, soft particle production is proportional to the average number of wounded nucleons, and the above ratio becomes
$$R_{AB}0.5(1/A^{1/3}+1/B^{1/3})$$
(6)
So the ratio as defined in Eq. (5) will be smaller than one at low $`p_T`$ and larger than one at large $`p_T`$. Such a general feature has been found to be almost universal in both $`pA`$ and $`AB`$ collisions. One interesting feature of this analysis is that the transition from soft coherent interactions to hard parton scattering happens roughly around $`p_T=1.5`$ GeV/$`c`$. This is also where hadron spectra in $`pp`$ collisions start to deviate from a pure exponential form. One can then expect that above this value of $`p_T`$ the underlying mechanism of hadron production is dominated by hard processes.
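For concreteness, the low-$`p_T`$ (wounded-nucleon) limit of Eq. (6) is trivial to evaluate; this sketch and its function name are ours:

```python
def r_ab_soft(A, B):
    """Wounded-nucleon (low-pT) limit of the ratio R_AB, Eq. (6)."""
    return 0.5 * (A ** (-1.0 / 3.0) + B ** (-1.0 / 3.0))

# pp collisions (A = B = 1) give exactly 1; heavy systems are strongly suppressed
```

For a heavy symmetric system such as Pb+Pb (A = B = 208) this gives about 0.17, so the rise of the ratio from well below one at low $`p_T`$ toward (and above) one at higher $`p_T`$ is a large, easily visible effect.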
At even higher $`p_T`$, the effect of multiple scattering becomes less important, so the ratio $`R_{AB}`$ will approach 1 again (higher-twist effects should be suppressed by $`1/p_T^2`$), as shown by the $`pA`$ data. At SPS energy, however, this feature cannot be fully revealed because of the kinematic limit; one only sees the initial increase of the ratio due to the transition from soft to hard processes. Such a change of spectra from $`pp`$ to $`pA`$ and $`AA`$ collisions in a limited kinematic range looks very similar to the effect of collective flow in a hydrodynamic model. However, models motivated by parton scattering predict a definite $`A`$-dependence of this nuclear modification. One should therefore be cautious about the values of temperature and flow velocity extracted from such a fire-ball analysis of the spectra, especially if one has to rely on the shape of the spectra in the $`p_T`$ region around 1 GeV/$`c`$.
One can also observe that the comparison of the parton model calculation with the experimental data does not show any evidence of parton energy loss. This implies that the lifetime of the dense partonic matter could be shorter than the mean free path of the propagating parton. It also shows that the dense hadronic matter, which exists for a period of time in the final stage of heavy-ion collisions, does not cause any apparent parton energy loss or jet quenching. If one observes a dramatic suppression of high $`p_T`$ hadron spectra at the BNL RHIC energy as predicted, then it will clearly indicate an initial condition very different from what has been reached at the CERN SPS energy.
# Area preservation in computational fluid dynamics
## 1 Introduction
When a smooth field $`\omega (x,y)`$ is advected by an area-preserving flow, the area within each contour of $`\omega `$ is preserved. This is seen in pure advection and in the Euler equations, for example, and is important in the numerical solution of two-phase free boundary problems, where the total volume of each fluid should be preserved. Yet, although the advection problem has been addressed in probably thousands of papers, and very accurate, stable, and efficient methods are known, no existing numerical methods take the area-preservation property into account. In this Letter we present an initial study containing two methods which do preserve (a discrete analog of) area. Although they are not, presumably, competitive with the best existing methods for advection, the results are extremely promising.
The configuration space of an inviscid incompressible fluid is the group $`𝒟_\mu `$ of volume-preserving diffeomorphisms of the fluid’s domain; the ‘Arnold’ picture, in which the Euler equations are geodesic equations on this group equipped with the kinetic energy ($`L_2`$) metric, is treated in . The configuration at any time is a volume-preserving rearrangement of the initial condition. Existing Eulerian numerical methods do not preserve any discrete analogue of this property. This is particularly relevant in two dimensions, where area preservation leads to an infinite number of conserved quantities, the generalized enstrophies.
We consider a two-dimensional fluid with divergence-free velocity field $`𝐮=(u,v)`$, stream function $`\psi `$ (i.e. $`u=\psi _y`$, $`v=-\psi _x`$), and some quantity $`\omega `$, which we call the vorticity, which is advected by the fluid:
$$\dot{\omega }+𝐮\cdot \nabla \omega =\dot{\omega }+J(\omega ,\psi )=0,$$
(1)
where the Jacobian
$$J(a,b)=\frac{\partial (a,b)}{\partial (x,y)}.$$
(2)
In the two situations we shall consider, this is a Hamiltonian system with Poisson bracket
$$\{F,G\}=\int \omega \,J(\delta F,\delta G)\,dx\,dy$$
(3)
and Hamiltonian
$$H=\frac{1}{2}\int \psi \omega \,dx\,dy$$
where the stream function $`\psi `$ is either a given function $`\psi =\psi (x,y,t)`$, in which case Eq. (1) is the Liouville (advection) equation, or is determined by the Poisson equation $`\nabla ^2\psi =-\omega `$, in which case (1) is the two-dimensional Euler equation. Other two-dimensional flows such as the shallow water and semi-geostrophic equations also possess a quantity $`\omega `$, called a potential vorticity, that is advected according to Eq. (1). There are also applications to level-set methods, in which $`\omega `$ is not a physical variable but is introduced so that the curve $`\omega (x,y)=c`$ can indicate a free boundary.
The Casimirs of the Poisson bracket (3) are conserved quantities of the PDE (1). These can be variously written as
$$C_f=\int f(\omega )\,dx\,dy$$
for any function $`f`$ such that $`C_f`$ exists, as
$$C_n=\int \omega ^n\,dx\,dy,$$
called the generalized enstrophies ($`C_2`$ is the usual enstrophy), or as the areas enclosed by each vorticity contour
$$A(c)=\int _{\omega \ge c}1\,dx\,dy.$$
They all reflect the fact that $`\omega `$ is being advected by an area-preserving vector field and can only reach states which are area-preserving rearrangements of its initial state. That is,
$$\omega (x,t)=\omega (\phi _t^{-1}(x),0),$$
where $`\phi _t`$ is the time-$`t`$ flow of the vector field $`𝐮`$.
The famous Arakawa Jacobian is an Eulerian finite difference approximation of Eq. (2) which preserves discrete analogues of the energy $`H`$ and the enstrophy $`C_2`$ . It is known to preserve the mean of the energy spectrum and to prevent some nonlinear instabilities. However, the other conserved quantities are not preserved and their role in the dynamics is not known .
Area preservation can also be studied in a Lagrangian framework—for example, point vortex methods could be said to be area-preserving—but Lagrangian schemes carry a lot of extra information (the particle paths) which should be decoupled from the dynamics. The dimension of the phase space is halved in Eulerian form, and further reduced by preserving (discrete analogues of) the Casimirs. For ODEs, it is well established that the best long-time results are obtained by working in the smallest possible phase space .
The Hamiltonian picture has been described by Marsden and Weinstein . The configuration space is the group $`𝒟_\mu `$. The Euler equations in Lagrangian form are a canonical Hamiltonian system on $`T^{}𝒟_\mu `$, and in Eulerian form are a Lie-Poisson system on the dual of the Lie algebra of $`𝒟_\mu `$, which is identified with the space of vorticities. The coadjoint orbits of this space are the level sets of the Casimirs, each of which is a symplectic manifold. Discretizations of the Eulerian form are not, in general, Hamiltonian systems, nor do they have conserved quantities corresponding to the Casimirs (although there is one interesting Hamiltonian discretization, the sine-Euler equations ).
Therefore we forget about the Hamiltonian structure and study the Casimirs—the area-preservation—and present two models in which a discrete analogue of the areas $`A(c)`$ is preserved. The first (Section 2), based on a literal rearrangement of cells, is interesting in that it gives a fully-discrete, cellular-automata-like model of an incompressible fluid. It does not preserve smoothness of the vorticity field (although filamentation and turbulence mean that it can’t usually stay very smooth anyway). A smooth version (Section 3) is based on computing an approximation of $`A(c)`$ which is smooth as a function of $`c`$, and relabelling the vorticity field so that $`A(c)`$ is constant in time. It is tested on the Liouville equation and prevents the appearance of large spurious maxima and minima in the vorticity field during its evolution.
## 2 The cell rearrangement model
Both of the models presented here are projection schemes. The vorticity is evolved by any sensible scheme for some short time $`t`$ (e.g., 1–10 time steps), and then projected onto some space of rearrangements of the original vorticity.
In this section we consider the vorticity field to be piecewise constant on a set of fixed cells, which for convenience we take to be squares with side $`h`$. A (minuscule!) subset of the rearrangements of the initial condition is given by the permutations of the cells. However, these can be naturally associated with the fluid flow. For, consider the area-preserving map $`\phi `$ which is the time-$`t`$ flow of the fluid. According to a theorem of Lax , there is a mapping $`P`$ which permutes cells and which satisfies
1. $`P(C)\phi (C)\mathrm{}`$ for all cells $`C`$; and
2. $`\|P(x)-\phi (x)\|\le \mathrm{sup}_{y,z\in C}\|\phi (y)-\phi (z)\|+\sqrt{2}h`$ for all $`x\in C`$.
The dynamics of such lattice maps are often studied. For example, if the continuous map $`\phi `$ is iterated on a computer, it will not be exactly a bijection or exactly area-preserving, due to round-off error. By replacing it with a lattice map and examining the limit $`h0`$ the effects of roundoff error can be studied.
The easiest way to construct lattice maps is as a composition of shears $`x_i^{\prime }=x_i`$ for $`i=1,\mathrm{},d`$, $`x_j^{\prime }=x_j+\lfloor f_j(x_1,\mathrm{},x_d)\rceil `$ for $`j=d+1,\mathrm{},n`$, where $`\lfloor x\rceil `$ denotes the nearest lattice point to $`x`$. This would be suitable, for example, if $`\phi `$ itself were approximated by a product of shears, as is the flow of separable Hamiltonians $`H=H_1(p)+H_2(q)`$ (the flow of the Hamiltonian vector field corresponding to each $`H_i`$ is a shear). This is very fast and the permutation need not be constructed explicitly.
However, in the present case $`\phi `$ can only be obtained by integrating the Lagrangian particle paths for a short time $`t`$, and an explicit approximating lattice map seems to be unobtainable. Scovel suggested using maps of the form $`x^{\prime }=x+J\nabla S((x+x^{\prime })/2)`$ for a suitable Poincaré generating function $`S`$ (here $`S=(\mathrm{\Delta }t)\psi `$ would give a good approximation of the time-$`\mathrm{\Delta }t`$ flow of the stream function $`\psi `$). However, this nonlinear, discrete equation does not seem to have solutions in general.
Thus, it seems that one must laboriously construct a table of the permutation. An algorithm which does this is described in . Its running time is $`𝒪(N^3)`$, where $`N=𝒪(1/h^2)`$ is the number of cells. One must construct lists of candidate cells (e.g., all those that intersect $`\phi (C)`$) and make successive choices from these lists, backtracking when no choices remain. While practical for moderate $`N`$ when the dynamics of the lattice map are going to be studied intensively, in the present application $`\phi `$ changes at every time step; searching for a completely new permutation every time is too expensive. This approach has been explored by Turner .
Luckily, there is a way out of this impasse, using the extra physical information attached to each cell: the vorticity itself. The only use of the permutation $`P`$ is to update the vorticity field $`\omega `$ by $`\omega \mapsto \stackrel{~}{\omega }`$, $`\stackrel{~}{\omega }\circ P=\omega `$, in order that the distribution of vorticity values remains constant. This can be achieved directly, without actually constructing a $`P`$ which approximates $`\phi `$, by the following algorithm. Let $`\mathrm{rank}_t(c)`$ be the number of cells with vorticity greater than or equal to $`c`$ at time $`t`$, i.e.,
$$\mathrm{rank}_t(c)=\mathrm{\#}\{j:\omega (x_j,t)\ge c\},$$
with ties broken arbitrarily to make $`\mathrm{rank}_t`$ an invertible function onto $`\{1,\mathrm{},N\}`$. Then:
1. Update the field $`\omega `$ for time $`t`$ by any standard Eulerian method; and
2. let $`\stackrel{~}{\omega }_j=\mathrm{rank}_0^1(\mathrm{rank}_t(\omega _j))`$.
The new field $`\stackrel{~}{\omega }`$ can be constructed in time $`N\mathrm{log}N`$ by sorting the two lists of vorticity values at times $`0`$ and $`t`$. The largest current value is replaced by the largest original value, and so on. (Other updates, based on minimizing $`\|\stackrel{~}{\omega }-\omega \|`$, are also possible.)
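A minimal sketch of this sorted-assignment update (our naming; the Eulerian evolution step in item 1 is omitted):

```python
import numpy as np

def rearrange(w0, wt):
    """Replace the evolved values wt by the permutation of the original
    values w0 with matching ranks: the k-th largest current value is
    replaced by the k-th largest original value."""
    order = np.argsort(wt.ravel())       # positions of current values, ascending
    out = np.empty(wt.size)
    out[order] = np.sort(w0.ravel())     # original values matched rank by rank
    return out.reshape(wt.shape)
```

By construction the output is an exact rearrangement of `w0`, so every generalized enstrophy of the cell model is preserved exactly.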
This algorithm can be regarded as constructing a permutation, albeit a permutation that has no relationship to the flow $`\phi `$. This is a truly finite-state model of an incompressible fluid: the state space is the permutation group $`S_N`$ and the fluid dynamics reduces to the dynamics of the map $`S_N\to S_N`$, defined above, parameterized by the initial distribution of vorticity values. It is an almost cellular-automata-like fluid model, although lacking the local update property of CAs. It has the aesthetic appeal of capturing the vorticity-rearrangement property perfectly in a naturally discrete way, of constructing a “discrete coadjoint orbit”, and it is very cheap.
However, these advantages are offset by a practical disadvantage: lack of smoothness. The new vorticity values are selected somewhat arbitrarily from the sorted list, and the new field may be rougher than the original. This is probably unavoidable, given the chosen fully discrete state space. The noise of this imposed roughness may swamp any gains from preserving the coadjoint orbits. However, in a turbulent flow with highly filamented vorticity, the loss of smoothness may not be significant. A second consequence of the lack of smoothness is that if $`|\omega (x,t)-\omega (x,0)|`$ is too small, then $`\stackrel{~}{\omega }=\omega `$—the field cannot be updated at all. The remapping interval $`t`$ must be large enough to allow some change in the configuration. For example, the flow map $`\phi `$ should move each cell across at least 2 cells so that the algorithm has some scope for finding a suitable permutation.
## 3 The vorticity relabelling model
The cell rearrangement model produces an area function $`A(c)`$ which is discontinuous—in fact, it is piecewise constant. To improve it, we need to
* produce a smoother approximation of $`A(c)`$. If $`\omega (x,y)`$ is a smooth function, we want an approximation of $`A(c)`$ which is as smooth and accurate as possible, using only the grid values $`\omega (x_i,y_j)`$; and
* project the vorticity function so that its area function $`A(c)`$ at time $`t>0`$ equals (or closely approximates) the initial area function.
### 3.1 Computing the areas enclosed by vorticity contours
We consider a compact domain $`\mathrm{\Omega }`$ with area 1, usually a square or torus, on which $`\omega `$ is bounded with range $`[\omega _{\mathrm{min}},\omega _{\mathrm{max}}]`$, and of smoothness $`C^r`$. It may be degenerate, e.g., constant on open sets.
###### Definition 1
The area function of the field $`\omega `$ is the area enclosed by the set $`\{(x,y):\omega (x,y)c\}`$, i.e.,
$$A_\omega :[\omega _{\mathrm{min}},\omega _{\mathrm{max}}]\to [0,1],\qquad A_\omega (c)=\int _{\omega (x)\ge c}1\,dx\,dy$$
$`A(c)`$ is strictly decreasing with respect to $`c`$. It is $`C^r`$ at regular (noncritical) values $`c`$, $`C^0`$ at nondegenerate critical values, and discontinuous at $`c`$ if the set $`\{x:\omega (x,t)=c\}`$ has positive area. (Lack of differentiability at critical values can be seen by studying $`\omega =-(x^2+y^2)`$, for which $`A(c)=-\pi c`$ for $`c\le 0`$ and $`0`$ for $`c>0`$.) Thus, its inverse $`A^{-1}`$ exists and is nonincreasing (i.e., more area must be enclosed by a lesser value of $`\omega `$.)
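As an illustrative check of Definition 1 (ours, not the paper's algorithm), the paraboloid example can be verified with a crude piecewise-constant counting estimate of the area function:

```python
import numpy as np

# omega = -(x^2 + y^2) on [-1,1]^2: the set {omega >= c} is the disk of
# radius sqrt(-c) for c <= 0, so A(c) = -pi*c there
n = 801
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
omega = -(X**2 + Y**2)

def area_count(c):
    # cell area times the number of grid points lying above the contour value
    return (2.0 / (n - 1))**2 * np.count_nonzero(omega >= c)
```

Here `area_count(-0.25)` agrees with π/4 ≈ 0.785 to about a percent; Section 3.1 replaces this rough counting estimate with the smoother and more accurate polygon-based one.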
Let $`\mathcal{I}`$ be an interpolation or approximation operator mapping grid functions to fields, i.e. functions on $`\mathrm{\Omega }`$.
###### Definition 2
The area function of the grid function $`\omega `$ is defined to be the area function of its interpolant, i.e.,
$$A_\omega :=A_{\mathcal{I}\omega }.$$
It automatically inherits the monotonicity properties of $`A`$. Let $`\mathcal{R}`$ be a restriction operator mapping fields to grid functions, usually by evaluating on the grid. Let $`\stackrel{~}{\omega }=\mathcal{I}\mathcal{R}\omega `$. The crucial observations are the following:
1. Choice of $`\mathcal{I}`$ can lead to $`A_{\stackrel{~}{\omega }}`$ being as smooth as $`A_\omega `$, and of any order of accuracy as an approximation;
2. If $`\stackrel{~}{\omega }`$ is piecewise linear, its contours are polygons, whose area can be found quickly for any contour topology;
3. If $`\stackrel{~}{\omega }`$ has polygonal contours, $`A_{\stackrel{~}{\omega }}`$ can be second order accurate and as smooth as $`A_\omega `$.
Item (1) is obvious, and is a consequence of existence of $`C^r`$ approximations to functions. An algorithm for finding the area enclosed by (unions of) polygons is given below. The most important point is (3), as it says that smooth interpolants, which are expensive in two dimensions, are not needed to compute a smooth area function.
Consider a grid function on a triangulation of $`\mathrm{\Omega }`$. Interpolating by piecewise polynomials along edges only, and constructing the interpolant whose contours are line segments within each triangle (whose graph is a “ruled surface”), yields an area function which is as smooth as the interpolant at vertex (grid point) values and analytic elsewhere. Thus, only smooth one-dimensional interpolation is needed, which is relatively cheap (e.g., $`C^1`$ can be achieved using local cubics).
Piecewise linear interpolation yields a $`C^0`$, second-order-accurate area function. However, it is better than its mere continuity might make it appear, since its derivative jumps at vertex values are only $`𝒪(h^2)`$ on a grid with spacing $`h`$. So, numerically, it is indistinguishable from a $`C^1`$ function. In practice, the most glaring jumps are in its second derivative at vertex values, not in the function itself. (See Fig. 2.) Piecewise linear interpolation seems to be suitable in practice and this is what we use in the tests below. (If the main computational grid is square, we triangulate using an extra vertex at the center of each cell, whose value is assigned by linear interpolation.) However, true $`C^1`$ area functions have been tested as well.
The great advantage of polygonal contours is that the area of a simple polygon with vertices $`𝐱_i`$, $`i=1,\mathrm{},n`$, is very easy to compute: it is
$$\frac{1}{2}\sum _{i=1}^{n}𝐱_i\times 𝐱_{i+1},$$
where $`𝐱_{n+1}:=𝐱_1`$. This can be seen by deriving it for a triangle, triangulating the polygon against a fixed point, and then using independence with respect to the fixed point. It can also be viewed as a discretization of
$$\int _\mathrm{\Omega }1\,dx\,dy=\frac{1}{2}\int _\mathrm{\Omega }d(x\,dy-y\,dx)=\frac{1}{2}\int _{\partial \mathrm{\Omega }}𝐱\times d𝐱.$$
However, it would be expensive to chase contours around the domain and construct a list of simple polygons. Instead, one can simply scan each triangle for occurrence of a contour, find its endpoints $`𝐱_1`$, $`𝐱_2`$, and accumulate $`𝐱_1\times 𝐱_2`$ with a sign determined by the sense of the triangle when its vertices are visited in order of increasing function values. This handles arbitrary contour topology. (Exception handling is needed when two vertices and the contour all have equal values.)
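The signed-area formula above takes only a few lines; this sketch (our naming) computes the area of one simple polygon, while the contour chasing and triangle scan described in the text are omitted:

```python
def polygon_area(pts):
    """Signed area of a simple polygon: 0.5 * sum_i of the cross products
    x_i x x_{i+1}, with vertices listed in order and x_{n+1} = x_1."""
    s = 0.0
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - y1 * x2
    return 0.5 * s
```

The sign encodes orientation, which is what allows the triangle-scanning version to accumulate contributions one contour segment at a time without assembling whole polygons.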
We are not sure if there is a similarly simple method with higher order contours. In practice, to get more than second order accuracy, we use Richardson extrapolation from a coarser grid.
For a list of contour values, the above algorithm involves scanning the cells once and accumulating areas of the relevant contours. If $`N_c`$ values of the area function are needed, and the grid size is $`𝒪(h)`$, then each cell will contain $`O(hN_c)`$ contours on average, so the computation takes time $`𝒪(N_c/h)`$. In practice, we take $`N_c=𝒪(1/h)`$ and build a function table, which is later interpolated as needed. Thus computing the areas takes $`𝒪(1/h^2)`$, i.e., it is linear in the number of grid points.
If $`\omega `$ is nearly constant on large areas, then $`A(c)`$ can be very steep, so it should be tabulated using adaptive stepping in $`c`$. An example of the $`C^0`$ estimate of $`A(c)`$ given by piecewise linear interpolation is shown in Fig. 2, together with the piecewise constant estimate given by simply counting the number of vertices where $`\omega >c`$. Its (numerical) derivative indicates its smoothness. Some care must be taken when interpolating to maintain monotonicity.
We have also computed smooth approximations of $`A_\omega (c)`$ for random $`\omega `$ fields whose contours have complicated topology.
### 3.2 Projecting to the space of rearrangements
After evolving $`\omega `$ for a short time $`t`$ with an Eulerian method, we have two grid functions, the vorticity at time $`0`$, $`\omega (0)`$, and at time $`t`$, $`\omega (t)`$. We wish to project $`\omega (t)`$ so that it is (an approximation of) a rearrangement of $`\omega (0)`$. The projection should be small and should not destroy smoothness. Traditional methods for enforcing constraints, such as steepest descents, appear to be completely infeasible because of the global and sensitive dependence of $`A(c)`$ on the vertex values of $`\omega `$. Our proposed method is a continuous version of the sorted assignment used in the cell rearrangement model of Section 2. In words, we compute the area enclosed by the contour through each vertex value and replace it by the value that originally enclosed that much area. The contour shapes and topologies do not change: only the values associated with each contour change.
###### Definition 3
The relabelling projection on grid functions $`\omega _i=\omega (x_i,t)`$ is defined by $`\omega (t)\mapsto \stackrel{~}{\omega }(t)`$, where
$$A_{\omega (0)}(\stackrel{~}{\omega }_i)=A_{\omega (t)}(\omega _i)$$
(4)
for each vertex $`i`$.
It is well defined by monotonicity of $`A(c)`$. It has an obvious continuum analog (replacing $`i`$ by $`x`$ in Eq. (4)), which if applied to every value of $`\omega `$ taken by a smooth vorticity field, with $`\omega (0)`$ and $`\omega (t)`$ both $`C^r`$, yields a new field $`\stackrel{~}{\omega }`$ that is $`C^r`$ away from critical points of $`\omega _0`$ and $`\omega _t`$ and $`C^0`$ at such critical points.
To compute a good approximation of this projection quickly, the current area function is tabulated and interpolated at the vertex values, and then $`A_{\omega (0)}^{-1}`$ (which, of course, does not change during the run) is evaluated by interpolation. Of course, we do not have a true projection in that $`\stackrel{~}{\stackrel{~}{\omega }}\ne \stackrel{~}{\omega }`$, because interpolation errors in the contours do change the contour shapes by a small amount when the vertex values are changed. We do not quite get $`A_{\omega (0)}(c)=A_{\stackrel{~}{\omega }(t)}(c)`$ for all $`c`$. However, these errors can be controlled independently of the discretization error in $`\omega `$, for example, by using a higher order approximation of $`A`$. In a numerical test, one application of Richardson extrapolation to the areas enclosed by piecewise linear contours gave $`|A_{\omega (0)}-A_{\stackrel{~}{\omega }(t)}|\le 10^{-4}`$ on a relatively coarse $`20\times 20`$ grid. By contrast, without the relabelling projection, errors in the area function rapidly reach order 1.
If the vorticity is evolved for a short time $`t`$ with a method of spatial order $`p`$, spatial errors, which are $`𝒪(th^p)`$, dominate the error in the area function. Thus, with $`t=o(1)`$, the projection only alters the field by $`o(h^p)`$, and the overall method (evolution followed by projection) is still consistent of the same order $`p`$. The projection cannot correct any errors in the shapes of the contours, but it can stop those errors growing further through propagation of the false distribution of vorticity values, which is particularly damaging for the 2D Euler equations, where those values determine the velocity field itself.
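A sketch of the tabulated projection (our naming; the adaptive tabulation and Richardson extrapolation mentioned above are omitted). It assumes both area functions are tabulated on ascending contour values `c` with decreasing areas `A`:

```python
import numpy as np

def relabel(w, c0, A0, ct, At):
    """Relabelling projection, Eq. (4): find w~ with A_0(w~) = A_t(w).
    (c0, A0) tabulates the initial area function, (ct, At) the current one;
    each c array ascending, each A array strictly decreasing."""
    a = np.interp(w, ct, At)                  # area enclosed by each vertex's contour now
    return np.interp(a, A0[::-1], c0[::-1])   # invert the monotone initial area function
```

If the two tabulated area functions coincide, the projection reduces to the identity, as it should.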
## 4 Numerical tests
Here we illustrate some short tests to validate our approach and show that it is indeed possible to compute and preserve area in an Eulerian method. We use a coarse ($`20\times 20`$) grid which barely resolves the solution, and a crude (second order) finite difference approximation to the spatial differences, in order to test whether the method can correct the large oscillations and area errors that result.
We solve the Liouville equation in $`\mathrm{\Omega }=[0,1]^2`$ with initial field $`\omega =\mathrm{exp}(-45(x-\frac{3}{4})^2-15(y-\frac{1}{2})^2)`$ advected by the velocity field with stream function $`\psi =\mathrm{sin}(\pi x)\mathrm{sin}(\pi y)`$. (See Figure 1.) This velocity field has shear, so $`\omega `$ rapidly rolls up into a tight spiral, mimicking the filamentation of vorticity in the Euler equations. The spatial derivatives in Eq. (1) are approximated by the Arakawa Jacobian, which is second order and preserves discrete analogues of energy and enstrophy. Although the discrete enstrophy $`\sum _i\omega _i^2`$ is preserved, this does not help the scheme preserve areas any better than (nonconservative) central differences do.
Particles at the maximum of $`\omega `$ have a period of about 0.75. We integrate with a second order method for 400 time steps of $`\mathrm{\Delta }t=0.003`$, or total time $`1.2`$, during which this maximum rotates 1.6 times around the centre of the square $`\mathrm{\Omega }`$. Spatial errors completely dominate the total error at $`t=1.2`$.
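The test configuration is easy to reproduce. This sketch sets up only the grid, the initial field, and the advecting velocity; the grid indexing and the use of centered differences for the velocity are our assumptions, and the Arakawa time stepping itself is omitted:

```python
import numpy as np

n = 20
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")

omega0 = np.exp(-45.0*(X - 0.75)**2 - 15.0*(Y - 0.5)**2)   # initial vorticity
psi = np.sin(np.pi * X) * np.sin(np.pi * Y)                # fixed stream function

h = x[1] - x[0]
u = np.gradient(psi, h, axis=1)    # u = psi_y
v = -np.gradient(psi, h, axis=0)   # v = -psi_x
```

Because the two difference operators act along different axes they commute, so this discrete velocity field is divergence-free to rounding error.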
Without any projection, oscillations rapidly develop and the distribution of vorticity values is not maintained at all well (see Figure 3(a)). A large minimum of $`\omega =-0.69`$ forms, next to a spurious local maximum of $`\omega =0.46`$. The initial maximum of 1 has not been preserved but has decayed to 0.87. The comparison between the initial and final area functions (see Figure 2) shows that the area within most vorticity contours is not preserved at all.
The area-preserving methods both involve periodically remapping the vorticity. If this period is too short (e.g. one time step), then the cell rearrangement model cannot update the vorticity at all. If it is too long, then not only the area but also the topology of the level sets can alter, which cannot be corrected by the present methods. Once a small island of vorticity has been created, for example, it must be advected by the flow.
We first consider the cell rearrangement model of section 2. Suppose the remapping is applied every $`N_r`$ time steps. This needs a large $`N_r`$ to yield a reasonably smooth remapped vorticity field; but if $`N_r`$ is too large then spurious maxima can evolve which are not removed by the remapping. With no remapping, this maximum reaches $`0.46`$. With $`N_r=50`$, it reaches $`0.26`$. With $`N_r=20`$, there is no isolated spurious maximum, but oscillations start to appear within the main island. These grow worse at $`N_r=10`$. Therefore, $`N_r=20`$ seems a reasonable balance, and the final field is shown in Figure 3(b). In one remapping period, the central peak moves across about 2 cells.
This remapping is very fast, but it does not maintain smoothness of $`\omega `$, as can be seen here. In fact, it is surprising that it works even as well as it does in this example. However, the lack of smoothness would not be a problem in problems involving poorly-resolved turbulent fields.
We consider now the vorticity relabelling model of section 3. In this model we are free to decrease the remapping interval $`N_r`$ as desired: we still obtain smooth results with $`N_r=1`$, for example. As $`N_r`$ is decreased, the results progressively improve. For $`N_r=\mathrm{}`$, 20, 10, and 5, the peak of the spurious maximum is at $`\omega =0.46`$, $`0.09`$, $`0.02`$, and $`0.002`$, respectively. (Because of its smooth interpolation, it cannot completely eliminate this maximum, as the cell rearrangement model does.) Results for $`N_r=10`$ are shown in Figure 3(d). The final field is very smooth, considering the coarse $`20\times 20`$ grid, and very plausibly represents an element of the original state composed with an area-preserving diffeomorphism. One contour of the exact solution (found by particle tracking) is shown in the background. The computed solution has clearly suffered far too much diffusion, a result of using diffusive, non-upwinded second differences to approximate the advection term. Nevertheless, it is impressive that such information can be extracted from the same method that produced Figure 3(a), by merely imposing some conservation laws.
Finally, Figure 3(c) shows the vorticity relabelling model applied to an even simpler spatial discretization, namely ordinary central differences. It is in fact more accurate than the Arakawa Jacobian (Fig. 3(d)), being slightly less diffusive. Thus, preserving areas lets one use much simpler finite differences and still maintain smooth, non-oscillatory solutions.
Any of the techniques presented here can be combined with a more sophisticated underlying Eulerian scheme. If we used a high-order, low-diffusion upwinding scheme, for example, then area errors would have been much less than in Fig. 3(a); but they would still increase over time. Applying the vorticity relabelling would still improve the solution.
## 5 Discussion
The methods discussed here take into account one large family of conservation laws. This possibility raises many questions. What is the effect of using these methods for very long times? What is their effect on other conservation laws such as energy and symplecticity? How well do they work on larger applications such as the shallow water equations? (For level-set applications, a simpler update, adding a constant to the advected field so that the area inside one particular level set is preserved, may be preferable.)
More theoretically, is it possible to regard the ‘equal area’ functions as defining a discrete phase space in which consistent approximations can be directly derived, instead of using brute force modification of existing methods? While desirable, this looks difficult, since we are not projecting to any well-defined manifold. Consider the subset of $`\mathbb{R}^{N^2+1}`$ defined by
$$A_\omega (\omega _i)=A_0(c),i=1,\mathrm{},N^2.$$
This does have dimension 1 in general, but is formidably curled up on itself. It may be better to think of the constrained phase space as the configurations lying within some small distance of a manifold of dimension $`N^2-𝒪(N)`$, as we are enforcing one curve’s worth of constraints.
Acknowledgements I am extremely grateful to Tom Hou and Arieh Iserles for bringing this problem to my attention, to Reinout Quispel for useful discussions and for providing the reference, and to Paul Turner who studied the cell rearrangement model in the course of his M.Sc. thesis. This research is supported by the Marsden Fund of the Royal Society of New Zealand. Part of it was undertaken when the author enjoyed the support of the MSRI, Berkeley. MSRI wishes to acknowledge the support of the NSF through grant no. DMS–9701755.
# The origin of the relativistic wind in gamma-ray bursts: MHD flow from a disk orbiting a stellar mass black hole?
## 1 Introduction
Among the sources which have been proposed to explain cosmic gamma-ray bursts (GRBs) the most popular are mergers of compact objects (neutron star binaries or neutron star – black hole systems) or massive stars which collapse to a black hole (hypernovae). In all cases, the resulting configuration is a stellar mass black hole surrounded by a thick torus made of stellar debris or of infalling stellar material partially supported by centrifugal forces.
If black hole + thick disk configurations are indeed at the origin of GRBs the released energy will ultimately come from the accretion of disk material by the black hole or from the rotational energy of the hole itself extracted by the Blandford-Znajek mechanism. In a first step the energy must be injected into a relativistic wind. The second step consists in the conversion of a fraction of the wind kinetic energy into gamma-rays via the formation of shocks, probably inside the wind itself. In the last step the wind is decelerated when it interacts with the interstellar medium and the resulting (external) shock is responsible for the afterglow observed in the X-ray, optical and radio bands.
The origin of the relativistic wind is certainly the most complex of the three steps. A few possible ideas have been proposed but none is presently fully conclusive. If the burst energy comes from matter accretion by the black hole, the annihilation of neutrino-antineutrino pairs emitted by the hot disk could be a way to inject energy along the system axis, in a region which can be expected to be essentially baryon free due to the effect of centrifugal forces. The low efficiency of this process however requires high neutrino luminosities and therefore short accretion time scales . Another possibility is to suppose that disk energy is extracted by a magnetic field amplified by differential rotation to very large values ($`B\sim 10^{15}`$ G). A magnetically driven wind could then be emitted from the disk, with a fraction of the Poynting flux eventually being transferred to matter. An alternative to accretion energy could be to directly tap the rotational energy of the black hole via the Blandford-Znajek mechanism. The available power then depends on the rotation parameter $`a`$ of the black hole and on the intensity of the magnetic field pervading the horizon . The purpose of this paper is to present an exploratory study of the case where a magnetically driven wind is emitted by the disk. Our approach is extremely simplified in comparison to the complexity of the real problem, so that our conclusions should be considered indicative only. We nevertheless expect to identify the key parameters which control the baryonic load of such a wind and to put constraints on the final values of the Lorentz factor which can be obtained.
## 2 Dynamics of the wind from the disk to the sonic point
To compute the mass loss rate and therefore estimate the amount of baryonic pollution we only need to follow the wind dynamics from the disk up to the sonic point. We write the wind equations with a number of simplifying assumptions: i) we assume that the disk is thin and that the field is poloidal with the most simple geometry, i.e. straight lines making an angle $`\theta (r)`$ with the plane of the disk, $`r`$ being the distance from the foot of the line to the disk axis (Fig. 1). The flow of matter is then guided along the magnetic field lines; ii) we use non-relativistic equations, since even at the sonic point $`v_s/c<0.1`$, but we adopt the Paczynski-Wiita potential for the black hole; iii) we consider that a stationary regime has been reached in the wind.
We then write the three flow equations in a frame corotating with the foot of the line:
* Conservation of mass
$$\rho vs(y)=\dot{m}$$
(1)
* Euler equation
$$v\frac{dv}{dy}=\gamma (y)r-\frac{1}{\rho }\frac{dP}{dy}$$
(2)
* Energy equation
$$v\frac{de}{dy}=\dot{q}(y)r+v\frac{P}{\rho ^2}\frac{d\rho }{dy}$$
(3)
where $`y=\mathrm{}/r`$ and $`\mathrm{}`$ is the distance along the field line; $`e`$ is the specific internal energy, $`\gamma (y)`$ the total acceleration (gravitational + centrifugal) and $`\dot{q}(y)`$ the power deposited in the wind per unit mass. Different sources of heating can be present such as neutrino captures on nucleons (if the disk is optically thick to neutrinos), neutrino-antineutrino annihilation and dissipation of kinetic or magnetic energy. Because the field and stream lines are coincident the function $`s(y)`$ is easily related to the field geometry through the conservation of magnetic flux. Finally, $`\dot{m}`$ is the mass loss rate from the disk per unit surface. Our equation of state which includes nucleons, relativistic electrons and positrons and photons is computed from the expressions given by .
As long as the inclination angle $`\theta `$ remains larger than $`\theta _1\simeq 60^{\circ }`$ ($`\theta _1`$ is exactly $`60^{\circ }`$ if a Newtonian instead of a Paczynski-Wiita potential is used for the black hole) the acceleration $`\gamma (y)`$ is negative up to $`y=y_1`$, after which the centrifugal force dominates. The sonic point of the flow is located just below $`y_1`$ (the relative difference $`(y_1-y_s)/y_1`$ never exceeds 1%). To solve the wind equations we first fix trial values of the temperature and density $`T_s`$ and $`\rho _s`$ at the sonic point, from which we get $`v=v_s`$. The position of the sonic point is obtained from the condition that the solution remains regular at $`y=y_s`$. The mass loss rate $`\dot{m}`$ is then fixed and the inward integration along a field line can be started. We observe that at some position $`y=y_{\mathrm{crit}}`$ the velocity begins to fall off rapidly while the temperature reaches a maximum $`T_{\mathrm{max}}\simeq T_\mathrm{D}(r)`$, where $`T_\mathrm{D}(r)`$ is the disk temperature. We adjust the values of $`T_s`$ and $`\rho _s`$ with the requirement that $`y_{\mathrm{crit}}`$ should be as close as possible to 0 and $`T_{\mathrm{max}}`$ to $`T_\mathrm{D}(r)`$.
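The Newtonian value $`\theta _1=60^{\circ }`$ quoted in the parenthesis is the classic criterion for centrifugally launched cold flows from a Keplerian disk (the Blandford–Payne criterion): the effective (gravitational + centrifugal) potential in the corotating frame initially decreases along a straight line only if the line is inclined at less than $`60^{\circ }`$ to the disk plane. A short numerical sketch, with units $`GM=r=\mathrm{\Omega }=1`$ chosen for illustration:

```python
import math

def delta_phi_eff(theta_deg, ell=0.01):
    """Change in effective potential a small distance ell along a straight
    field line inclined at theta_deg to the disk plane (Newtonian point
    mass, footpoint on a Keplerian disk; units GM = r_foot = Omega = 1)."""
    th = math.radians(theta_deg)
    x = 1.0 + ell * math.cos(th)      # cylindrical radius
    z = ell * math.sin(th)            # height above the disk
    R = math.hypot(x, z)              # spherical radius
    phi = -1.0 / R - 0.5 * x * x      # gravitational + centrifugal potential
    phi0 = -1.0 - 0.5                 # value at the footpoint
    return phi - phi0

# Below 60 deg the potential decreases outward (matter is flung out by
# rotation); above 60 deg it rises, so pressure/heating must do the launching.
print(delta_phi_eff(59) < 0, delta_phi_eff(61) > 0)   # -> True True
```

The sign change at exactly $`60^{\circ }`$ appears only at second order in $`\mathrm{}`$, because the first-order terms of gravity and the centrifugal force cancel for a Keplerian footpoint.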
The results presented below assume that the disk is optically thick to neutrinos between $`R_{\mathrm{in}}=3r_g`$ and $`R_{\mathrm{max}}=10r_g`$. No terms for kinetic or magnetic energy dissipation have been included, so that $`\dot{q}(y)`$ is limited to neutrino processes: capture on free nucleons, scattering on electrons and positrons, and neutrino-antineutrino annihilation (heating); neutrino emission by nucleons and annihilation of electron-positron pairs (cooling). The assumption that the disk is optically thick to neutrinos is probably justified for NS + NS or NS + BH mergers. It is much more questionable in the hypernova scenario, except for very high accretion rates or low values of the viscosity parameter ($`\alpha <0.01`$) as shown from the disk models computed by .
The adopted temperature distribution $`T_\mathrm{D}(r)`$ corresponds to a geometrically thin, optically thick disk
$$T_\mathrm{D}(r)=T_{*}\left(\frac{r_{*}}{r}\right)^{3/4}\left(\frac{1-\sqrt{\frac{r_{\mathrm{in}}}{r}}}{1-\sqrt{\frac{r_{\mathrm{in}}}{r_{*}}}}\right)^{1/4}$$
(4)
where $`T_{*}`$ is the temperature at $`r=r_{*}`$. The mass of the black hole is $`M_{\mathrm{BH}}=2.5M_{\odot }`$.
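Equation (4) is straightforward to evaluate; a short sketch with the parameters used later in the paper ($`r_{\mathrm{in}}=3r_g`$, reference radius $`4r_g`$, radii in units of $`r_g`$):

```python
def T_disk(r, T_star=2.0, r_star=4.0, r_in=3.0):
    """Disk temperature profile of Eq. (4), in MeV; radii in units of r_g."""
    shape = (r_star / r) ** 0.75
    irr = ((1 - (r_in / r) ** 0.5) / (1 - (r_in / r_star) ** 0.5)) ** 0.25
    return T_star * shape * irr

# T equals T_star at the reference radius and vanishes at the inner edge.
print(T_disk(4.0))              # -> 2.0
print(T_disk(3.0))              # -> 0.0
print(round(T_disk(10.0), 2))   # -> 1.36
```

The outer boundary of the neutrino-thick region, $`10r_g`$, is therefore still at a substantial fraction of the reference temperature.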
## 3 The mass loss rate
The solution for the mass loss rate as a function of $`r`$, $`T_\mathrm{D}(r)`$ and $`\theta (r)`$ takes the form
$$\dot{m}_{13}(x)\simeq 3.8\mu _{\mathrm{BH}}\left[\frac{T_\mathrm{D}(x)}{2\mathrm{MeV}}\right]^{10}f[x,\theta (x)]\simeq 3.8\mu _{\mathrm{BH}}\left[\frac{T_{*}}{2\mathrm{MeV}}\right]^{10}\left(\frac{r_{*}}{r}\right)^{15/2}\left(\frac{1-\sqrt{\frac{r_{\mathrm{in}}}{r}}}{1-\sqrt{\frac{r_{\mathrm{in}}}{r_{*}}}}\right)^{5/2}f[x,\theta (x)]$$
(5)
where $`\dot{m}_{13}`$ is the mass loss rate in units of $`10^{13}`$ g.cm<sup>-2</sup>.s<sup>-1</sup>, $`x=r/r_g`$ and $`\mu _{\mathrm{BH}}=M_{\mathrm{BH}}/2.5M_{\odot }`$. The geometrical function $`f[x,\theta (x)]`$ is normalized in such a way that it is equal to unity for $`x=4`$ and $`\theta =85^{\circ }`$. The mass loss rate is extremely sensitive to the value of the disk temperature. The tenth power dependence is in agreement with what is found for neutrino driven winds in spherical geometry . The dependence of $`\dot{m}`$ on inclination angle is also very strong as shown in Fig. 2 where $`\dot{m}`$ is represented (with $`T_{*}=2`$ MeV and $`r_{*}=4r_g`$) for two geometries of the field lines: constant $`\theta =85^{\circ }`$ and $`\theta `$ decreasing from $`90^{\circ }`$ to $`80^{\circ }`$ between $`r=3r_g`$ and $`r=10r_g`$.
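The steep tenth-power temperature dependence in Eq. (5) means the local mass flux reacts violently to modest temperature changes. A sketch, with the geometrical factor $`f`$ set to unity purely for illustration:

```python
def mdot13(T_D_MeV, mu_bh=1.0, f_geom=1.0):
    """Mass loss rate per unit disk surface (units of 1e13 g cm^-2 s^-1),
    from the fit of Eq. (5); f_geom stands in for f[x, theta(x)]."""
    return 3.8 * mu_bh * (T_D_MeV / 2.0) ** 10 * f_geom

# Halving the disk temperature suppresses the mass flux by 2**10 ~ 1000.
print(mdot13(2.0))                # -> 3.8
print(mdot13(2.0) / mdot13(1.0))  # -> 1024.0
```

This sensitivity is what allows the baryonic load, and hence the terminal Lorentz factor, to vary over orders of magnitude for small changes in the disk conditions.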
Since additional sources of heating can be present in the wind (viscous dissipation, reconnection of field lines, etc) we have also obtained a very simple and general analytical expression for the mass loss rate
$$\dot{m}\simeq \frac{\dot{e}}{\mathrm{\Delta }\mathrm{\Phi }}g$$
(6)
where $`\dot{e}`$ is the rate of thermal energy deposition (in erg.cm<sup>-2</sup>.s<sup>-1</sup>) between the plane of the disk ($`y=0`$) and the sonic point at $`y_s\simeq y_1`$; $`\mathrm{\Delta }\mathrm{\Phi }=\mathrm{\Phi }_1-\mathrm{\Phi }_0`$ is the difference of potential (gravitational + centrifugal) between $`y=0`$ and $`y=y_1`$. The $`g`$ factor, which is of the order of unity, depends on the distribution of energy injection between $`y=0`$ and $`y=y_1`$.
## 4 Average Lorentz factor of the wind
To estimate the Lorentz factor which can be reached by the wind one must be able to relate the injected energy to the mass loss rate. This can be done in the following way: we suppose that we observe a burst power in gamma-rays
$$\dot{\mathcal{E}}_\gamma =\frac{10^{51}}{4\pi }ϵ_{51}\mathrm{erg}.\mathrm{s}^{-1}.\mathrm{sr}^{-1}$$
(7)
Then, the power injected into the wind was
$$\dot{E}=2\times 10^{51}\frac{f_\mathrm{\Omega }^{0.1}}{f_\gamma ^{0.05}}ϵ_{51}\mathrm{erg}.\mathrm{s}^{-1}$$
(8)
where $`f_\mathrm{\Omega }^{0.1}`$ and $`f_\gamma ^{0.05}`$ are respectively the fraction $`\frac{\mathrm{\Delta }\mathrm{\Omega }}{4\pi }`$ of solid angle covered by the wind (in units of 0.1) and the efficiency for the conversion of kinetic energy into gamma-rays (in units of 0.05). Accretion by the black hole powers the wind, but at the same time viscous dissipation heats the disk, which cools by the emission of neutrinos. If neutrino losses represent a fraction $`\alpha `$ of the energy $`\dot{E}`$ injected into the wind we have (for an optically thick disk)
$$\dot{E}_\nu =\alpha \dot{E}=2\times 10^{51}\frac{f_\mathrm{\Omega }^{0.1}}{f_\gamma ^{0.05}}\alpha ϵ_{51}=2\int _{r_{\mathrm{in}}}^{r_{\mathrm{out}}}\frac{7}{8}\sigma T_\mathrm{D}^4(r)2\pi r\,dr$$
(9)
Substituting in (9) Eq. (4) for $`T_\mathrm{D}(r)`$ we obtain for the temperature at $`r_{*}=4r_g`$
$$T_{*}=1.72\mu _{\mathrm{BH}}^{-1/2}\left(\frac{f_\mathrm{\Omega }^{0.1}}{f_\gamma ^{0.05}}\alpha ϵ_{51}\right)^{1/4}\mathrm{MeV}$$
(10)
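Equation (10) can be evaluated directly; a sketch (the coefficient 1.72 MeV is taken from the text; the negative exponent on $`\mu _{\mathrm{BH}}`$ reflects the larger, hence cooler, emitting area of a more massive hole at fixed neutrino luminosity):

```python
def T_star_MeV(mu_bh=1.0, alpha=1.0, eps51=1.0, f_omega=1.0, f_gamma=1.0):
    """Reference disk temperature at r = 4 r_g from Eq. (10).
    f_omega and f_gamma are in units of 0.1 and 0.05 respectively."""
    return 1.72 * mu_bh ** -0.5 * (f_omega / f_gamma * alpha * eps51) ** 0.25

print(T_star_MeV())                      # -> 1.72
# If only 10% of the wind power is dissipated in the disk (alpha = 0.1):
print(round(T_star_MeV(alpha=0.1), 2))   # -> 0.97
```

The weak one-quarter power keeps $`T_{*}`$ near the MeV range over a wide span of burst parameters, but via the $`T_{*}^{10}`$ dependence of Eq. (5) even this mild variation moves the mass loss rate by large factors.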
The value of $`T_{*}`$ being known, $`\dot{m}`$ can be computed as a function of $`r`$ for a given field geometry. The total mass loss rate from the disk is then
$$\dot{M}=2\int _{r_{\mathrm{in}}}^{r_{\mathrm{out}}}\dot{m}(r)\,2\pi r\,dr=2.6\times 10^{26}\mu _{\mathrm{BH}}^3\left(\frac{T_{*}}{2\mathrm{MeV}}\right)^{10}\mathcal{R}$$
(11)
where
$$\mathcal{R}=\int _{r_{\mathrm{in}}/r_g}^{r_{\mathrm{out}}/r_g}f[x,\theta (x)]\,x\,dx$$
(12)
is a function of the field geometry. The average Lorentz factor is finally given by
$$\overline{\mathrm{\Gamma }}=\frac{\dot{E}}{\dot{M}c^2}=\frac{8500}{\mathcal{R}}\mu _{\mathrm{BH}}^2ϵ_{51}^{-3/2}\alpha ^{-5/2}\left(\frac{f_\gamma ^{0.05}}{f_\mathrm{\Omega }^{0.1}}\right)^{3/2}$$
(13)
The value of $`\mathcal{R}`$ is 56 for a constant inclination angle $`\theta =85^{\circ }`$ and 250 if $`\theta `$ decreases from $`90^{\circ }`$ to $`80^{\circ }`$ between $`r=3`$ and $`10r_g`$. These numbers show that large Lorentz factors can be reached, but only under quite restrictive conditions: quasi-vertical field lines, low $`\alpha `$ values (i.e. good efficiency for energy injection into the wind with little dissipation), and the necessity of beaming.
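With all scaling factors at their fiducial values Eq. (13) reduces to $`\overline{\mathrm{\Gamma }}=8500/\mathcal{R}`$, so the two geometries can be compared directly. A sketch (note that the exponents on $`ϵ_{51}`$ and $`\alpha `$ are negative: inserting Eq. (10) into Eq. (11) gives $`\dot{M}\mathrm{}T_{*}^{10}\mathrm{}(\alpha ϵ_{51})^{5/2}`$, so more dissipation means a hotter disk, a heavier wind and a lower $`\overline{\mathrm{\Gamma }}`$):

```python
def gamma_bar(R_geom, mu_bh=1.0, eps51=1.0, alpha=1.0,
              f_omega=1.0, f_gamma=1.0):
    """Average wind Lorentz factor from Eq. (13); R_geom is the geometry
    integral of Eq. (12), f's in units of 0.1 and 0.05 respectively."""
    return (8500.0 / R_geom * mu_bh ** 2 * eps51 ** -1.5
            * alpha ** -2.5 * (f_gamma / f_omega) ** 1.5)

print(round(gamma_bar(56)))    # constant theta = 85 deg      -> 152
print(round(gamma_bar(250)))   # theta falling from 90 to 80  -> 34
```

Only the quasi-vertical constant-inclination geometry reaches the $`\overline{\mathrm{\Gamma }}\mathrm{}>100`$ regime usually required for GRBs, and even then only for fiducial or smaller $`\alpha `$.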
More generally, Eq. (6) can also provide a simple and useful constraint on the terminal Lorentz factor. If the power $`\dot{e}`$ deposited below the sonic point represents a fraction $`x`$ of the total power $`\dot{e}_{\mathrm{tot}}`$ which is finally injected into the wind we get
$$\mathrm{\Gamma }\simeq \frac{\dot{e}_{\mathrm{tot}}}{\dot{m}c^2}\simeq \frac{\mathrm{\Delta }\mathrm{\Phi }/c^2}{gx}$$
(14)
Considering a line anchored at $`r=4r_g`$ with an inclination angle $`\theta =85^{\circ }`$ we obtain $`y_1=2.182`$ and $`\mathrm{\Delta }\mathrm{\Phi }/c^2=0.18`$, which implies that $`x`$ should not exceed $`10^{-3}`$ to have $`\mathrm{\Gamma }>100`$! This is clearly a very strong constraint on any mechanism of energy injection. The wind however remains relativistic for $`x\lesssim 0.1`$, but its Lorentz factor is then much too low to produce a cosmic GRB.
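The quoted $`y_1=2.182`$ can be reproduced numerically. A sketch assuming a Paczynski-Wiita potential $`\mathrm{\Phi }=-GM/(R-r_g)`$ with $`r_g`$ the Schwarzschild radius, a straight line at $`\theta =85^{\circ }`$, and a footpoint at $`r=4r_g`$ rotating at the local Keplerian rate:

```python
import math

def net_accel(y, r=4.0, theta_deg=85.0):
    """Outward (centrifugal - gravitational) acceleration along a straight
    field line, a distance ell = y*r from a footpoint at radius r.
    Units: GM = 1 and r_g (Schwarzschild radius) = 1, so radii are in r_g."""
    th = math.radians(theta_deg)
    ell = y * r
    x = r + ell * math.cos(th)              # cylindrical radius
    z = ell * math.sin(th)                  # height above the disk
    R = math.hypot(x, z)                    # spherical radius
    omega2 = 1.0 / (r * (r - 1.0) ** 2)     # Keplerian rate at the footpoint
    grav = (x * math.cos(th) + z * math.sin(th)) / (R * (R - 1.0) ** 2)
    return omega2 * x * math.cos(th) - grav  # > 0 once centrifugal wins

def y1(lo=0.1, hi=5.0):
    """Bisect for the point where the net acceleration changes sign."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if net_accel(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(round(y1(), 3))   # -> 2.182, matching the value quoted in the text
```

With the quoted $`\mathrm{\Delta }\mathrm{\Phi }/c^2=0.18`$ and $`g\sim 1`$, requiring $`\mathrm{\Gamma }>100`$ in Eq. (14) indeed confines the sub-sonic heating fraction to $`x\mathrm{}<2\times 10^{-3}`$.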
## 5 Discussion
We have computed the mass loss rate in a magnetically driven wind emitted by a disk orbiting a stellar mass black hole. Detailed results are given for the case where the disk is optically thick to neutrinos, but we have also obtained an approximate analytical expression, valid for any heating mechanism. From the mass loss rate the terminal Lorentz factor of the wind can be estimated. Large values of $`\mathrm{\Gamma }`$ require that severe constraints on the field geometry and the amount of heating below the sonic point be satisfied. Another potential problem, which was not addressed in the present paper, is that relativistic MHD winds, at least in their simplest version , can be quite inefficient in transferring magnetic energy into kinetic energy.
An optimistic view of the situation would be to consider that the difficulty of obtaining high Lorentz factors could just be a way to explain the apparent discrepancy between the birthrate of the sources in the hypernova scenario, $`10^{-3}`$ yr<sup>-1</sup>/galaxy (for very massive stars), and the observed GRB rate, $`\lesssim 10^{-6}`$ yr<sup>-1</sup>/galaxy. Beaming alone cannot account for the difference, which implies that the collapse of a massive star most generally fails to give a GRB.
The pessimistic view is naturally to conclude that the baryonic load of the wind emitted by the disk is so large that the Lorentz factor can never reach values of $`10^2`$ or more. The next step is then to rely on the Blandford-Znajek mechanism to produce the relativistic wind . One should be careful, however, that magnetic field lines coming from the disk and trapped by the black hole will also carry frozen-in wind material, leading to a “contamination” of the Blandford-Znajek process by the disk. A possible loophole could be that the accretion time scale to the black hole is so short that the stationary wind solutions we have obtained are not valid. The amount of material extracted from the disk could then be much smaller, still allowing a highly relativistic outflow to develop.
# New Herbig–Haro Objects and Giant Outflows in Orion
## 1 Introduction
The process of star formation is a highly disruptive event where both infall and outflow of material occur simultaneously in the production of a protostellar core. The outflow phase is characterised by the impact of high–velocity winds on the surrounding interstellar medium, which manifests as bipolar molecular outflows, Herbig–Haro (HH) objects and jets. Multi–wavelength observations have shown HH objects and jets to be regions of shock–excited gas emitting H$`\alpha `$ ($`\lambda `$6563), \[Oi\] ($`\lambda `$$`\lambda `$6300,6363) and \[Sii\] ($`\lambda `$$`\lambda `$6716,6731) in the visible and H<sub>2</sub> (2.12$`\mu `$m) in the infrared. Their energy sources range from deeply embedded protostars to optical T–Tauri and Herbig Ae/Be stars.
With the introduction of large format CCD detectors, wide–field imaging has shown that HH flows are more abundant and an order of magnitude larger than previously thought. Recent narrow band imaging of the NGC 1333 star–forming region (SFR) by Bally, Devine & Reipurth (1996) found a high concentration of HH objects within a 1/4 square degree region. A similar result was found by Yu, Bally & Devine (1997), who conducted a near-infrared H<sub>2</sub> (2.12$`\mu `$m) survey of the OMC–2 and OMC–3 regions in Orion. Based on well–studied flows such as HH 1/2, HH 34 and HH 46/47, it was generally thought that their extent ($`\sim `$ 0.3 pc) was typical of outflows from low–mass stars. Bally & Devine (1994) were the first to question this view with their suggestion that the HH 34 flow in Orion is actually 3 pc in extent. Their idea was confirmed with deep CCD imaging and proper motion studies of individual knots (Devine et al. 1997). To date, around 20 giant ($`>`$1 pc) HH flows have been associated with low–mass stars (Eislöffel & Mundt 1997; Reipurth et al. 1997).
A large number of giant HH flows (and their small–scale counterparts) may have dramatic effects on the stability and chemical composition of a giant molecular cloud (GMC). It has been suggested that outflows may provide a mechanism for self–regulated star–formation and large–scale bulk motions within GMCs (Foster & Boss 1996). It is therefore important to gain information on the distribution of outflows and particularly giant flows within SFRs. The new Anglo–Australian Observatory (AAO) and United Kingdom Schmidt Telescope (UKST) H$`\alpha `$ Survey of the Southern Galactic Plane (Parker & Phillipps 1998a) will be beneficial for such studies as it provides an unbiased search for new HH objects over entire SFRs with its wide–field and high resolution capabilities.
In this paper we concentrate on a search for new HH objects in the first, deep H$`\alpha `$ film of the Orion SFR. The distance to the Orion region lies between 320 and 500 pc (Brown, De Geus & De Zeeuw 1994). Here we adopt a distance of 470 pc based on known HH objects in the region (Reipurth 1999). Strong emission and reflection nebulosity in the region makes searching for HH objects difficult. Previous attempts at surveys for faint red nebulosities in L1630 and L1641 have used standard broad band IIIaF R plates (IIIaF emulsion and RG630 filter), which were limited to subregions clear of high background emission (Reipurth 1985; Malin, Ogura & Walsh 1987; Reipurth & Graham 1988; Ogura & Walsh 1991). The new, deep fine resolution H$`\alpha `$ films enable us to conduct a more complete survey for emission–line nebulosities for consequent follow–up observations.
In Section 2 we present a brief introduction to the specifics of the H$`\alpha `$ survey and details on observations and data reduction. Results are presented in Section 3 where individual objects are discussed. In Section 4 we make some general conclusions and references to future work.
## 2 Observations and Data reduction
### 2.1 The AAO/UKST H$`\alpha `$ survey
Under the auspices of the AAO, the UKST has recently embarked on a new H$`\alpha `$ survey of the Southern Galactic Plane, Magellanic Clouds and selected regions. No systematic high resolution H$`\alpha `$ survey has been carried out in the southern hemisphere since the pioneering work of Gum (1955) and Rodgers, Campbell & Whiteoak (1963). With the increase in resolution and sensitivity of differing wavelength technologies, there has been the need to perform an H$`\alpha `$ survey with similar attributes.
The unusually large, single–element H$`\alpha `$ interference filter is centred on 6590Å with a bandpass of 70Å. It is probably the largest filter of its type in use in astronomy. Coated onto a full field 356mm $`\times `$ 356mm RG610 glass substrate, the 305mm clear circular aperture provides a 5.5° field–of–view. Further details of the filter properties and specifications are given by Parker & Bland–Hawthorn (1998). The detector is the fine grained, high resolution Tech Pan film which has been the emulsion of choice at the UKST for the last 4 years. This is due to its excellent imaging, low noise and high DQE (e.g. Parker, Phillipps & Morgan 1995; Parker et al. 1998). Tech Pan also has a useful sensitivity peak at H$`\alpha `$ as it was originally developed for solar patrol work. Though electronic devices such as CCDs are the preferred detector in much of modern astronomy, they cannot yet match the fine resolution and wide–field coverage of the Tech Pan film and UKST combination.
Typical deep H$`\alpha `$ exposures are of 3 hours duration, a compromise between depth, image quality and survey progress as the films are still not sky–limited after this time. The Southern Galactic Plane survey requires 233 fields on 4 degree centres and will take 3 years to complete. Initial survey test exposures have demonstrated that the combination of high quality interference filter and Tech Pan film is far superior for the detection and resolution of faint emission features near the sky background than any previous combination of filter and photographic plate used for narrow band observations (Parker & Phillipps 1998a). It is the intention that the original films will be digitised using the Royal Observatory Edinburgh’s SuperCOSMOS facility (Miller et al. 1992). It is planned to release a calibrated atlas of digital data to the wider astronomical community as soon as possible.
### 2.2 Photographic astrometry and image reduction
For the Orion region, a deep 3–hour H$`\alpha `$ exposure was obtained on 1997 December 2nd during a period of good seeing. The plate (HA 17828) was centred at 05<sup>h</sup>36<sup>m</sup>, -0400’ (1950) and designated grade A based on standard UKST visual quality control procedures by resident UKST staff. Three independent visual scans of the film were carefully made by QAP, SLM and WJZ using an eyepiece and later a 10$`\times `$ binocular microscope. HH objects display a wide range of morphologies including knots, arcs and jets. A combined list of such features was produced and served as the basis for subsequent astrometry. The new H$`\alpha `$ images were then compared with deep non–survey UKST IIIaJ, IIIaF and IVN broad band copy plates of the same field to confirm the objects as true emission–line sources. The plates used and their characteristics are presented in Table 1.
Crude positions for each object were first determined using simple XY positions from the film and transformed to B1950 coordinates by use of the UKST program PLADAT. Accurate positions were then obtained by using SkyView FITS files of the surrounding region. This resulted in a positional accuracy of better than 2″ for each object. Digitised images of each source were then made using a video digitising system (Zealey & Mader 1997; 1998). This enabled us to process images via unsharp masking and histogram enhancement to recover the original detail as seen on the TechPan film.
### 2.3 CCD observations
#### 2.3.1 Optical
As the Orion region shows highly structured background emission, it is important we distinguish between photo–ionised filamentary structures and bona fide HH objects. This can be accomplished with H$`\alpha `$ and \[Sii\] images by noting that HH objects usually have \[Sii\]/H$`\alpha `$ ratios $`>`$ 1, compared to \[Sii\]/H$`\alpha `$ $`<`$ 1 for emission associated with Hii regions. We obtained narrow and broad band images of HH candidates at the Australian National University 1.0m telescope at Siding Spring Observatory during various periods in January–April 1998. Imaging was done with a 2048 $`\times `$ 2048 TEK CCD mounted at the f/8 Cassegrain focus. The 0$`\stackrel{}{.}`$6 per pixel scale gave a field–of–view of 20$`\stackrel{}{.}`$48 $`\times `$ 20$`\stackrel{}{.}`$48. The seeing during usable time was typically $`<`$ 3″. Narrow band filters used were \[Oiii\] ($`\lambda `$5016; $`\mathrm{\Delta }\lambda `$ 25Å), H$`\alpha `$ ($`\lambda `$6565; $`\mathrm{\Delta }\lambda `$ 15Å), \[Sii\] ($`\lambda `$6732; $`\mathrm{\Delta }\lambda `$ 25Å) and red continuum ($`\lambda `$6676; $`\mathrm{\Delta }\lambda `$ 55Å). The H$`\alpha `$ filter also transmits the \[Nii\] ($`\lambda \lambda `$6548/6584) lines. We used a standard Kron–Cousins filter for the I band observations.
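The discrimination criterion described above amounts to a one-line test on the measured line ratio. A sketch (thresholds as stated in the text; in practice morphology and proper motions also enter the classification):

```python
def classify(sii_flux, halpha_flux):
    """Crude emission-line classification: shock-excited HH objects are
    typically [SII]-bright (ratio > 1), while photoionised HII-region
    filaments are Halpha-bright (ratio < 1)."""
    ratio = sii_flux / halpha_flux
    return "HH candidate" if ratio > 1.0 else "photoionised filament"

print(classify(1.4, 1.0))   # -> HH candidate
print(classify(0.3, 1.0))   # -> photoionised filament
```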
Typical exposure times were 300s and 900s for broad and narrow band frames respectively. Flat fields were obtained by illuminating the dome with a halogen lamp. All frames were reduced in a similar fashion with IRAF (IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation), where 25 median combined bias frames were subtracted from source frames prior to flat fielding. Individual source frames were median combined to produce the final images. In several instances, we were not able to obtain corresponding continuum frames to our CCD H$`\alpha `$ and \[Sii\] images. As major HH emission–lines do not fall within the spectral response curve of the RG715 + IVN filter/emulsion combination ($`\mathrm{\Delta }\lambda `$ = 6900Å–9100Å), we use photographic IVN images to serve as continuum images where needed.
#### 2.3.2 Near–infrared
In January 1993 several HH complexes (including Ori I–2) were imaged using IRIS, the AAO infrared camera and low resolution spectrograph. The 128 $`\times `$ 128 format array has 60$`\mu `$m pixels which, when used in the f/15 imaging mode, provided a spatial resolution of 1$`\stackrel{}{.}`$94 per pixel and a 4$`\stackrel{}{.}`$1 $`\times `$ 4$`\stackrel{}{.}`$1 field–of–view. Each source was observed through a 1% bandpass filter centred on the H<sub>2</sub> $`v=10S(1)`$ transition at 2.12$`\mu `$m. Continuum images were made using a 4% bandpass filter at 2.24$`\mu `$m. Individual frames were linearised, flat fielded against a dome flat and sky subtracted before being combined and calibrated using the IRIS image reduction package known as YOGI-FIGARO. A mosaic of twelve frames, each of five minutes in length, was combined to form the final images.
## 3 Results
In Table 2 we list new HH objects identified by our narrow band CCD imaging of candidates identified from the Orion H$`\alpha `$ plate. Several of the new objects were identified by Reipurth (1985) as candidate HH objects from his ESO R film survey of the Orion region. Objects independently discovered by the CCD imaging of Reipurth, Bally & Devine (1998; hereafter R98) are indicated. In addition to brief comments about their nature and location, Table 2 also suggests possible energy sources based on evidence presented.
### 3.1 New objects in L1630
Our survey region of the southern portion of L1630 is shown in Fig. 1. The rest of the cloud complex extends several degrees to the north–east of the figure. A diffuse shell of H$`\alpha `$ emission and a network of bright–rimmed cometary globules surrounds the multiple OB system $`\sigma `$ Ori. Ogura & Sugitani (1998) list many of these globules as remnant clouds which may be sites of retarded star formation. The bright Hii regions NGC 2024 and IC 434 (which includes The Horsehead Nebula) outlines an ionisation front between the southern portion of L1630 and $`\sigma `$ Ori. This ionisation front extends towards the open cluster NGC 1981 which approximately marks the division between L1630 and L1641. The position of the new HH flows in the region are indicated.
#### 3.1.1 HH 289 (Figs 1–3)
Located on the north–western outskirts of IC 434, the bright rimmed cometary globule Ori I–2 is host to the low–luminosity (L<sub>bol</sub> = 13 L<sub>⊙</sub>) IRAS source 05355-0416, which drives both a bipolar CO and near–infrared molecular hydrogen outflow (Sugitani et al. 1989; Cernicharo et al. 1992; Hodapp 1994). The IRAS source is also associated with a H<sub>2</sub>O maser (Wouterloot & Walmsley 1986; Codella et al. 1995).
A comparison between our scanned H$`\alpha `$ and IVN images (Figs 2a,b) identifies a chain of emission–line objects (objects 2–5) extending to the east of the globule. To the west, we see another emission–line feature (object 1) which appears as an extension of faint emission seen in the IVN image. The H$`\alpha `$+\[Sii\] images (Figs 3a,b) confirms the presence of a HH flow, designated here as HH 289. In the central part of the globule, our H$`\alpha `$+\[Sii\] and H<sub>2</sub> images (Figs 3b,c) show two faint \[Sii\] knots (HH 289 A/B) which mirror the position of the H<sub>2</sub> emission. With the exception of knot C, all knots appear \[Sii\]–bright. Knots D–F show large arc–like morphologies which open towards the IRAS source. This gives the impression of a bubble surrounding the eastern side of the globule which may represent an interface between the outflow and the UV radiation field from $`\zeta `$ Ori, which is 42′ to the east of Ori I–2.
From the distribution of optical and near–infrared emission about the IRAS source (Figs 3b,c), we suggest it is the driving source of the HH 289 outflow. The chain extends 551″ from the IRAS source, making the lobe 1.23 pc in projection. This puts the Ori I–2 flow in the class of parsec–scale flows from low–mass stars (Reipurth, Bally & Devine 1997). As HH objects typically display tangential velocities of the order of 150 km $`\mathrm{s}^{-1}`$ (e.g., Mundt 1988), the age span of the optical knots ranges from 530 yr (knot A) to 8100 yr (knot F). The projected lengths of the (redshifted) CO, H<sub>2</sub> and optical HH flows are 40″ (0.09 pc), 80″ (0.18 pc) and 551″ (1.23 pc) respectively. Apart from knot A, we do not see any evidence of HH emission associated with the blueshifted CO lobe, which we expect will be very faint due to the tenuous medium on the western side of the globule. Deeper \[Sii\], \[Oii\] and/or \[Oiii\] images of the western side of the globule may reveal fainter emission.
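The age estimates above are simple kinematics: the projected separation at the adopted distance of 470 pc divided by the assumed tangential velocity of 150 km s<sup>-1</sup>. A sketch (constants are standard; small differences from the quoted ages reflect rounding):

```python
AU_CM = 1.496e13     # astronomical unit in cm
YEAR_S = 3.156e7     # year in seconds

def knot_age_yr(sep_arcsec, dist_pc=470.0, v_kms=150.0):
    """Kinematic age of an HH knot from its projected separation;
    uses 1 arcsec at 1 pc = 1 AU."""
    sep_cm = sep_arcsec * dist_pc * AU_CM
    return sep_cm / (v_kms * 1e5) / YEAR_S

print(round(knot_age_yr(551)))   # knot F: ~8.2e3 yr (text quotes 8100 yr)
```

These are lower limits on the true ages, since only the tangential components of separation and velocity enter.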
In Fig. 3b, we note the appearance of a tube–like feature extending out of the western side of the globule (object 1 in Fig. 2a). It is well aligned and mirrors the inner \[Sii\] and H<sub>2</sub> knots with respect to the IRAS source. As this feature is visible on our Schmidt images, the emission is most probably scattered light reflected off the walls of the cavity formed by the outflow as it bores its way out of the globule. Using AAO/UKST H$`\alpha `$ material, we have identified a similar feature associated with the cometary globule and outflow complex CG30/HH 120 (Zealey et al. 1999). The H$`\alpha `$ streamer extends to the south–west of the globule and appears to be the optical counterpart of an extensive H<sub>2</sub> filament associated with the infrared source CG30–IRS1. The tube–like feature in Ori I–2 and the streamer in CG30 may represent limb–brightened cavities.
#### 3.1.2 HH 444 (Figs 1 & 4)
Located in the vicinity of $`\sigma `$ Ori (Fig. 1), V510 Ori (= HBC 177) was first classified as a T–Tauri star based on an objective–prism survey of the Orion region by Sanduleak (1971). Cohen & Kuhi (1979) list the star as a classical T–Tauri star (cTTs) with W(H$`\alpha `$) $`>`$ 10Å. The H$`\alpha `$ emission–line survey of Wiramihardja et al. (1991) found the source to be a strong H$`\alpha `$ emitter with V = 14.6 mag as opposed to V = 13.54 mag found by Mundt & Bastian (1980).
By use of H$`\alpha `$ material, the first optical detection of the V510 Ori jet (Parker & Phillipps 1998b; this paper) is shown in Fig. 4a. The jet has previously been identified by long–slit spectroscopic studies (Jankovics, Appenzeller & Krautter 1983; Hirth, Mundt & Solf 1997). The scanned H$`\alpha `$ image (Fig. 4a) reveals a highly collimated jet. Several faint knots (A–C) are located 57″, 84″ and 194″ from V510 Ori. The flow terminates at the large bow shock structure HH 444D, which displays wide wings which sweep back towards the position of V510 Ori.
The H$`\alpha `$+\[Sii\] image (Fig. 4b) clearly identifies the HH 444 jet extending from V510 Ori. Due to the seeing conditions at the time ($`\sim `$ 3″), we can only confirm the presence of knots B and D in the H$`\alpha `$+\[Sii\] image. For the continuum frame (Fig. 4c), conditions were slightly better and, based on the scanned H$`\alpha `$ and continuum images, knots A–C are considered pure emission–line features. The jet appears as two separate parts: the first section appears as a dense region extending 10″ from V510 Ori, while a second, fainter part extends a further 6″. This change may represent several individual condensations not resolved by our images. The total projected length of the optical flow is 0.6 pc.
The small separation between V510 Ori and its jet implies the jet is still active today and coupled with the fact that we do not see an obvious counter flow suggests an evolved case of a one–sided jet (Rodríguez & Reipurth 1994). High resolution optical and near–infrared studies of the jet and energy source will be beneficial in determining the nature of this unusual outflow complex.
### 3.2 New objects in L1641
As shown in Fig. 5, the northern border of L1641 is approximated by the bright ionisation front near the open cluster NGC 1981. The cloud extends several degrees south of the figure. The H$`\alpha `$ emission surrounding the bright Hii region M42 shows remarkable substructure. The southern portion of the image is bounded by the bright reflection nebulosity NGC 1999. In contrast to the L1630 region, we have identified 15 HH complexes within the outlined region shown in Fig. 5. The region is shown in more detail in Fig. 6, where the new objects and features of note are indicated. Several strings of objects appear to extend to the north and north–east of the figure. The outlined region towards the centre of Fig. 6 contains a cluster of objects surrounding the high–luminosity source IRAS 05338–0624 (L<sub>bol</sub> $`\sim `$ 220 L<sub>⊙</sub>).
#### 3.2.1 HH 292 (Figs 6 & 7)
Located in the south–east portion of Fig. 6, BE Ori (= HBC 168; IRAS 05345-0635) is a classical T–Tauri star with W(H$`\alpha `$) $`>`$ 10Å (Cohen & Kuhi 1979; Strom, Margulis & Strom 1989a). No molecular outflow was detected by Levreault (1988). The near–infrared photometry of Strom et al. (1989a) indicates excess emission suggesting the presence of a remnant circumstellar disk.
In Fig. 7, our scanned H$`\alpha `$ and CCD images clearly show a highly collimated flow originating from BE Ori. The flow has also been identified by Reipurth (1999; private communication). On the H$`\alpha `$ scan (Fig. 7a), knots B–D appear to be linked by a stream of H$`\alpha `$ emission which could be interpreted as a jet. BE Ori itself is surrounded by diffuse H$`\alpha `$ emission which extends towards knot A to the south–west of the source. All these features are confirmed by our H$`\alpha `$+\[Sii\] and continuum images (Figs 7b,c). All knots appear H$`\alpha `$–bright, with knot B displaying a combination of line and continuum emission. Designated HH 292, the flow extends along PA = 45° with knot A located 114$`\stackrel{}{.}`$7 to the south–west and knots B–D located 21$`\stackrel{}{.}`$2, 47$`\stackrel{}{.}`$3 and 64$`\stackrel{}{.}`$4 to the north–east of BE Ori respectively, making the total flow length 0.4 pc. In their survey of L1641, Stanke, McCaughrean & Zinnecker (1998; hereafter SMZ98) identified compact H<sub>2</sub> emission associated with knots A and D (SMZ 25), which may represent the terminal working surfaces of the flow where the wind is encountering dense material.
It is interesting to note the asymmetry in HH emission with respect to BE Ori. The lack of optical counterparts to knots B–D to the south–west of the source suggests BE Ori has either undergone highly irregular outbursts in the past, or has a one–sided jet (Rodríguez & Reipurth 1994). Assuming a tangential flow velocity of 150 km $`\mathrm{s}^1`$, knots B–D have ages approximately 300, 700 and 1000 yr respectively, suggesting periodic outbursts every 300–400 yr whereas knot A has an age of 1700 yr. As the seeing during our observations of BE Ori was $``$ 3″, deeper imaging may reveal further HH emission and constrain the ejection history of the source.
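The quoted knot ages follow from simple kinematics, t = (angular separation × distance) / v<sub>tan</sub>. A minimal sketch, assuming a distance of 470 pc to L1641 (our assumption; the distance is not stated in this section):

```python
# Kinematic ages of the HH 292 knots: t = (angular separation * distance) / v_tan.
# Assumed (not stated in this section): distance to L1641 of 470 pc.
D_PC = 470.0            # assumed distance, pc (1" subtends D_PC AU)
V_TAN = 150.0           # tangential flow velocity from the text, km/s
KMS_PER_AU_YR = 4.74    # 1 AU/yr expressed in km/s

def kinematic_age_yr(sep_arcsec, d_pc=D_PC, v_kms=V_TAN):
    """Dynamical age (yr) of a knot at the given angular separation (arcsec)."""
    sep_au = sep_arcsec * d_pc            # small-angle approximation: 1" = d_pc AU
    return sep_au * KMS_PER_AU_YR / v_kms

for knot, sep in [("B", 21.2), ("C", 47.3), ("D", 64.4), ("A", 114.7)]:
    print(f"knot {knot}: {kinematic_age_yr(sep):.0f} yr")
```

With these inputs the knots evaluate to roughly 300, 700, 1000 and 1700 yr, matching the rounded ages above.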
#### The L1641–N region
In Fig. 8, we present scanned H$`\alpha `$, IIIaF and IVN images of the outlined region in Fig. 6 where a cluster of faint red nebulous objects was found by Reipurth (1985). The region has been mapped in <sup>12</sup>CO by Fukui et al. (1986, 1988) who found a bipolar outflow, L1641–N, centred on the bright far–infrared source IRAS 05338–0624. Near–infrared imaging of the region by Strom et al. (1989b), Chen et al. (1993) and Hodapp & Deane (1993), revealed a dense cluster of approximately 20 members surrounding the IRAS source. Davis & Eislöffel (1995; hereafter DE95) and SMZ98 identified a multitude of H<sub>2</sub> (2.12$`\mu `$m) emission which outlines a cavity bored out by the CO outflow and multiple jet and bow shock features which extend at least 2 pc to the south of the embedded cluster.
In the following, we present our CCD images of the region shown in Fig. 8 which confirm many of the Reipurth nebulosities as bona fide HH objects. Scanned H$`\alpha `$ images for several of these objects are also presented in Parker & Phillipps (1998b). Independent CCD imaging of the region has also been presented by R98. Candidate energy sources for these flows are presented based on their location with respect to the optical and near–infrared emission (DE95, SMZ98).
#### 3.2.2 HH 301/302 (Figs 8 & 9)
Extending to the east of Fig. 8, the combined H$`\alpha `$+\[Sii\] image of these two objects (Fig. 9a) shows HH 301 consists of three bright knots (A–C) which form a U–like structure with several fainter knots (D–F) trailing to the south–west. Likewise, HH 302 consists of one bright knot (A) with a fainter one (B) extending to the south–west. Both objects are brighter in \[Sii\] with faint H$`\alpha `$ emission. This property is apparent from Figs 8a and 8b, where HH 301/302 are prominent on the IIIaF, but faint in the H$`\alpha `$ image. R98 suggest HH 301/302 are related based on their elongation towards the L1641–N embedded cluster where the presumed driving source is located. A line of \[Sii\] emission can be seen to the south which mirrors the position of HH 301/302 and coincides with H<sub>2</sub> emission (SMZ 17/18). The bright knot HH 298A (R98) can also be seen in Fig. 9a. Although R98 list HH 298 as being 70″ in extent with an east–west orientation, our H$`\alpha `$+\[Sii\] image shows HH 298 extends even further to the east of HH 298A with several knots which we label as HH 298 D–F. This makes the HH 298 flow 340″, or 0.76 pc, in length from knots A to F. It is interesting that together with HH 301/302, HH 298 produces a V–type structure with the apex pointing back towards the infrared cluster.
DE95 and SMZ98 identified a chain of H<sub>2</sub> knots (I/J and SMZ 16 A/B respectively) which extend east from the embedded cluster with a morphology reminiscent of a jet. In fact, HH 298A appears directly between SMZ 16 A and B. As HH 298 and HH 301/302 contain both optical and near–infrared emission, we suggest they are tracing the walls of a cavity outlined by the V–type structure. The presence of a jet (SMZ 16A) and counterflow (HH 298A and SMZ 16B) suggests we are seeing a single outflow complex. As the jet extends directly between HH 298 and HH 301/302, we do not rule out the possibility of three separate flows, although we draw a comparison with the outflow source L1551–IRS5, where HH 28/29 are not located along the jet axis, but close to the walls of a cavity identified by optical, near–infrared and CO observations (see Davis et al. 1995 and references therein).
Chen et al. (1993) identified a K′ band source (their N23) in the direction of DE95 I/SMZ 16B which is not visible in our I band image (Fig. 9c). Based on the alignment of optical and near–infrared emission, we propose this source as the driving agent for both HH 298 and HH 301/302. Further spectroscopic studies are needed to clarify its nature.
#### 3.2.3 HH 303 (Figs 8 & 10)
The HH 303 flow consists of two groupings of knots aligned along a north–south direction. The H$`\alpha `$+\[Sii\] image in Fig. 10a shows the northern–most group (knots A-F) outlines a bow–shock with a sheath of H$`\alpha `$ emission overlaying clumpy \[Sii\] emission. Several more \[Sii\]–bright knots (I–K) extend towards the south. A fainter knot, HH 298A (R98), is seen to the south–west of knot K. However, Fig. 5 of R98 shows HH 298A at a different location to that shown in Fig. 10a. Therefore, we identify this knot as HH 303L in continuation of R98. R98 suggest HH 303L may be associated with HH 303, but that it deviates too much from the well defined axis and may represent a separate flow. We suggest knots I–K and L represent a remnant bow shock, with the former and latter representing the eastern and western wings respectively.
At first glance, HH 303 could be interpreted as a highly collimated flow originating from the variable star V832 Ori (Fig. 10b). The optical and near–infrared photometry of this source (source N2 of Chen et al. 1993) shows a spectral energy distribution which declines rapidly for $`\lambda >`$ 1$`\mu `$m, suggesting a lack of circumstellar material. A comparison of our optical images with the near–infrared data of SMZ98 shows the majority of HH 303 displays both optical and H<sub>2</sub> emission, thereby suggesting HH 303 is behind V832 Ori and unrelated to the star. Knots HH 303 B, F and I are coincident with the H<sub>2</sub> knots SMZ 8A, 8B, and 14P respectively, with the H<sub>2</sub> emission displaying bow shock morphologies which open towards the south in the direction of L1641–N.
As knots HH 303 I–K lie within the blue lobe of L1641–N, it has been suggested the CO, near–infrared and optical flows derive from a common source (Strom et al. 1989b; SMZ98; R98). Chen et al. (1993) identified a bright M band source (their N15) $``$ 8″ to the east of the IRAS position. Chen, Zhao & Ohashi (1995) detected this source with the VLA at 2.0mm, 7.0mm and 1.3cm, while SMZ98 identified a 10$`\mu `$m source coincident with N15 and the VLA source. As N15, the 10$`\mu `$m source and the 1.3cm source represent the same object, we follow R98 and label it as the “VLA source” which they suggest is the driving source for HH 303 and the illuminator of the reflection nebulosity seen to the north–east in our I band image (HD93; Fig. 10b).
However, it is important to mention that the L1641–N region is a highly clustered environment where identifying outflow sources requires the highest resolution possible. Anglada et al. (1998) identified two radio continuum sources, VLA2 and VLA3, which are 0$`\stackrel{}{.}`$8 and 0$`\stackrel{}{.}`$2 to the west and east respectively from the nominal position of the VLA source. Further observations of the region reveal a fainter source within 1″ of VLA2 (Anglada 1998, private communication). The CO data of Fukui et al. (1986; 1988) clearly indicates the L1641–N molecular outflow is more complex than a simple bipolar outflow. Higher resolution studies of these sources are needed to determine which source is driving the optical and H<sub>2</sub> emission. In particular, it would be interesting to see if the VLA source displays an elongated radio jet with its long axis pointing in the direction of HH 303.
In addition to HH 303, R98 suggest the VLA source also drives HH 61/62, which are located 46$`\stackrel{}{.}`$8 (6.5 pc) to the south of L1641 (see Fig. 18). If their assumption is correct, the HH 61/62/303 flow is 7 pc in length, with the northern lobe only 5% the length of the southern lobe. Any shocks associated with the northern lobe will be extremely faint due to the lack of molecular material as the flow moves away from L1641.
#### 3.2.4 HH 304 (Figs 8 & 11)
Located to the north–east of the VLA source, the \[Sii\] image of HH 304 (Fig. 11a) shows several compact knots which are \[Sii\]–bright. Knot B is compact, with a bow shock structure (knot A) which extends towards the north–east and then curls back to the north–west. Knots C and D display an opposing bow shock structure and are connected by faint \[Sii\] emission. The overall morphology of the system suggests the energy source is located between knots A/B and C/D. The I band image (Fig. 11b) shows a compact reflection nebulosity with a tail which mimics part of the \[Sii\] emission associated with knots A and B. A reddened source (which we denote as HH 304IRS) appears where the reflection emission is most compact.
The HH 304 complex is also seen in the H<sub>2</sub> mosaic of SMZ98, who label it SMZ 5. HH 304A is seen as a bright bar which extends 6″ along an east–west direction. At the position of the compact reflection nebulosity, a bright H<sub>2</sub> knot is seen, with a trail of H<sub>2</sub> emission extending from HH 304IRS towards HH 304C. The appearance of the optical and near–infrared emission suggests we are seeing two lobes with knots A and C representing the north–eastern and south–western working surfaces respectively. HH 304IRS appears midway between these two opposing working surfaces. There are no IRAS or H$`\alpha `$ emission–line stars at the location of the reflection nebulosity, which implies a deeply embedded source.
#### 3.2.5 HH 305 (Figs 8 & 12)
The HH 305 outflow appears aligned along a north–south axis centred on the bright (V $``$ 11.3 mag) star PR Ori. With the exception of knots A and F, all objects are H$`\alpha `$–bright, with knot B displaying an inverted V–type structure only visible in H$`\alpha `$. Knot A shows a bow shock structure which opens towards PR Ori. It is interesting to note that HH 305E represents the brightest nebulosity in the flow. The increased brightness could be attributed to the flow encountering an obstacle of some sort, perhaps in the form of a molecular clump. The dark lane seen in Figs 6, 8 and 12 represents a change in the molecular distribution in this part of L1641. At the position of HH 305E, the flow impacts the molecular cloud and then deflects to where we see HH 305F. Based on their separation from PR Ori, R98 suggest knots C/D represent an HH pair located 16″ from the source. Similarly, knots B/E and A/F represent HH pairs located 65″ and 108″ from PR Ori respectively, making the total flow length 0.54 pc.
At present, it is unknown if HH 305 is being driven by PR Ori or a more embedded source behind it (R98). In a major study of Einstein X–ray sources in L1641, Strom et al. (1990) identified PR Ori as a low–luminosity (13 L) source with a spectral type of K4e$`\alpha `$ and W(H$`\alpha `$) = 0.5Å. Their JHKLM photometry indicates a lack of infrared colour excess normally attributed to a circumstellar disk. Based on their data, PR Ori appears to be a weak–lined T–Tauri star (wTTs). Its location with respect to the L1641 molecular cloud shows it lies in a region of low obscuration; this, together with the fact that SMZ98 did not detect any H<sub>2</sub> emission associated with HH 305, argues against an embedded, younger source located behind PR Ori.
If PR Ori is the energy source of HH 305, it would present a major discrepancy in star formation theory as wTTs are not thought to be associated with circumstellar disks and/or outflow phenomena. Magazzu & Martin (1994) identified what was thought to be a HH flow associated with the wTTs HV Tau. Woitas & Leinert (1998) suggested the HH object is actually a companion T–Tauri star with strong forbidden emission lines whose presence originally led Magazzu & Martin to their conclusions. How do we reconcile the fact that PR Ori is a wTTs with an outflow? The answer may lie in Table 2 of Strom et al. (1990), who list PR Ori as an optical double. Our CCD images also show PR Ori as an extended source, in which case it seems more plausible that the companion (PR Ori–B) is the driving source of HH 305. Clearly, further studies of this HH complex are needed.
#### 3.2.6 HH 306–309 (Figs 6 & 13–16)
Figs 6 and 13 show scanned H$`\alpha `$ and IVN images of a string of emission–line objects (HH 306–309) extending away from the VLA source and up into the main reflection nebulosity of M42. A large arcuate structure (HH 407) can be seen near the bright stars towards the western border. The large rim of H$`\alpha `$ emission identified in Fig. 6 is seen orientated at PA = 55° and appears to surround all objects in the figure. A comparison of the H$`\alpha `$ and IVN images confirms all objects are pure emission–line features.
#### The HH flows
In conjunction with the IVN image (Fig. 13b), our H$`\alpha `$+\[Sii\] images confirm all of these objects as bona fide HH objects. In Fig. 14, the H$`\alpha `$+\[Sii\] image shows HH 306 consists of two bright compact knots (B and F) with a trail of emission extending to the south. A further knot, HH 306G, lies to the west which may be unrelated, or part of an older fragmented shock. HH 307 consists of several bright knots which mark the apexes of large arcs or wings which sweep out and open towards L1641–N. R98 suggest HH 308 appears as a fragmented bow shock with knots A and B representing the eastern and western wings respectively. Located between HH 308A and B, we note the presence of a third knot not identified by R98 which we denote here as HH 308C. HH 309 (Fig. 15) shows a similar structure to HH 308, with knots A and B representing the first fragmented bow shock, knot C the second and knots D/E the third. The reverse bow shock morphology of HH 309B can be explained by noting the distribution of H$`\alpha `$ emission on the scanned H$`\alpha `$ and CCD H$`\alpha `$+\[Sii\] images. The knot appears to have curled around the background emission, which may have been responsible for creating the fragmented appearance of HH 309.
In searching for further emission north of HH 309, R98 discovered several bow shock structures, designated HH 310, within the main nebulosity of M42 (see Fig. 6). The objects are brighter in \[Sii\] than in H$`\alpha `$, thus discounting the possibility they might be photo–ionised rims. We have also imaged these structures and for completeness, present our H$`\alpha `$, \[Sii\] and continuum images in Fig. 16. Our \[Oiii\] frame (not shown) does not detect the bow shocks associated with HH 310, thereby suggesting the flow is moving with a velocity less than 100 km $`\mathrm{s}^1`$. Our \[Sii\] and continuum images (Figs 16a,b) identify several bow shock structures to the north–west of HH 310 which are \[Sii\]–bright and absent in the continuum frame. Assuming for the moment these features are bona fide HH objects, their apparent deviation from the axis defined by HH 310 can be explained if the flow is being redirected by an obstacle, possibly the long tongue–like feature which extends from the top of the images. An alternative explanation is that they form part of a separate flow, perhaps from the L1641–N region. Spectroscopic observations of these features are needed to determine if they are HH shocks.
#### The embedded counterflow
To the south of L1641–N, SMZ98 discovered a long chain of bow shocks. Designated SMZ 23, the chain consists of at least 7 bow shocks (A–G) which may represent the redshifted counterflow to HH 306–310 (this paper, R98). From the <sup>13</sup>CO data of Bally et al. (1987), the integrated moment map (Fig. 17) shows evidence of a cavity created by SMZ 23. What is interesting about this cavity is its size and orientation with respect to L1641–N, HH 306–310 and the large cavity dubbed by R98 as the “L1641–N chimney”, which they suggest has been excavated by the repeated passage of bow shocks associated with HH 306–310. The location of individual knots associated with SMZ 23 appears to trace the western wall of the southern cavity, suggesting the flow impacts with the cavity wall which produces the observed emission. We suggest this southern cavity is being excavated by SMZ 23 as the redshifted flow propagates into and away from L1641–N. The <sup>13</sup>CO velocity structure of the southern cavity is evident from 5–8 km $`\mathrm{s}^1`$, with the L1641–N molecular core and the “L1641–N chimney” appearing around 8 and 8–11 km $`\mathrm{s}^1`$ respectively. This gives further evidence that the southern cavity and the “L1641–N chimney” represent expanding red and blueshifted lobes centred on the L1641–N region.
Following similar arguments in R98, we find the dimensions of this southern cavity to be 5′$`\times `$ 12′, giving a total area of $``$ 1 $`\times `$ 10<sup>37</sup> cm<sup>2</sup>. Assuming the intensity in the cavity lies within 3–5 K km $`\mathrm{s}^1`$, the total mass excavated by the SMZ 23 flow is $``$ 37–62 M. Aside from the obvious uncertainties in estimating the <sup>13</sup>CO intensity and cavity size, we should point out we have not taken into account the possibility that the southern cavity may have been formed by the combined action of more than one outflow.
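The arithmetic behind this estimate can be sketched as follows. The distance (470 pc) and the <sup>13</sup>CO–to–H<sub>2</sub> conversion factor are our assumptions, the latter a hypothetical value of 5 × 10<sup>20</sup> cm<sup>-2</sup> (K km s<sup>-1</sup>)<sup>-1</sup>; R98's actual calibration may differ:

```python
# Order-of-magnitude cavity mass: M = mu*m_H * X(13CO) * W(13CO) * area.
# Assumed (not from the text): d = 470 pc, and a hypothetical conversion
# X(13CO) = 5e20 cm^-2 (K km/s)^-1; mean mass per H2 molecule includes He.
AU_CM   = 1.496e13            # cm per AU
MSUN_G  = 1.989e33            # g per solar mass
MU_MH_G = 2.8 * 1.673e-24     # g per H2 molecule (He included)
X_13CO  = 5.0e20              # hypothetical conversion factor
D_PC    = 470.0               # assumed distance

def arcmin_to_cm(arcmin):
    return arcmin * 60.0 * D_PC * AU_CM   # 1" at D_PC pc subtends D_PC AU

# Cavity dimensions from the text: 5' x 12'
area_cm2 = arcmin_to_cm(5.0) * arcmin_to_cm(12.0)

def cavity_mass_msun(w_k_kms):
    """Excavated mass (Msun) for an integrated 13CO intensity W (K km/s)."""
    return X_13CO * w_k_kms * area_cm2 * MU_MH_G / MSUN_G

print(f"area = {area_cm2:.1e} cm^2")
print(f"mass = {cavity_mass_msun(3.0):.0f}-{cavity_mass_msun(5.0):.0f} Msun")
```

With these inputs the area comes out near 1 × 10<sup>37</sup> cm<sup>2</sup> and the mass near 38–63 M, consistent with the quoted range given the rounding of the cavity size.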
SMZ 23, HH 306–309 and HH 310 all display large bow shock structures which open towards the L1641–N region where the presumed energy source lies. As mentioned for HH 303, the high degree of clustering about the VLA source confuses identifying specific energy source(s). However, the principal components HH 306B, HH 307A, HH 308C and HH 309A are located 806″, 1152″, 1331″ and 1955″ respectively from the position of the VLA source. In addition to HH 310A (2764″), the HH 306–310 lobe is 6.3 pc in length. As the SMZ 23 flow appears to extend further south from SMZ 23G (Stanke 1999; private communication), the geometry of HH 306–310 and SMZ 23 about the VLA source and VLA2/VLA3 strongly favours at least one of them as the energy source of the optical and near–infrared emission. Whichever of these sources is responsible for the observed emission, the combined length of the HH 306–310 and SMZ 23 lobes is 10.5 pc. High–resolution radio studies will be beneficial for identifying radio jets and their orientation with respect to the optical and near–infrared emission.
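The projected lengths quoted above follow from the small–angle relation: an angular separation of 1″ subtends d AU at distance d pc, with 1 pc = 206265 AU. A sketch, again assuming d = 470 pc (our assumption):

```python
# Angular separation -> projected length: sep["] * d[pc] AU; 1 pc = 206265 AU.
# Assumed (not stated in this section): d = 470 pc.
D_PC = 470.0
AU_PER_PC = 206265.0

def arcsec_to_pc(sep_arcsec, d_pc=D_PC):
    """Projected separation in pc for an angular separation in arcsec."""
    return sep_arcsec * d_pc / AU_PER_PC

# Separations from the VLA source quoted in the text:
for name, sep in [("HH 306B", 806.0), ("HH 307A", 1152.0),
                  ("HH 308C", 1331.0), ("HH 309A", 1955.0),
                  ("HH 310A", 2764.0)]:
    print(f"{name}: {arcsec_to_pc(sep):.2f} pc")
```

HH 310A at 2764″ then lies about 6.3 pc from the VLA source, consistent with the quoted lobe length.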
#### The southern L1641 region
In a search for optical counterparts to HH 306–310, our deep IIIaF plate of the southern region of L1641 identifies several features reminiscent of large bow shocks. The IIIaF image of these features is shown in Fig. 18, where object A appears as a diffuse feature and object B appears as a bright nebulosity with a long curve which extends 16′ to the north near object A. At first glance, object B and HH 61/62 (the counterlobe to HH 303; R98) appear to outline the eastern and western wings of a large fragmented bow shock structure. Objects C and D appear as large arc–like structures which open to the north and are 3–4′ in extent. As C and D are located well away from the main cloud, the path length along our line of sight increases, which may suggest they are not physically associated with L1641. We should also note that many of the terminal bow shocks associated with parsec–scale HH flows show substantial substructure which is lacking from the IIIaF image. In order to resolve the nature of features C and D, we obtained H$`\alpha `$ and \[Sii\] images, but due to variable cloud cover, we were not able to classify these objects as bona fide HH objects. Deeper images and/or spectra of objects A–D are required to determine if they are photo–ionised regions or HH objects.
#### 3.2.7 HH 403–406 (Figs 6 & 19)
To the north–east of Fig. 6, a second string of objects extends away from the L1641–N cluster. HH 403 and HH 404 are located well clear of the eastern edge of the L1641 molecular cloud. Although seeing at the time of observing was $`>`$ 3″, our H$`\alpha `$ and \[Sii\] CCD images (not shown) did allow us to classify these features as genuine HH objects. In Fig. 19, the scanned H$`\alpha `$ and IVN images show HH 403 consists of a large number of emission–line knots in addition to a curved (HH 403G) and an amorphous feature (HH 403H) to the south–west. The CCD images of R98 clearly show HH 403 as a highly fragmented object which is very similar in appearance to HH 262 (López et al. 1998). A further 9′ to the north–east, HH 404 displays a sickle–like structure not too dissimilar from the HH 47 jet (Heathcote et al. 1996). As these features are H$`\alpha `$–bright, R98 raised the question as to whether or not HH 403/404 are bow shocks or bright rims. However, based on morphological grounds, they suggest HH 403/404 are highly fragmented bow shock structures which point back towards L1641–N where the presumed energy source lies. Our contrast–enhanced scanned H$`\alpha `$ image of the region (Fig. 19a) appears to confirm their suspicion as we see a lack of background H$`\alpha `$ emission in the direction of HH 403/404 which has probably been removed by the action of the flow as it propagates away from L1641.
The scanned H$`\alpha `$ image identifies several large–scale bow shocks with HH 403 and HH 404 at their apexes. R98 do not detect these features on their CCD images. Originally thought to be bright rims, comparison of the H$`\alpha `$ emission with the <sup>13</sup>CO data of Bally et al. (1987) indicates these “rims” do not outline the L1641 molecular cloud, or any other well–defined <sup>13</sup>CO ridge. The first bow shock is defined by the arc–like object HH 403G and HH 403H, representing the eastern and western wings respectively. The eastern wing trails 7′ to the south before it blends into the background H$`\alpha `$ emission. The second bow shock appears as an extended feature similar in appearance to HH 403G. The third bow shock only displays the western wing which extends northward from the second bow to the apex of HH 404, which shows a bright arc with faint H$`\alpha `$ emission which combine to form an inverted U–type structure.
North–east of HH 404, a faint object HH 405 displays H$`\alpha `$ emission extending along PA = 45°. R98 suggest the emission is reminiscent of a jet. A further 6′ to the north–east, HH 406 is a large diffuse object. Are HH 405 and HH 406 related to HH 403/404? The IVN image (Fig. 19b) shows a reddened source (denoted HH 405IRS) at the position of HH 405. A reflection nebulosity is also seen nearby. The position of the nearest IRAS source, 05347-0545, is shown in our IVN image. It is a 60 and 100$`\mu `$m source only, indicating it is heavily obscured and may be related to HH 405 and/or HH 406. Based on the location of HH 405IRS with respect to HH 405/406 and the reflection nebulosity, we suggest this source is the driving agent for HH 405 and HH 406, thereby making the flow 0.78 pc in extent. Near–infrared polarimetry and imaging will be useful for determining if HH 405IRS or IRAS 05347-0545 is the illuminator of the reflection emission.
Located to the far south–west of L1641–N, R98 noted HH 127 mirrors the position of HH 404 with L1641–N positioned at the centre (see Fig. 18). Although HH 127 lies at an angle of 10° from the HH 403/404 and L1641–N axis, they suggest HH 403/404 and HH 127 represent the blue and redshifted lobes respectively of a 10.6 parsec–scale flow centred on the VLA source. Given the clustered nature of potential outflow sources about the VLA source, proper motion studies of HH 127 and HH 403/404 are highly desirable to constrain the location of their energy source(s).
#### 3.2.8 HH 407 (Figs 6, 13 & 20)
Located 28$`\stackrel{}{.}`$3 north–west of L1641–N and within close proximity to HH 306–310, Figs 6 and 13 identify a large, highly fragmented structure located in the direction of several bright stars. The H$`\alpha `$+\[Sii\] image (Fig. 20) confirms it as a bona fide HH object as it emits predominantly in \[Sii\]. Knots A and B display bow shock structures with a streamer (knots C/D) extending to the south–east. In Figs 6 and 13, fainter H$`\alpha `$ emission extends a further 6′ to the south–east of knots C/D.
As the streamers of HH 407 point towards the L1641–N region, it seems probable the energy source lies in that direction. An examination of the H<sub>2</sub> data of SMZ98 does not reveal any emission extended towards HH 407. After re–examining our H$`\alpha `$ plate, we noticed the presence of a large loop–like structure (hereafter loop A) extending out of the reflection nebulosity NGC 1999 and in the direction of HH 407. Comparison of our scanned H$`\alpha `$, IIIaF and IVN images (Fig. 21) indicates loop A is a pure emission–line feature. Although faintly seen on the IIIaF image, the scanned H$`\alpha `$ image clearly distinguishes loop A from background emission.
In a recent study of the NGC 1999 region, Corcoran & Ray (1995; hereafter CR95) discovered a second loop (hereafter loop B) of H$`\alpha `$ emission extending west of NGC 1999 which delineates a poorly collimated outflow associated with HH 35 and represents the counterflow to the redshifted molecular CO outflow discovered by Levreault (1988). CR95 suggest the Herbig Ae/Be star V380 Ori (which illuminates NGC 1999) drives HH 35, loop B and the molecular outflow. The existence of loops A and B suggests a quadrupole outflow in NGC 1999. Using similar arguments as CR95, we suggest loop A delineates an optical outflow which, in conjunction with HH 407, represents a 6.2 pc lobe at PA = -23° with respect to V380 Ori.
In a search for optical counterparts to HH 407, our deep IIIaF plates do not reveal any clear candidates, although if we assume loop A and HH 407 are propagating out and away from L1641, the southern counterflow may not yet have emerged from the far side of the molecular cloud. Stanke (1999; private communication) has identified a large H<sub>2</sub> feature to the south of NGC 1999 which may represent an embedded counterflow to loop A and HH 407 (see Fig. 17). HH 130 is a large bow shock structure located 8$`\stackrel{}{.}`$5 south–east of NGC 1999 and has been linked to HH 1/2 (Ogura & Walsh 1992) and V380 Ori (Reipurth 1998). CR95 suggest the energy source of HH 130 is located to the north–east of knot H (see Fig. 21). If HH 130 and/or the H<sub>2</sub> feature represents the counterflow to loop A and HH 407, the outflow axis would be bent by up to 10°. A similar situation is seen in HH 127/403/404 (R98), HH 110/270 (Reipurth, Raga & Heathcote 1996) and HH 135/136 (Ogura et al. 1998). Proper motion and spectroscopic studies of HH 130, HH 407 and the H<sub>2</sub> feature are needed to determine if their motion and radial velocities are directed away from the V380 Ori region.
Is V380 Ori the driving source of loop A? In addition to V380 Ori, CR95 found two K band sources, V380 Ori–B and V380 Ori–C, within NGC 1999. By means of speckle–interferometry, Leinert, Richichi & Haas (1997) identified V380 Ori as a binary consisting of a Herbig Ae/Be star (V380 Ori) and a T Tauri star. High resolution mm–interferometry of NGC 1999 will help clarify which source is driving the optical emission associated with loop A.
As shown in Fig. 17, HH 306–310, HH 407 and the integral–shaped filament (Bally et al. 1987; Johnstone & Bally 1999) lie within the rim of H$`\alpha `$ emission identified in Figs 6 and 13. The rim is approximated by an ellipse 13$`\stackrel{}{.}`$6 $`\times `$ 4′ (3.6 $`\times `$ 0.54 pc) in size, which we suggest has formed due to the combined action of the HH 306–310 and HH 407 flows expelling molecular gas from the main cloud core. The UV radiation from the nearby bright stars excites the outer edge of the expanding molecular material, which we see as the H$`\alpha `$ ellipse. Such a large–scale movement of molecular gas by parsec–scale HH flows has been suggested for HH 34 and HH 306–310 (Bally & Devine 1994; R98).
## 4 Conclusions & Future Work
By use of a single AAO/UKST H$`\alpha `$ film of the Orion region, we have identified emission–line nebulosities which resemble bow shocks, jets and extensive alignments of arc–shaped nebulae indicating possible giant molecular flows. Subsequent narrow and broad band CCD imaging has confirmed these features as genuine HH objects tracing outflows ranging in size from a fraction of a parsec to over 6 pc in length. Neither the 3 pc wide H$`\alpha `$ rim surrounding HH 306–310 and HH 407 nor the H$`\alpha `$ loop (loop A) extending out of the NGC 1999 reflection nebulosity has been identified in previous studies. Although these features are faintly visible in our IIIaF images, the excellent contrast of the H$`\alpha `$ films with respect to IIIaF and published CCD images of these regions clearly distinguishes these features from background emission, thereby allowing a thorough investigation of how outflows from young stars affect the surrounding interstellar medium. The lack of optical and molecular emission associated with HH 403/404, the presence of the H$`\alpha `$ rim and the identification of large <sup>13</sup>CO cavities associated with HH 34 (Bally & Devine 1994), HH 306–310 (R98) and the SMZ 23 counterflow (this paper) suggest that, in the absence of massive star formation, parsec–scale flows are the dominant factor in disrupting molecular gas in GMCs. They may also be responsible for the continuation of star formation beyond the current epoch. The creation of large–scale cavities seen in <sup>13</sup>CO maps (R98; this paper) may produce highly compressed regions which collapse to form a new wave of star formation. In order to test this idea, high resolution sub–millimetre observations in conjunction with near–infrared H<sub>2</sub> (2.12$`\mu `$m) imaging will identify and determine the distribution of newly–forming Class 0 protostars with respect to the CO cavities.
Although we have suggested candidate energy sources for many of the new HH flows, only a few (Ori I–2, BE Ori and V510 Ori) can be considered certain. The identification of at least 4 sources within an arcminute of the VLA source warrants subarcsecond CO mapping of the region to determine which source is driving the optical and near–infrared emission associated with HH 306–310, HH 403/404, HH 407 and SMZ 23. Near–infrared spectroscopy of proposed outflow sources for HH 298/301/302, HH 304, HH 305 and HH 405 will be useful in classifying their nature for comparison with other HH energy sources. To varying degrees, the optical sources BE Ori and V510 Ori exhibit optical variability and multiple–ejection events (HH objects). The fact that these sources still possess highly collimated, one–sided jets well after they have emerged from their parental molecular cloud may provide important insights into jet evolution.
In relation to the newly discovered parsec–scale flows, high resolution spectroscopy and proper motion studies of individual knots associated with HH 61/62/303, HH 306–310, HH 127/403/404, HH 407 and features A–D to the far south of L1641–N will determine velocities, excitation conditions and confirm points of origin.
Due to the success of the Orion H$`\alpha `$ film, the Carina, Cha I/II, Sco OB1, $`\rho `$ Oph, R Cra and CMa OB1 star–forming regions are to be surveyed in a similar fashion to that presented in this paper. The majority of these cloud complexes lie within 500 pc and maximise the detection of faint, large–scale flows for comparative studies with the Orion region where we hope to address the following questions:
* What is the nature of the energy source? Parsec–scale flows are associated with Class 0, Class I and optically–visible T–Tauri stars. Is the parsec–scale phenomenon due to inherent properties of the energy source?
* How does the flow remain collimated over such large distances? Does the nature of the surrounding environment have a collimating effect?
* To what extent do parsec–scale outflows affect star formation within molecular clouds? Is there any evidence for self–regulated star formation?
## Acknowledgements
We thank the staff of the AAO and particularly the UKST for the teamwork which makes the H$`\alpha `$ survey possible. Thanks also go to the Mount Stromlo Time Allocation Committee for the generous allocation of time on the 40-inch telescope. SLM acknowledges John Bally for the use of the Bell Labs 7m <sup>13</sup>CO data and Thomas Stanke for supplying his H<sub>2</sub> data of the L1641–N region. Thanks also go to David Malin at the AAO for providing unsharp–mask prints of the Orion film. SLM acknowledges the support of a DEET scholarship and an Australian Postgraduate Award. We thank the anonymous referee for comments and suggestions which strengthened the paper. This research has made use of the Simbad database, operated at CDS, Strasbourg, France and the ESO/SERC Sky Surveys, based on photographic data obtained using the UKST which is currently operated by the AAO.
# THE METAGALACTIC IONIZING RADIATION FIELD AT LOW REDSHIFT
## 1. INTRODUCTION
The ionizing background that permeates intergalactic space is of fundamental interest for interpreting QSO absorption lines and interstellar high-latitude clouds. Produced primarily by quasars, Seyfert galaxies, and other active galactic nuclei (AGN), these Lyman-continuum (LyC) photons photoionize the intergalactic medium (IGM), set the neutral hydrogen fraction in the Ly$`\alpha `$ forest absorbers, and help to determine the ion ratios in metal-line absorbers in QSO spectra. Ionizing radiation may control the rate of evolution of the Ly$`\alpha `$ absorption lines at $`z<2`$ (Theuns, Leonard, & Efstathiou 1998; Davé et al. 1999), and it may affect the formation rate of dwarf galaxies (Efstathiou 1992; Quinn, Katz, & Efstathiou 1996). The hydrogen photoionization rate, $`\mathrm{\Gamma }_{\mathrm{HI}}(z)`$, is an important component of N-body hydrodynamic modeling of the IGM. Because of the large photoionization corrections to the observed H I absorption, the inferred baryon density of the IGM and metal abundance ratios also depend on the intensity and spectrum of this radiation. Within the Milky Way halo, the ionizing background can affect the ionization state of high-velocity clouds located far from sources of stellar radiation (Bland-Hawthorn & Maloney 1999).
The ionizing background intensity at the hydrogen ionization edge ($`h\nu _0=13.6`$ eV) is denoted $`I_0`$, in ergs cm<sup>-2</sup> s<sup>-1</sup> Hz<sup>-1</sup> sr<sup>-1</sup>, hereafter denoted “UV units” or understood in context. In an optically thin environment, the background spectrum reflects that of the sources, QSOs and Seyfert galaxies, which appear to have steep EUV spectra of the form $`F_\nu \propto (\nu /\nu _0)^{-\alpha _s}`$, with $`\alpha _s=1.77\pm 0.15`$ from 350–1050 Å (Zheng et al. 1997), or starburst galaxies with $`\alpha _s\approx 1.9`$–2.2 (Sutherland & Shull 1999). At high redshift, the IGM is optically thick, owing to the numerous Ly$`\alpha `$ absorbers that ionizing photons must traverse. Thus, the background spectrum is strongly modified by absorption and re-emission (Haardt & Madau 1996; Fardal, Giroux, & Shull 1998, henceforth FGS).
Estimates of $`I_0`$ at high redshift are usually obtained from the “proximity effect” (Bajtlik, Duncan, & Ostriker 1988), the observed paucity of Ly$`\alpha `$ absorbers near the QSO emission redshift. Recent measurements give values of $`I_0\approx 10^{-21}`$ UV units at $`z\approx 3`$: $`I_0=1.0_{-0.3}^{+0.5}\times 10^{-21}`$ (Cooke, Espey, & Carswell 1997), $`I_0=(0.5\pm 0.1)\times 10^{-21}`$ (Giallongo et al. 1996), and $`I_0=0.75\times 10^{-21}`$ (Scott et al. 1999). At low redshift, the lower comoving density of QSOs and their diminished characteristic luminosities suggest that the metagalactic background is reduced by about a factor of $`10^2`$ to $`I_0\approx 10^{-23}`$. Using an optical QSO luminosity function (cf. Boyle 1993) and an empirical model of IGM opacity (Miralda-Escudé & Ostriker 1990), Madau (1992) estimated that $`I_0=6\times 10^{-24}`$ at $`z=0`$. However, this is an uncertain estimate, which now appears low compared with several local determinations. Theoretical extrapolations of $`I_0`$ to low $`z`$ are uncertain because they depend sensitively on the assumed AGN luminosity function and on the IGM opacity model (Giallongo, Fontana, & Madau 1997; FGS). As we will show, the low-$`z`$ IGM opacity appears to be dominated by Ly$`\alpha `$ absorbers in the range $`14<\mathrm{log}N_{HI}<18`$, for which we are just starting to obtain statistically reliable information from the Hubble Space Telescope (HST). A future key project on Ly$`\alpha `$ absorbers with HST and a Ly$`\beta `$ survey with the Far Ultraviolet Spectroscopic Explorer (FUSE) should be even more enlightening.
Observational estimates of or upper limits on $`I_0`$ at low redshift have been made by a variety of techniques, as described in Table 1. These methods include studies of the proximity effect at $`z\approx 0.5`$ (Kulkarni & Fall 1993), edges of H I (21 cm) emission in disk galaxies (Maloney 1993; Dove & Shull 1994a), and limits on H$`\alpha `$ emission from high-latitude Galactic clouds (Vogel et al. 1995; Tufte, Reynolds, & Haffner 1998) and extragalactic H I clouds (Stocke et al. 1991; Donahue, Aldering, & Stocke 1995). Since all these techniques are based on the integrated flux of LyC radiation, it is convenient to define $`\mathrm{\Phi }_{\mathrm{ion}}`$ (photons cm<sup>-2</sup> s<sup>-1</sup>), the normally incident photon flux through one side of a plane. For an isotropic, power-law intensity, $`I_\nu =I_0(\nu /\nu _0)^{-\alpha _s}`$, we can relate the integral quantity $`\mathrm{\Phi }_{\mathrm{ion}}`$ to the specific intensity $`I_0`$:
$$\mathrm{\Phi }_{\mathrm{ion}}=2\pi \int _0^1\mu \,d\mu \int _{\nu _0}^{\infty }\frac{I_\nu }{h\nu }\,d\nu =\left(\frac{\pi I_0}{h\alpha _s}\right)=(2630\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1})\,I_{23}\left(\frac{1.8}{\alpha _s}\right),$$
(1)
where $`\mu =\mathrm{cos}\theta `$ is the cosine of the angle relative to the cloud normal and $`I_{23}`$ is the value of $`I_0`$ expressed in units of $`10^{-23}`$ UV units. Most of the upper limits on $`I_0`$ translate into values of $`\mathrm{\Phi }_{\mathrm{ion}}`$ in the range $`10^4`$–$`10^5`$ photons cm<sup>-2</sup> s<sup>-1</sup>. For an assumed EUV spectral index $`\alpha _s\approx 1.8`$, and the approximate form, $`\sigma _\nu \approx \sigma _0(\nu /\nu _0)^{-3}`$, for the H I photoionization cross section, the hydrogen photoionization rate due to this metagalactic intensity is,
$$\mathrm{\Gamma }_{\mathrm{HI}}\approx \frac{4\pi I_0\sigma _0}{h(3+\alpha _s)}=(2.49\times 10^{-14}\,\mathrm{s}^{-1})\,I_{23}\left(\frac{4.8}{3+\alpha _s}\right),$$
(2)
where $`\sigma _0=6.3\times 10^{-18}`$ cm<sup>2</sup> is the hydrogen photoionization cross section at $`h\nu _0`$.
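The numerical coefficients in equations (1) and (2) are easy to verify directly. A minimal sketch in Python, using cgs constants (the function names are ours, not from the paper):

```python
# Check the numerical coefficients of eqs. (1) and (2) for an isotropic
# power-law background I_nu = I_0 (nu/nu_0)^(-alpha_s); cgs units throughout.
import math

H_PLANCK = 6.626e-27   # Planck constant, erg s
SIGMA_0 = 6.3e-18      # H I photoionization cross section at 13.6 eV, cm^2

def phi_ion(I0, alpha_s=1.8):
    """One-sided ionizing photon flux (photons cm^-2 s^-1), eq. (1)."""
    return math.pi * I0 / (H_PLANCK * alpha_s)

def gamma_HI(I0, alpha_s=1.8):
    """H I photoionization rate (s^-1), eq. (2), with sigma_nu ~ nu^-3."""
    return 4.0 * math.pi * I0 * SIGMA_0 / (H_PLANCK * (3.0 + alpha_s))

I0 = 1.0e-23   # i.e. I_23 = 1
print(phi_ion(I0))    # ~2.6e3 photons cm^-2 s^-1, the 2630 of eq. (1)
print(gamma_HI(I0))   # ~2.5e-14 s^-1, the 2.49e-14 of eq. (2)
```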
These H$`\alpha `$ measurements and limits are improving with better Fabry-Perot techniques (Bland-Hawthorn et al. 1994; Tufte et al. 1998). In addition, we now have more reliable HST measurements of the opacity from the low-redshift Ly$`\alpha `$ clouds (Weymann et al. 1998; Shull 1997; Penton, Stocke, & Shull 1999). Therefore, better computations of the metagalactic radiation field are timely. In this paper, we compute the contribution of Seyfert galaxies, QSOs, and starburst galaxies to the low-redshift ionizing background using three ingredients: (1) a Seyfert/QSO luminosity function; (2) AGN fluxes at $`\lambda <912`$ Å from extrapolated IUE spectra; and (3) an improved IGM opacity model, based on recent HST surveys of Ly$`\alpha `$ clouds at low redshift. In § 2 we describe these ingredients. In § 3 we give the results for $`I_0`$ and $`\mathrm{\Phi }_{\mathrm{ion}}`$ at $`z0`$, together with error estimates. In § 4 we summarize our results and discuss future work that could improve estimates of $`I_0`$.
## 2. METHODOLOGY
The solution to the cosmological radiative transfer equation (Peebles 1993) for sources with proper specific volume emissivity $`ϵ(\nu ,z)`$ (in ergs cm<sup>-3</sup> s<sup>-1</sup> Hz<sup>-1</sup>) yields the familiar expression (Bechtold et al. 1987) for the mean specific intensity at observed frequency $`\nu _{obs}`$ as seen by an observer at redshift $`z_{obs}`$:
$$I_\nu (\nu _{obs},z_{obs})=\frac{1}{4\pi }\int _{z_{obs}}^{\infty }\frac{d\ell }{dz}\,\frac{(1+z_{obs})^3}{(1+z)^3}\,ϵ(\nu ,z)\,\mathrm{exp}(-\tau _{\mathrm{eff}})\,dz.$$
(3)
Here, $`\nu =\nu _{obs}(1+z)/(1+z_{obs})`$ is the frequency of the emitted photon (redshift $`z`$) observed at frequency $`\nu _{obs}`$ (redshift $`z_{obs}`$), $`d\ell /dz=(c/H_0)(1+z)^{-2}(1+\mathrm{\Omega }_0z)^{-1/2}`$ is the line element for a Friedmann cosmology, and $`\tau _{\mathrm{eff}}`$ is the effective photoelectric optical depth due to an ensemble of Ly$`\alpha `$ absorption systems. For Poisson-distributed clouds (Paresce, McKee, & Bowyer 1980),
$$\tau _{\mathrm{eff}}(\nu _{obs},z_{obs},z)=\int _{z_{obs}}^z dz^{\prime }\int _0^{\infty }\frac{\partial ^2𝒩}{\partial N_{HI}\,\partial z^{\prime }}\left[1-\mathrm{exp}(-\tau )\right]\,dN_{HI},$$
(4)
where $`\partial ^2𝒩/\partial N_{HI}\partial z^{\prime }`$ is the bivariate distribution of Ly$`\alpha `$ absorbers in column density and redshift, and $`\tau =N_{HI}\sigma (\nu )`$ is the photoelectric (LyC) optical depth at frequency $`\nu `$ due to H, He I, and He II through an individual absorber with column density $`N_{HI}`$. For purposes of assessing the local attenuation length, it is useful (Fardal & Shull 1993) to use the differential form of eq. (4), marking the rate of change of optical depth with redshift,
$$\frac{d\tau _{\mathrm{eff}}}{dz}=\int _0^{\infty }\frac{\partial ^2𝒩}{\partial N_{HI}\,\partial z}\left[1-\mathrm{exp}(-\tau )\right]\,dN_{HI}.$$
(5)
The attenuation length, in redshift units, is given by the reciprocal of $`d\tau _{\mathrm{eff}}/dz`$. At low $`z`$, since $`d\tau _{\mathrm{eff}}/dz\lesssim 1`$ at the hydrogen threshold, its frequency dependence is significant, and the attenuation length can extend to $`z\approx 2`$. In the past few years, more sophisticated solutions to the cosmological transfer equation have been developed (Haardt & Madau 1996; FGS), taking into account cloud emission and self-shielding. Figure 1 illustrates our group’s recent calculation of the ionizing background spectrum, computed in full cosmological radiative transfer with a new IGM opacity model based on high-resolution Keck spectra of the Ly$`\alpha `$ forest and local continua. These models include cloud self-shielding and emission. We have connected this high-redshift opacity model with our new model from HST studies (discussed below) for the low-redshift opacity at a transition redshift $`z=1.9`$. By redshift $`z=0`$, the intensity has declined to $`I_0\approx 1.3\times 10^{-23}`$, corresponding to $`\mathrm{\Phi }_{\mathrm{ion}}\approx 3000`$ photons cm<sup>-2</sup> s<sup>-1</sup> for sources with $`\alpha _s\approx 2`$.
In the work that follows, we compute $`I_\nu `$ using both our detailed cosmological radiative transfer code and an approximate solution to equation (3). In this approximation, we neglect the effects of emission from attenuating absorbers and approximate the opacity with a simple power-law fit that neglects the effects of He absorption. We will discuss the accuracy of this approximation in more detail in § 3. Because the opacity of the IGM is much smaller at low redshift, this more rapid calculation is adequate for estimating the present-day level of radiation just above $`\nu _0`$. The primary ingredients for the computation of $`I_0`$ at low redshift are the source emissivity, $`ϵ(\nu ,z)`$, and the opacity model for $`\tau _{\mathrm{eff}}(\nu _{obs},z_{obs},z)`$. In the following sub-sections, we describe how we determined these quantities.
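The structure of this approximate solution to equation (3) can be sketched as a single quadrature. In the sketch below the emissivity normalization and evolution, the power-law opacity fit, and the quadrature grid are illustrative placeholders chosen by us, not the paper's fitted values; only the cosmology ($`h=0.5`$, $`\mathrm{\Omega }_0=0.2`$, as in the Pei “open” model) follows the text.

```python
# Sketch of the approximate solution to eq. (3): neglect absorber emission,
# approximate tau_eff by a simple power law in redshift, and integrate
# numerically.  The emissivity and opacity parameters below are illustrative
# placeholders, not the fitted values used in the paper.
import math

C_H0 = 1.85e28        # c/H_0 in cm for h = 0.5 (assumed cosmology)
OMEGA0 = 0.2

def dl_dz(z):
    """Proper line element dl/dz (cm) for a Friedmann cosmology."""
    return C_H0 * (1.0 + z) ** -2 * (1.0 + OMEGA0 * z) ** -0.5

def emissivity(z, eps0=1.0e-39):
    """Toy proper volume emissivity at the Lyman limit (erg cm^-3 s^-1 Hz^-1),
    peaking near z ~ 2 (placeholder evolution)."""
    return eps0 * (1.0 + z) ** 3 * math.exp(-((z - 2.0) ** 2) / 2.0)

def tau_eff(z, z_obs=0.0, a=0.1, gamma=2.0):
    """Toy effective LyC opacity accumulated from z_obs to z (placeholder fit)."""
    return a * ((1.0 + z) ** gamma - (1.0 + z_obs) ** gamma)

def I_nu0(z_obs=0.0, z_max=5.0, n=2000):
    """Mean specific intensity at the Lyman limit via trapezoidal quadrature."""
    dz = (z_max - z_obs) / n
    total = 0.0
    for i in range(n + 1):
        z = z_obs + i * dz
        integrand = (dl_dz(z) * (1.0 + z_obs) ** 3 / (1.0 + z) ** 3
                     * emissivity(z) * math.exp(-tau_eff(z, z_obs)))
        total += (0.5 if i in (0, n) else 1.0) * integrand * dz
    return total / (4.0 * math.pi)

print(I_nu0())   # toy value; only the structure of the calculation matters
```

Only the form of the integrand matters here; the full calculation replaces the toy emissivity and opacity with the luminosity-function and absorber models of § 2.1–2.3.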
### 2.1. AGN Luminosity Function and Spectra
The distribution of AGN luminosities is typically described by a rest-frame, B-band luminosity function. In order to estimate the total emissivity of AGN at the Lyman limit, we must know both the luminosity function and the average spectrum of the AGN. In addition, we must know the assumptions about the spectrum that were made to construct the luminosity function.
To estimate the intrinsic AGN quasar spectrum, we begin with the Seyfert optical sample of Cheng et al. (1985), based on Seyfert 1 and 1.5 galaxies covered by the first nine Markarian lists. Their sample was corrected for incompleteness, and the contribution from the host galaxy was subtracted out. The separation of nuclear and host galaxy luminosity becomes increasingly challenging at the faint end of the luminosity function. Ideally, careful, small-aperture photometry would be used for these estimates. Cheng et al. (1985) relied instead on two independent methods to separate the contribution from the nucleus. In the first method, they assumed a template host galaxy and corrected this for orientation and internal extinction. In the second method, they assumed that all nuclei had the same intrinsic colors and determined the nuclear contribution via the color-given method of Sandage (1973). They found these two methods to give consistent results. In addition, they compared the color-given method with nuclear magnitudes derived by careful surface photometry for 11 Seyfert 1 galaxies (Yee 1983). They assigned a total uncertainty of $`0.5`$ mag in the nuclear $`M_B`$ to the sample. On re-examination of the sample, we found that the errors most likely decrease with the luminosity of the galaxy. We assume that the errors on the specific luminosity, $`L_B`$ (ergs s<sup>-1</sup> Hz<sup>-1</sup>), decrease linearly from 0.24 dex at $`\mathrm{log}L_B=28`$ to 0.16 dex at $`\mathrm{log}L_B=30`$.
From this sample of Seyferts, we chose 27 objects observed repeatedly by the International Ultraviolet Explorer (IUE) satellite. Together with many other AGN, these Seyfert galaxies are part of the Colorado IUE-AGN database (Penton, Shull, & Edelson 1996), which gives both mean and median spectra. Since these AGN are subject to flux variability, the distribution in flux is a skewed distribution with a tail that includes short flares studied by various IUE campaigns. To provide a conservative estimate of the ionizing fluxes, we have therefore used median spectra to derive correlations; however, the differences in the correlations are only a few percent. The line-free regions of the median IUE spectra were fitted to power-law continua and extrapolated to 912 Å (rest-frame), from which we derive the specific luminosity, $`L_{912}`$ (ergs s<sup>-1</sup> Hz<sup>-1</sup>). We also convert from $`M_B`$ to $`L_B`$, at $`\nu =6.81\times 10^{14}`$ Hz (4400 Å) by the formula derived from Weedman (1986, eqs. 3.15 and 3.16), $`\mathrm{log}L_B=0.4(51.79-M_B)`$. Figure 2 shows the correlation between $`L_{912}`$ and $`L_B`$. The error bars are only shown for $`L_B`$, since the errors in $`L_{912}`$ are much smaller. We find that $`L_{912}=(2.60\pm 0.22)\times 10^{28}(L_B/10^{29})^{(1.114\pm 0.081)}`$ erg s<sup>-1</sup> Hz<sup>-1</sup>. This translates to a UV-optical spectral slope that depends on $`L_B`$ as
$$\alpha _{\mathrm{UV}}=(0.86\pm 0.05)-(0.16\pm 0.12)\,\mathrm{log}(L_B/10^{29}).$$
(6)
The evidence for $`L_B`$ dependence is marginal, however, and our basic model will simply assume a constant slope $`\alpha _{\mathrm{UV}}=0.86`$ between the B band and the Lyman limit. This agrees quite well with the average QSO spectrum derived by Zheng et al. (1997).
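The constant slope $`\alpha _{\mathrm{UV}}\approx 0.86`$ and its weak $`L_B`$ dependence follow directly from the fitted $`L_{912}(L_B)`$ relation. A quick check in Python, using the 4400 Å and 912 Å frequencies quoted in the text:

```python
# Recover the UV-optical slope of eq. (6) from the power-law fit
# L_912 = 2.60e28 (L_B / 1e29)^1.114 between 4400 A and 912 A.
import math

NU_B = 6.81e14    # Hz, 4400 A (B band)
NU_0 = 3.29e15    # Hz, 912 A (Lyman limit)

def L912(LB):
    """Fitted 912 A specific luminosity (erg s^-1 Hz^-1) vs. L_B."""
    return 2.60e28 * (LB / 1.0e29) ** 1.114

def alpha_uv(LB):
    """Slope alpha_UV in L_nu ~ nu^(-alpha_UV) between nu_B and nu_0."""
    return math.log(LB / L912(LB)) / math.log(NU_0 / NU_B)

print(alpha_uv(1.0e29))                      # ~0.86, the constant term of eq. (6)
print(alpha_uv(1.0e30) - alpha_uv(1.0e29))   # ~ -0.17 per dex in L_B
```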
The second ingredient in the computation of the AGN emissivity is the luminosity function. The function that matches observations over the broadest redshift and luminosity range is the comoving analytic form given by Pei (1995, eqs. 6–8),
$$\mathrm{\Phi }(L,z)=\frac{\mathrm{\Phi }_{\ast }/L_z}{(L/L_z)^{\beta _l}+(L/L_z)^{\beta _h}},$$
(7)
where the characteristic “break luminosity” is given by
$`L_z`$ $`=`$ $`L_{\ast }(1+z)^{(1-\alpha _{\mathrm{UV}})}\mathrm{exp}\left[-(z-z_{\ast })^2/2\sigma _{\ast }^2\right]`$ (8)
$`=`$ $`L_0(1+z)^{(1-\alpha _{\mathrm{UV}})}\mathrm{exp}\left[-z(z-2z_{\ast })/2\sigma _{\ast }^2\right].`$
Here, $`\alpha _{\mathrm{UV}}`$ is the UV-optical spectral index and $`L_0=L_{\ast }\mathrm{exp}(-z_{\ast }^2/2\sigma _{\ast }^2)`$ is the “break luminosity”, $`L_z`$, at the present epoch. Note that we define our spectral index with the opposite sign to that assumed by Pei, i.e., we adopt $`L_\nu \propto \nu ^{-\alpha _{\mathrm{UV}}}`$ where most data suggest that $`0.5<\alpha _{\mathrm{UV}}<1`$. We present results for luminosity functions based on the two sets of assumptions about the cosmology and optical spectral index $`\alpha _{\mathrm{UV}}`$ derived by Pei. The “open model” has $`h=0.5`$, $`\mathrm{\Omega }_0=0.2`$, and $`\alpha _{\mathrm{UV}}=1.0`$ and yields $`\beta _l=1.83`$, $`\beta _h=3.70`$, $`z_{\ast }=2.77`$, $`\mathrm{log}(L_{\ast }/L_{\odot })=13.42`$, $`\sigma _{\ast }=0.91`$, and $`\mathrm{log}(\mathrm{\Phi }_{\ast }/\mathrm{Mpc}^{-3})=-6.63`$. The “closed universe” model has $`h=0.5`$, $`\mathrm{\Omega }_0=1`$, and $`\alpha _{\mathrm{UV}}=0.5`$ and yields $`\beta _l=1.64`$, $`\beta _h=3.52`$, $`z_{\ast }=2.75`$, $`\mathrm{log}(L_{\ast }/L_{\odot })=13.03`$, $`\sigma _{\ast }=0.93`$, and $`\mathrm{log}(\mathrm{\Phi }_{\ast }/\mathrm{Mpc}^{-3})=-6.05`$. Figure 3 shows these two models at $`z=0`$. After correcting for the different spectral indices, there is no physical reason why these two models should differ in the ionizing intensity they imply or in their value as $`z\to 0`$. However, the intensity estimates from these models differ by up to $`40\%`$ (see FGS). This points to substantial uncertainties in the Pei fit that are not reflected in the small formal errors.
We integrate the ionizing luminosity density over the luminosity function from $`L_{\mathrm{min}}`$ to $`L_{\mathrm{max}}`$. In our standard model, we assume that these limits are $`0.01L_z`$ and $`10L_z`$, respectively, and we explore the sensitivity of the results to $`L_{\mathrm{min}}`$. If we define $`x=L/L_z`$, we can write the comoving specific volume emissivity as
$$ϵ(\nu ,z)=\mathrm{\Phi }_{\ast }\left(\frac{\nu _0}{\nu _B}\right)^{-\alpha _{\mathrm{UV}}}\left(\frac{\nu }{\nu _0}\right)^{-\alpha _s}L_z\int _{x_{\mathrm{min}}}^{x_{\mathrm{max}}}\frac{x}{x^{\beta _l}+x^{\beta _h}}\,dx,$$
(9)
where we assume an EUV power-law spectral index $`\alpha _s`$ for the AGN and integrate from $`x_{\mathrm{min}}=L_{\mathrm{min}}/L_z`$ to $`x_{\mathrm{max}}=L_{\mathrm{max}}/L_z`$. With $`x_{\mathrm{min}}=0.01`$ and $`x_{\mathrm{max}}=10`$, our results are insensitive to $`x_{\mathrm{max}}`$ and moderately insensitive to $`x_{\mathrm{min}}`$. An increase (decrease) of $`x_{\mathrm{min}}`$ by a factor of 3 leads to a decrease (increase) of the calculated emissivity by $`15\%`$. A similar change of $`x_{\mathrm{max}}`$ changes the emissivity by only 2%. From several trials, we find an empirical scaling relation, $`I_0^{\mathrm{AGN}}\propto (L_{\mathrm{min}}/0.01L_{\ast })^{-0.17}(\alpha _s/1.8)^{-0.97}`$. To better assess the uncertainty from the extrapolation to bright and faint sources, we use Figure 2 of Pei (1995), which shows the range of B luminosities that contribute to the luminosity function fit. We define the “completeness” as the integral of the emissivity over this range, compared with the integral for the standard range $`0.01<x<10`$. This completeness rises from 20% at $`z=0`$ to 80% at $`z\approx 2`$, but falls to 20% at $`z\approx 4`$. The low-$`z`$ results can be made more robust by considering additional surveys of Seyferts and QSOs.
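The quoted sensitivity to $`L_{\mathrm{min}}`$ can be reproduced by evaluating the dimensionless integral of eq. (9) numerically. A sketch, using the Pei “open”-model slopes:

```python
# Numerically check the sensitivity of the eq.-(9) integral
# int x / (x^beta_l + x^beta_h) dx to L_min, using the Pei "open"-model
# slopes beta_l = 1.83 and beta_h = 3.70.
import math

def lum_integral(x_min=0.01, x_max=10.0, beta_l=1.83, beta_h=3.70, n=20000):
    """Trapezoidal quadrature in u = ln x (so dx = x du)."""
    u_min, u_max = math.log(x_min), math.log(x_max)
    du = (u_max - u_min) / n
    total = 0.0
    for i in range(n + 1):
        x = math.exp(u_min + i * du)
        f = x * x / (x ** beta_l + x ** beta_h)   # extra factor x from dx = x du
        total += (0.5 if i in (0, n) else 1.0) * f * du
    return total

base = lum_integral()
up = lum_integral(x_min=0.03)     # raise L_min by a factor of 3
print((base - up) / base)         # fractional decrease (the text quotes ~15%)
```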
Figure 3 shows the B-magnitude luminosity function from the Cheng et al. (1985) sample, as well as the results from the survey of Köhler et al. (1997) for Seyferts and QSOs with $`z<3`$. These results are also compatible with the results of the local optical luminosity function based on an X-ray selected sample of AGN (Della Ceca et al. 1996). There are some discrepancies, however, between these results and the Pei luminosity function. There are more QSOs at the bright end of the luminosity function at $`z<1`$ than in the Pei form, an effect that has been noted by several authors (Goldschmidt et al. 1992; Goldschmidt & Miller 1998; La Franca & Cristiani 1997). This appears to have its origin in systematic errors in the Schmidt & Green (1983) QSO survey. In addition, there appears to be a slight deficit at the knee of the Pei function in both the Köhler et al. and Cheng et al. samples. In fact, the Köhler results are fitted adequately with a single power law, as shown in Fig. 3.
It turns out that these two effects cancel to within $`5\%`$ when we compute the total B-band emissivity over the range $`0.01<L_B/L_z(0)<10`$. However, the fact that the luminosity function appears to be changing shape weakens the assumptions leading to the Pei function, and the convergence at the faint end becomes even slower.
Hence, our best estimate of the emissivity is unchanged from the Pei (1995) model, but we suspect that there are still substantial uncertainties in the AGN luminosity function. In Fig. 4 we plot the emissivity from the Pei “open” model. To find a reasonable range of emissivity models, we multiply (or divide) this by the square root of the “completeness” of the Pei model as derived above, and we also change the spectral slope $`\alpha _{\mathrm{UV}}`$ and its dependence on $`L_B`$ by one standard deviation to minimize (or maximize) the ionizing emissivity. We consider the band thus defined to be a conservative 1$`\sigma `$ range for the emissivity. Note that the Pei “closed” model, adjusted for the different spectral index and cosmology, lies well within this range.
The average UV QSO spectrum derived by Zheng et al. (1997) implies an EUV spectral index $`\alpha _s=1.77\pm 0.15`$ for radio-quiet QSOs and $`\alpha _s=2.16\pm 0.15`$ for radio-loud quasars. Since radio-quiet QSOs are much more numerous, we select $`\alpha _s=1.8\pm 0.15`$ as a representative measure of the EUV spectral index.
### 2.2. Stellar Contributions to the Ionizing Emissivity
The contribution of stars within galaxies to the ionizing background remains almost completely unknown at all epochs. It has been realized (cf. Madau & Shull 1996) that the number of ionizing photons associated with the production via supernovae of the observed amount of metals in the universe might easily exceed the ionizing photons produced by AGNs. The problem has been to estimate the average fraction, $`f_{esc}`$, of ionizing photons produced by O and B stars that escape the galaxies into the IGM. Thus far, observational limits on $`f_{esc}`$ exist from only a few nearby galaxies (Leitherer et al. 1995; Hurwitz et al. 1997). On theoretical grounds, Dove & Shull (1994b) concluded that an escape fraction of order 10% might be possible, while more recent models of the escape of ionizing photons through supershell chimneys (Dove, Shull, & Ferrara 1999) suggest fractions of 3–6%, which are compatible with the observational limits. Recently, Bland-Hawthorn & Maloney (1999) used measurements of H$`\alpha `$ from the Magellanic Stream to infer an escape fraction $`\approx `$ 6% from the Milky Way.
Deharveng et al. (1997) argued that even the present indirect upper limits on $`I_0`$ must limit the present-day escape fraction to below 1%. They depart radically, however, from the work of Dove & Shull (1994b) and Dove et al. (1999) in their treatment of the nature of the escape fraction. Deharveng et al. assume that stellar ionizing photons are prevented from escaping their host galaxies by the opacity of neutral hydrogen, effectively in the limit that the hydrogen forms an unbroken sheet. Consequently, the opacity varies approximately as $`(\nu /\nu _0)^{-3}`$, so that the escape of higher-energy photons is dramatically enhanced. For example, the optical depth can decrease from $`\tau \approx 50`$ at the Lyman edge to $`\tau <1`$ at 4 rydbergs. For photons emitted at high redshift that survive to contribute to $`I_0`$ at $`z=0`$, the sharp increase in the escape fraction outweighs the effects of redshifting. In the Deharveng picture, the contribution of high-$`z`$ galaxies to the present-day mean intensity at $`\nu =\nu _0`$ is strongly enhanced. Thus, if $`f_{esc}\gtrsim 0.001`$, starbursts may match the contribution of quasars to the present-day ionizing mean intensity.
However, we believe the Deharveng model for photon escape is physically unrealistic. Our alternative view is that the internal galactic opacity to all stellar photons is large, and photons may only escape from isolated regions of high star formation whose H II regions or attendant supershells are able to break through this high opacity layer (Dove et al. 1999). Within this view, a constant escape fraction with frequency is more appropriate.
Our estimate of the stellar ionizing photons is made in the following way. Gallego et al. (1995) performed an H$`\alpha `$ survey of galaxies and fitted their derived luminosity function to a Schechter function. By integrating this, they were able to estimate a total H$`\alpha `$ luminosity per unit volume at low redshift, $`L_{H\alpha }=10^{39.1\pm 0.2}`$ erg s<sup>-1</sup> Mpc<sup>-3</sup>. In the usual fashion, we relate this to the number of ionizing photons by dividing the number of H$`\alpha `$ photons by the fraction of the total H recombinations that produce an H$`\alpha `$ photon. We multiply their estimate of the ionizing photon production by $`f_{esc}`$ to give the total ionizing photons in the IGM. The representative models assume $`f_{esc}=0.05`$ and are shown in Figure 5. Preliminary results are now available from the KPNO International Spectroscopic Survey (KISS) (Gronwall et al. 1998), which probes to fainter magnitudes than the Gallego et al. survey. Gronwall (1998) quotes a value $`L_{H\alpha }=10^{39.03}`$ erg s<sup>-1</sup> Mpc<sup>-3</sup>, but notes that these results are preliminary and represent a lower limit to the true H$`\alpha `$ density, since even their deeper survey is incomplete for galaxies with H$`\alpha `$ emission-line equivalent widths less than 25 Å.
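As a rough illustration of this conversion, the following sketch turns the Gallego et al. H$`\alpha `$ luminosity density into an escaping ionizing-photon emissivity. The case-B fraction of recombinations that yield an H$`\alpha `$ photon (taken here as $`\approx 0.45`$) is a standard assumption we supply, not a value quoted in the text; $`f_{esc}=0.05`$ is the representative value adopted above.

```python
# Convert the Gallego et al. H-alpha luminosity density into an escaping
# ionizing-photon emissivity.  The case-B fraction of recombinations that
# yield an H-alpha photon (~0.45) is an assumed standard value.
H_PLANCK = 6.626e-27                        # erg s
C_LIGHT = 2.998e10                          # cm s^-1
E_HALPHA = H_PLANCK * C_LIGHT / 6563.0e-8   # erg per H-alpha photon

def ionizing_emissivity(log_L_Halpha=39.1, f_Halpha=0.45, f_esc=0.05):
    """Escaping ionizing photons s^-1 Mpc^-3 from an H-alpha luminosity
    density given in erg s^-1 Mpc^-3."""
    n_halpha = 10.0 ** log_L_Halpha / E_HALPHA   # H-alpha photons s^-1 Mpc^-3
    n_ion = n_halpha / f_Halpha                  # total ionizing photons absorbed
    return f_esc * n_ion

print(ionizing_emissivity())   # ~5e49 photons s^-1 Mpc^-3 for f_esc = 0.05
```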
The spectrum of ionizing photons from clusters of hot stars differs from that of AGN in that stars emit relatively few photons more energetic than 4 Ryd. Sutherland & Shull (1999) have shown that, between 1 and 4 Ryd, the spectrum of a starburst may be approximated as a power law with spectral index $`\alpha _s\approx 1.9`$–2.2.
We have tied the redshift evolution in ionizing photon production rate to the star formation evolution observations of Connolly et al. (1997) based upon the Hubble Deep Field (HDF). The effects of dust extinction remain a major uncertainty in the determination of the star formation rate at high redshift (cf. Pettini et al. 1998). Many high-$`z`$ star-forming galaxies appear to be dust-obscured, based on recent sub-millimeter studies of the HDF (Hughes et al. 1998; Barger et al. 1998). In addition, a survey of Lyman-break galaxies at $`z\approx 3`$–4 (Steidel et al. 1999), covering a much larger angular extent than the HDF, finds no significant difference in the star formation rate at $`z=3`$ and $`z=4`$. With corrections for dust extinction, the star formation could remain constant from $`z=1.5`$ out to $`z>4`$. As a result, we have also considered a case in which the star formation remains constant after reaching its peak at $`z\approx 2`$. However, despite the many uncertainties of high-$`z`$ star formation rates, the effects on the present-day level of ionization are minimal, of order a few percent.
### 2.3. Absorption Model for the IGM
At $`z<2`$, the redshift densities of the Ly$`\alpha `$ clouds and Lyman limit systems decline steeply with cosmic time. Morris et al. (1991) and Bahcall et al. (1991) used HST observations to show that this decline could not be extrapolated to the present, as far too many Ly$`\alpha `$ absorbers were observed toward 3C 273. The large data set gathered by the Hubble Key Project with the Faint Object Spectrograph (FOS) shows a sharp break in the evolving redshift density at $`z\approx 1.5`$–2 (Weymann et al. 1998). Ikeuchi & Turner (1991) showed that the cessation in this steep decline was a natural consequence of the falloff in the ionizing emissivity from $`z=2`$ down to 0. This conclusion has been borne out by detailed cosmological simulations (Davé et al. 1999), which indicate that the effect is insensitive to the specific cosmological model.
In our calculations, we will base our absorption model on observations. Our analysis follows the traditional “line-counting” method, where spectral lines are identified by Voigt profile-fitting and the opacity is calculated by assuming a Poisson distribution of these lines. At high redshift, FGS used this method to estimate the opacity based on high-resolution observations of QSO Ly$`\alpha `$ absorption lines from Keck and other large-aperture telescopes in the redshift range $`2\lesssim z\lesssim 4`$. It is not appropriate to extrapolate this model to lower redshifts, owing to the rapid evolution rate of the Ly$`\alpha `$ forest.
Our method of determining $`d\tau _{\mathrm{eff}}/dz`$ considers Ly$`\alpha `$ lines in three ranges of column density: from $`12.5<\mathrm{log}N_{HI}<14.0`$ (HST/GHRS survey), from $`14.0<\mathrm{log}N_{HI}<16`$ (HST/FOS survey), and for $`\mathrm{log}N_{HI}>17`$ (HST/FOS Lyman-limit survey). The HST/FOS survey forms the core of our standard opacity model. We combine these results with HST/GHRS measurements of weak lines and with the HST/FOS survey of Lyman-limit systems, extrapolated downward from the Lyman limit by two different methods. At redshifts $`z<1.5`$, the most extensive study of strong Ly$`\alpha `$ absorbers ($`10^{14}`$–$`10^{16}`$ cm<sup>-2</sup>) is the QSO Absorption Line Key Project with HST/FOS (Jannuzi et al. 1998; Weymann et al. 1998). Weaker Ly$`\alpha `$ lines, which contribute a small amount to the opacity and serve as a constraint on the column density distribution, were studied by Shull (1997), Shull, Penton, & Stocke (1999), and Penton et al. (1999) using HST/GHRS spectra. The distribution of Lyman limit systems with $`N_{HI}>10^{17}`$ cm<sup>-2</sup> is discussed by Stengler-Larrea et al. (1995) and Storrie-Lombardi et al. (1994). Each of these surveys suffers from incompleteness or saturation effects in various regimes. Therefore, extrapolations outside the range of $`N_{HI}`$ and comparisons in regimes of overlap are helpful. Extensive future UV surveys with HST and FUSE will also reduce some of the uncertainties (see discussion in § 4).
HST/GHRS studies of Ly$`\alpha `$ absorbers in the range $`12.5\le \mathrm{log}N_{HI}\le 14.0`$ (Penton et al. 1999) find a column density distribution, $`d𝒩/dN_{HI}\propto N_{HI}^{-1.74\pm 0.26}`$. The cumulative opacity of these weak lines (up to $`10^{14}`$ cm<sup>-2</sup>) is relatively small: $`d\tau _{\mathrm{eff}}/dz=0.025\pm 0.005`$ for the low-redshift range ($`0.003<z<0.07`$). However, a small number of higher column density systems produce a steady rise in the cumulative opacity for $`\mathrm{log}N_{HI}>14`$. Extending the HST/GHRS distribution up to $`\mathrm{log}N_{HI}=15`$ gives $`d\tau _{\mathrm{eff}}/dz\approx 0.09\pm 0.02`$. Above this column density, saturation effects and small-number statistics make the number counts more imprecise. In the range $`15<\mathrm{log}N_{HI}<16`$, Penton et al. (1999) estimate an additional contribution, $`d\tau _{\mathrm{eff}}/dz\approx 0.1`$–0.3.
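The way the opacity integral of eq. (5) builds up with column density can be illustrated with a single power law of the HST/GHRS slope. In the sketch below the normalization $`A`$ is chosen by us to reproduce $`d\tau _{\mathrm{eff}}/dz=0.025`$ over $`12.5<\mathrm{log}N_{HI}<14`$ (an assumed calibration), and the extrapolated values are sensitive to the slope within its quoted $`\pm 0.26`$ uncertainty, so the numbers are illustrative only.

```python
# Build up d(tau_eff)/dz of eq. (5) from a single power-law column-density
# distribution d^2N / dN_HI dz = A * N_HI^-1.74 (the HST/GHRS slope).
# The normalization A is *chosen* to reproduce d(tau)/dz = 0.025 over
# 12.5 < log N_HI < 14 -- an assumed calibration, not a fitted value.
import math

SIGMA_0 = 6.3e-18   # cm^2, H I LyC cross section at threshold
BETA = 1.74         # column-density distribution slope

def dtau_dz(A, logN_lo, logN_hi, n=4000):
    """Trapezoidal quadrature of eq. (5) over log10 N_HI."""
    du = (logN_hi - logN_lo) / n
    total = 0.0
    for i in range(n + 1):
        N = 10.0 ** (logN_lo + i * du)
        f = A * N ** -BETA * (1.0 - math.exp(-N * SIGMA_0))
        # dN_HI = N ln(10) du when integrating over u = log10 N_HI
        total += (0.5 if i in (0, n) else 1.0) * f * N * math.log(10.0) * du
    return total

A = 0.025 / dtau_dz(1.0, 12.5, 14.0)    # normalize to the weak-line opacity
print(dtau_dz(A, 12.5, 14.0))           # 0.025 by construction
print(dtau_dz(A, 12.5, 15.0))           # grows as stronger lines are included
print(dtau_dz(A, 12.5, 16.0))           # and continues to grow toward 10^16
```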
The HST/FOS Key Project spectra have insufficient resolution to determine line widths or to resolve velocity components. As a result, the conversion from equivalent width, $`W_\lambda `$, to column density, $`N_{HI}`$, is difficult for saturated lines. In the absence of other lines (e.g., Ly$`\beta `$), one can only estimate the conversion from $`W_\lambda `$ to $`N_{HI}`$ by assuming a curve of growth and Doppler parameter. This difficulty was noted by Hurwitz et al. (1998), who found unexpectedly strong ORFEUS Ly$`\beta `$ absorption compared to predictions from HST Ly$`\alpha `$ lines toward 3C 273. For unsaturated lines, $`W_\lambda =(54.4\mathrm{m}\mathrm{\AA })N_{13}`$, where $`N_{HI}=(10^{13}\mathrm{cm}^{-2})N_{13}`$ and where the line-center optical depth is $`\tau _0=(0.303)N_{13}b_{25}^{-1}`$ for a Doppler parameter $`b=(25\mathrm{km}\mathrm{s}^{-1})b_{25}`$. The HST/FOS Key Project lines with $`W_\lambda \ge 240`$ mÅ are highly saturated in the range ($`14\le \mathrm{log}N_{HI}\le 17`$) that dominates the Ly$`\alpha `$ forest’s contribution to the continuum opacity.
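For reference, the linear-regime relations just quoted can be wrapped in a few lines of code. This is only a sketch (the helper names are ours), and the equivalent-width relation holds only while the line is unsaturated:

```python
# Linear (unsaturated) curve-of-growth relations for HI Ly-alpha,
# following the scalings quoted in the text:
#   W_lambda = (54.4 mA) * N_13        (equivalent width)
#   tau_0    = 0.303 * N_13 / b_25     (line-center optical depth)
# where N_13 = N_HI / 1e13 cm^-2 and b_25 = b / (25 km/s).

def equiv_width_mA(N_HI):
    """Equivalent width in milli-Angstroms (valid only while unsaturated)."""
    return 54.4 * (N_HI / 1e13)

def tau_center(N_HI, b=25.0):
    """Line-center optical depth for Doppler parameter b in km/s."""
    return 0.303 * (N_HI / 1e13) / (b / 25.0)

if __name__ == "__main__":
    # A 10^13 cm^-2 absorber: W ~ 54 mA, tau_0 ~ 0.3 (marginally optically thin)
    print(equiv_width_mA(1e13), tau_center(1e13))
    # By N_HI ~ 10^14.5, tau_0 >> 1, so the linear relation fails -- which is
    # why the FOS lines with W > 240 mA are hard to convert to N_HI.
    print(tau_center(10**14.5))
```

This makes explicit why a Doppler parameter must be assumed for the strong Key Project lines: once `tau_center` exceeds unity, $`W_\lambda `$ grows only logarithmically with column density.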
As a first attempt to incorporate the Key Project information, we focus on the statistical frequency of Ly$`\alpha `$ absorbers, $`d𝒩/dz=30.7\pm 4.2`$, for lines with $`W_\lambda >240`$ mÅ. For $`b=25\mathrm{km}\mathrm{s}^{-1}`$, this corresponds roughly to $`\mathrm{log}N_{HI}>14`$. We compute opacities based upon sample 5 of Weymann et al. (1998), which includes 465 absorption lines ($`W_\lambda >240`$ mÅ) that could not be matched with corresponding metal lines. This sample was intended to remove high column density lines that may evolve more rapidly with redshift, consistent with the results of Stengler-Larrea et al. (1995) for Lyman limit systems. This segregation has little effect on the opacity. The Key Project became incomplete at column densities well below the Lyman limit. Therefore, to determine an IGM opacity, we needed to extrapolate to $`\mathrm{log}N_{HI}=17`$ using assumptions about the distribution.
The bivariate distribution of Ly$`\alpha `$ absorbers per unit redshift and unit column density can be expressed as $`\partial ^2𝒩/\partial z\partial N_{HI}=A(N_{HI}/10^{17}\mathrm{cm}^{-2})^{-\beta }(1+z)^\gamma `$. Improvements on this form have been suggested, notably the addition of one or more breaks in the power-law distribution (Petitjean et al. 1993; FGS). Here, we parameterize our uncertainty by assuming just one power-law index, but we vary the upper limit on column density to which we integrate. Figure 6 shows a set of curves, corresponding to $`\beta =1.5\pm 0.2`$, of the differential effective opacity, $`d\tau _{\mathrm{eff}}/dz`$, evaluated at $`z=0`$ and at the hydrogen threshold. Assuming $`\beta =1.5`$ and no break in the distribution, we find $`d\tau _{\mathrm{eff}}/dz\approx 0.2`$ for the FOS range $`10^{14}`$–$`10^{16}`$ cm<sup>-2</sup> and $`d\tau _{\mathrm{eff}}/dz\approx 0.5`$ for the expanded range $`10^{14}`$–$`10^{17}`$ cm<sup>-2</sup>. Because $`\gamma =0.15\pm 0.23`$ for this FOS sample, the opacity does not change greatly with redshift. We choose a standard value $`d\tau _{\mathrm{eff}}/dz\approx 0.5`$, corresponding to $`\beta =1.5`$ and $`N_{\mathrm{max}}=10^{17}\mathrm{cm}^{-2}`$.
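As a concreteness check (this is our sketch, not the authors' code), the effective opacity implied by a single unbroken power law can be integrated numerically as dτ_eff/dz = ∫ f(N) [1 − exp(−Nσ)] dN. Using the z = 0 normalization A = 0.501 from Table 2 and the Lyman-edge cross section σ_HI ≈ 6.3 × 10⁻¹⁸ cm², the integral reproduces the ≈0.2 and ≈0.5 values quoted above:

```python
import numpy as np

SIGMA0 = 6.30e-18  # HI photoionization cross section at the Lyman edge, cm^2
N0 = 1.0e17        # pivot column density, cm^-2

def dtau_eff_dz(N_min, N_max, A=0.501, beta=1.5, z=0.0, gamma=0.15, npts=4000):
    """Effective continuum opacity per unit redshift at the Lyman edge,
    d(tau_eff)/dz = int f(N) [1 - exp(-N*sigma)] dN,
    for a single power law f(N) = (A/N0) (N/N0)**(-beta) (1+z)**gamma.
    A = 0.501 is the z = 0 normalization quoted in Table 2."""
    logN = np.linspace(np.log(N_min), np.log(N_max), npts)
    N = np.exp(logN)
    f = (A / N0) * (N / N0) ** (-beta) * (1.0 + z) ** gamma
    y = f * (1.0 - np.exp(-N * SIGMA0)) * N  # extra N because dN = N dlogN
    # trapezoid rule in log N (avoids numpy version differences for trapz)
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(logN))

if __name__ == "__main__":
    print(dtau_eff_dz(1e14, 1e16))  # ~0.2, the FOS-range estimate
    print(dtau_eff_dz(1e14, 1e17))  # ~0.5, our standard-model choice
```

The saturation factor [1 − exp(−Nσ)] is what makes the rare, nearly optically thick systems near 10¹⁷ cm⁻² dominate the opacity budget.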
The Lyman-limit data (Stengler-Larrea et al. 1995) can be fitted to the form
$$\frac{d\tau _{\mathrm{eff}}}{dz}=0.263(1+z)^{1.50},$$
(10)
assuming that $`\partial ^2𝒩/\partial z\partial N_{HI}\propto N_{HI}^{-1.5}(1+z)^{1.50}`$. In absorption model 1, we took the lower limit on $`N_{HI}`$ for partial LL systems to be $`N_l=10^{17}`$ cm<sup>-2</sup>. If this limit is extended down to $`10^{16}`$ cm<sup>-2</sup> or $`10^{15.5}`$ cm<sup>-2</sup>, the coefficient 0.263 in eq. (10) increases to 0.382 and 0.411, respectively; the latter choice becomes our Model 2.
We summarize our three opacity models in Table 2. We use equation (5) to calculate the opacity, but include an approximation of the frequency dependence of the opacity over the range 1–3 Ryd, which dominates the H I photoionization rate. This approximation takes the form
$$\frac{d\tau _{\mathrm{eff}}}{dz}=c_i\left(\frac{\nu }{\nu _0}\right)^{s_i}(1+z)^{\gamma _i}.$$
(11)
In Figure 7, we compare $`d\tau _{\mathrm{eff}}/dz`$ at $`\nu =\nu _0`$ for the three models described above, as well as low-redshift extrapolations of the opacities of FGS and Haardt & Madau (1996). It can be seen that the poorly determined column-density distribution of the Ly$`\alpha `$ forest leaves a large uncertainty in the total opacity, even though the evolution of the number density is tightly constrained by the HST observations. The partial LL systems ($`16\mathrm{log}N_{HI}17.5`$) probably dominate the IGM opacity at low redshift, but they are so rare that statistical fluctuations from sightline to sightline are quite large. It will require many high signal-to-noise spectra along low redshift lines of sight to reduce the uncertainty in their contribution to the opacity.
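For convenience, the three-component standard model of Table 2, inserted into eq. (11), can be evaluated directly. The sketch below simply sums the tabulated (c_i, s_i, γ_i) triples for the weak forest, the 10¹⁴–10¹⁷ cm⁻² forest, and the Lyman-limit systems:

```python
# Standard opacity model of Table 2, inserted into eq. (11):
#   d(tau_eff)/dz = sum_i c_i * (nu/nu0)**s_i * (1+z)**gamma_i
# Components: weak forest (<1e14), forest 1e14-1e17, LL systems (>1e17).
STANDARD = [  # (c_i, s_i, gamma_i)
    (0.010, -2.81, 0.15),
    (0.553, -2.73, 0.15),
    (0.263, -1.04, 1.50),
]

def dtau_dz(nu_over_nu0=1.0, z=0.0, components=STANDARD):
    """Differential effective opacity for a list of power-law components."""
    return sum(c * nu_over_nu0 ** s * (1.0 + z) ** g for c, s, g in components)

if __name__ == "__main__":
    print(dtau_dz())                 # 0.826 at the Lyman edge, z = 0
    print(dtau_dz(nu_over_nu0=4.0))  # much smaller at 4 Ryd
```

Note how the steep $`s_i`$ indices of the forest components make the IGM far more transparent at 4 Ryd than at the Lyman edge, while the shallower Lyman-limit term dominates the residual high-frequency opacity.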
## 3. RESULTS: THE IONIZING RADIATION FIELD
### 3.1. The Contribution from AGN
Our best-estimate model for the present-day intensity $`I_0`$ makes the following four assumptions: (1) The AGN distribution is described by the Pei (1995) QSO luminosity function with $`h=0.5`$ and $`\mathrm{\Omega }_0=0.2`$, modified by the assumption that the optical-UV spectral index $`\alpha _{\mathrm{UV}}=0.86`$, but with no correction for intervening dust. (2) The lower (upper) cutoffs to the luminosity function are $`L_{\mathrm{min}(\mathrm{max})}=0.01(10)L_{*}`$. (3) The opacity below $`z\approx 1.9`$ is our “standard model,” itself based upon HST observations of Ly$`\alpha `$ forest and Lyman Limit systems. Above $`z\approx 1.9`$, it is Model A2 from FGS. (4) The ionizing spectrum ($`\nu \ge \nu _0`$) has spectral index $`\alpha _s=1.8`$. Using the full radiative transfer calculation outlined in FGS, we find
$$I_0=1.3_{-0.5}^{+0.8}\times 10^{-23}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{Hz}^{-1}\mathrm{sr}^{-1}.$$
(12)
As we discuss below, most of the uncertainties in our estimate are essentially multiplicative. The uncertainties quoted above represent the addition in quadrature of the uncertainties in the log of the factors discussed below. Direct addition in quadrature of the relative uncertainties gives a similar range of uncertainty. Although, strictly speaking, none of the uncertainties is independent, we treat them as if they were in computing the total uncertainty.
Figure 4 summarizes the uncertainties in the emissivity due to AGN that pertain to assumptions (1) and (2). The corresponding 1 $`\sigma `$ uncertainty in the calculated specific intensity for the $`\mathrm{\Omega }_0=0.2`$ case is $`\pm 0.19`$ dex. As FGS noted, the intensity should be independent of the choice of $`H_0`$ or $`\mathrm{\Omega }_0`$ if the emissivity is actually determined observationally. The AGN luminosity function is propagated from low to high redshift, assuming pure luminosity evolution, a parameterization that depends only on $`z`$, and not explicitly on appropriate cosmological distances and luminosity evolution parameters. The difference in the calculated mean intensity for a closed universe ($`\mathrm{\Omega }_0=1`$) is not an independent source of uncertainty. However, we estimate an uncertainty of $`\pm 0.01`$ dex, which is negligible.
The uncertainties inherent in our choice of opacity model appear both in the degree of attenuation of ionizing photons due to the absorbers, and in the contribution of diffuse ionizing radiation from the absorbers (mainly He II ionizing photons reprocessed to He II Ly$`\alpha `$ and two-photon radiation). Using Models 1 and 2 instead of our standard model, we calculate slightly higher levels of $`I_0`$, by $`0.0068`$ and $`0.0329`$ dex. A sample standard deviation of the models is $`\pm 0.017`$ dex and is probably an adequate assessment of the $`1\sigma `$ uncertainty in $`I_0`$ due to the opacity. A more complete picture of the full (at least several $`\sigma `$) uncertainty may be obtained from the following extreme cases. The diffuse ionizing radiation from absorbers contributes 20% of the ionizing intensity at $`z=0`$ (see Fig. 8a), so eliminating this contribution reduces $`I_0`$ by $`0.1`$ dex. If there is no opacity at all, $`I_0`$ increases by $`0.26`$ dex. If the column density distribution for the absorbers in our standard model is assumed to have an unbroken power law with $`\beta =1.3`$ rather than 1.5, the reduction in $`I_0`$ is $`0.21`$ dex. The shape of the spectrum of AGN shortward of $`912`$ Å is a relatively small source of uncertainty. As $`\alpha _s`$ is varied between 1.5 and 2.1, representative of the 2 $`\sigma `$ uncertainty in $`\alpha _s`$, the relative specific intensity varies by $`\pm 0.1`$ dex about the standard value $`\alpha _s=1.8`$. We therefore adopt a conservative uncertainty of $`\pm 0.1`$ dex in $`I_0`$.
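The quadrature combination described above can be written out explicitly. The sketch below combines the four dex uncertainties we quote in this subsection (±0.19 luminosity function, ±0.01 cosmology, ±0.017 opacity, ±0.1 spectral index); these are the terms we identify in the text, so the reconstruction is approximate, but it recovers roughly the asymmetric error bars of eq. (12):

```python
import math

def combine_dex(I0, dex_terms):
    """Add log-space (dex) uncertainties in quadrature, then convert to
    asymmetric linear error bars about the central value I0."""
    total_dex = math.sqrt(sum(d * d for d in dex_terms))
    upper = I0 * (10 ** total_dex - 1.0)       # +error in linear units
    lower = I0 * (1.0 - 10 ** (-total_dex))    # -error in linear units
    return total_dex, upper, lower

if __name__ == "__main__":
    # Dex uncertainties quoted in Sec. 3.1 for the AGN-only estimate,
    # applied to the central value 1.3 (in units of 1e-23 cgs):
    total, up, lo = combine_dex(1.3, [0.19, 0.01, 0.017, 0.10])
    print(total, up, lo)  # ~0.22 dex, roughly +0.8 / -0.5 as in eq. (12)
```

The asymmetry of the quoted error bars thus follows automatically from treating the uncertainties as symmetric in the log.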
There remains an additional systematic uncertainty in the contribution of AGN to $`I_0`$. As discussed by FGS, the Pei luminosity function used here produces insufficient ionizing photons to account for the level of the ionizing background at $`z>3.5`$ implied by the proximity effect. Further, there are not enough ionizing photons to reionize the universe by $`z5`$ (Madau 1998), the epoch of the highest redshift quasars. While the latter point remains a problem for scenarios in which only AGN ionize the IGM, the former difficulty is ameliorated by the suggestion that, especially at higher redshifts, AGN are being undercounted due to obscuration by dust-laden absorbers (Heisler & Ostriker 1988; Fall & Pei 1993; Pei 1995). Using the dust-corrected luminosity function of Pei (1995), we find that $`I_0`$ at $`z=0`$ is increased by $`0.08`$ dex.
As shown in Fig. 1, the level of the mean intensity at $`z\approx 2`$ calculated by FGS is $`I_0\approx 7\times 10^{-22}`$. For this model, ionizing radiation due to sources at $`z>2`$ produces 20% of the mean intensity at $`z=0`$ (see Fig. 8a). As Fig. 1 shows, redshifted He II Ly$`\alpha `$ diffuse emission is still substantial at $`z=2`$. Because He II $`\lambda 304`$ emitted at $`z>2`$ is redshifted below the H I threshold by $`z=0`$, this emission contributes less than 10% at $`z=0`$. To give a specific example, suppose that the number of AGN at high redshift was severely underestimated, so that the metagalactic background due to AGN at $`z=2`$–3 was quintupled, while retaining the same spectral shape. Then, the value of $`I_0`$ at $`z=0`$ would be increased by only $`0.15`$ dex. Even for AGN, the strong attenuation of high-energy photons by He II absorption greatly limits any contribution to the present-day ionizing background from sources at $`z>3`$. A significantly larger population of AGN at $`z=2`$–3 will not augment $`I_0`$ by more than 40% at $`z=0`$. This systematic effect has not been included in the uncertainty in eq. (12).
### 3.2. The Possible Contribution from Hot Stars
An estimate of the ionizing radiation contributed by hot stars is complicated by its dependence on factors such as $`f_{esc}`$, for which good estimates of magnitude and uncertainty do not exist. As a result, we include these factors explicitly in our results. Our model for the possible contribution of stars to the present day specific intensity $`I_0`$ makes the following assumptions: (1) The production of ionizing photons by stars at the present time may be calibrated by the density of H$`\alpha `$ photons in the extragalactic background. (2) The star formation rate is proportional to that from the observations of Connolly et al. (1997) and assumes $`h=0.5`$ and $`\mathrm{\Omega }_0=0.2`$. (3) The IGM opacity is our “standard model.” (4) The average spectrum of the OB associations that provide the ionizing photons has spectral index $`\alpha _s=1.9`$ (1 – 4 Ryd) with no photons above 4 Ryd. Using a radiative transfer calculation that neglects the diffuse radiation contributed by intervening absorbers (see discussion below), we find
$$I_0=1.1_{-0.7}^{+1.4}\times 10^{-23}\left(\frac{f_{esc}}{0.05}\right)\left(\frac{L_{H\alpha }}{10^{39.1}}\right)\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{Hz}^{-1}\mathrm{sr}^{-1},$$
(13)
where we have scaled the H$`\alpha `$ luminosity density, $`L_{H\alpha }`$, to the Gallego et al. (1995) standard value, $`10^{39.1}`$ erg s<sup>-1</sup> Mpc<sup>-3</sup>, and adopted a probable LyC escape fraction, $`f_{\mathrm{esc}}=0.05`$.
The uncertainties for which we have not directly parameterized our ignorance are mainly multiplicative, as in the case for AGN. The uncertainty in the numerical coefficient quoted above again represents the addition in quadrature of the uncertainties in the log of the factors discussed below. We make the same conservative assumption that the individual uncertainties are independent. The uncertainty in the H$`\alpha `$ luminosity density suggested by Gallego et al. (1995) is $`\pm 0.2`$ dex, while the uncertainty in the preliminary KISS result is $`\pm 0.1`$ dex. We adopt an uncertainty of $`\pm 0.2`$ dex for the H$`\alpha `$ calibration, but it is possible that this uncertainty will be reduced substantially when full details of KISS are released. As long as $`f_{esc}`$ is small, so that recombinations within the galaxies provide a fair accounting of the number of ionizing photons produced, this is the uncertainty we associate with assumption (1). The conversion of H$`\alpha `$ photons to ionizing photons is nearly temperature independent, so uncertainties in the temperature of ionized regions in galaxies are negligible. The H$`\alpha `$ emissivity is parameterized directly in units of the luminosity density suggested by Gallego et al. (1995), but the uncertainty discussed above is included in the numerical coefficient.
The uncertainties in the stellar emissivity (assumption 2) are summarized in Fig. 5. The error bars on the points from the Connolly et al. data are derived differently from our evaluation of the AGN uncertainty. These authors corrected for survey incompleteness using a Schechter function with three different power laws to extrapolate to low luminosity. We show a rough $`1\sigma `$ range in the emissivity based upon this estimate of the uncertainties. Our emissivity depends upon a fit to the data points in Fig. 5, and has small variations with the assumed cosmology. This introduces a small uncertainty of 0.01 dex in the calculation of $`I_0`$.
The uncertainty inherent in our choice of opacity model, assumption (3) above, appears primarily in the degree of IGM attenuation of ionizing photons. If we focus only on the stellar contribution to the ionizing background, the number of He II ionizing photons produced is negligible. As a result, the contribution of diffuse ionizing radiation from the absorbers may be neglected. (Because H I recombination radiation is closely confined to the ionization edge, this diffuse radiation is quickly redshifted below $`\nu _0`$.) Using Models 1 and 2 for the opacity instead of our standard model, we calculate slightly higher levels of $`I_0`$, by 0.0068 and 0.0329 dex. A sample standard deviation of the models is $`\pm 0.017`$ dex and is probably an adequate assessment of the $`1\sigma `$ uncertainty in $`I_0`$ due to the opacity. If there is no opacity at all, $`I_0`$ increases by $`0.26`$ dex. If the column density distribution for the absorbers in our standard model has an unbroken power law, with $`\beta =1.3`$ rather than 1.5, the reduction in $`I_0`$ is $`0.21`$ dex.
Our assumption (4), that the stellar ionizing radiation has a spectrum $`\nu ^{-1.9\pm 0.2}`$ (Sutherland & Shull 1999), is a relatively small source of uncertainty. As $`\alpha _s`$ is varied between 1.7 and 2.1, the specific intensity changes by $`0.045`$ and $`0.043`$ dex, respectively. We therefore assign an uncertainty of $`\pm 0.045`$ dex.
As shown in Fig. 8b, the contribution from stars at $`z>2`$ to the present ionizing radiation background just above the Lyman edge is less than 3%. This assumes that the emissivity peaked at $`z\approx 2`$ and falls off at high redshift. If, as suggested by Pettini et al. (1998) and Steidel et al. (1999), there is little or no falloff in the star formation rate out to $`z\approx 4`$, the mean intensity at $`z=0`$ would increase by only $`10\%`$. This is a small effect because of redshifting and the fact that stars emit few photons more energetic than 4 rydbergs. Thus, stars at $`z>3`$ would make little contribution to $`I_0`$ at $`z=0`$. Also, because stellar radiation does not doubly ionize He, diffuse He II Ly$`\alpha `$ and two-photon emission from absorbers make little contribution to $`I_0`$ at $`z=0`$.
## 4. CONCLUSIONS
In this paper, we have endeavored to make accurate estimates of the low-redshift intensity of ionizing radiation, arising from QSOs, Seyfert galaxies, and starburst galaxies. In performing this calculation, we found that we require accurate values of AGN emissivity and IGM opacity out to substantial redshifts. In other words, this is a global problem.
Our new estimates of the ionizing emissivities of Seyfert galaxies and low-redshift QSOs were constructed by extrapolating ultraviolet fluxes from IUE to the Lyman limit. For starburst galaxies, we used recent H$`\alpha `$ surveys together with an educated guess for the escaping fraction of ionizing radiation. The IGM opacity was derived from HST surveys of the low-redshift Ly$`\alpha `$ absorbers and a new opacity model from Keck high-resolution spectra of high-redshift QSOs. By incorporating these ingredients into a cosmological radiative transfer code, we find that the ionizing intensity at $`z0`$ has approximately equal contributions from AGN and starburst galaxies:
$`I_0^{\mathrm{AGN}}`$ $`=`$ $`(1.3\times 10^{-23}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{Hz}^{-1}\mathrm{sr}^{-1})\left[{\displaystyle \frac{L_{\mathrm{min}}}{0.01L_{*}}}\right]^{-0.17}\left[{\displaystyle \frac{\alpha _s}{1.8}}\right]^{-0.97}`$ (14)
$`I_0^{\mathrm{Star}}`$ $`=`$ $`(1.1\times 10^{-23}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{Hz}^{-1}\mathrm{sr}^{-1})\left[{\displaystyle \frac{f_{\mathrm{esc}}}{0.05}}\right]\left[{\displaystyle \frac{L_{H\alpha }}{10^{39.1}}}\right]`$ (15)
Taking into account uncertainties in the various parameters of the model ($`L_{\mathrm{min}}`$, $`\alpha _s`$, QSO luminosity function, H$`\alpha `$-determined star-formation history) we estimate uncertainties in these coefficients of $`1.3_{-0.5}^{+0.8}`$ (for AGN) and $`1.1_{-0.7}^{+1.4}`$ (for starbursts). For a spectral index $`\alpha _s\approx 1.8`$ (from 1 – 4 Ryd), these values of $`I_0`$ each correspond to one-sided ionizing fluxes $`\mathrm{\Phi }_{\mathrm{ion}}\approx 3000`$ photons cm<sup>-2</sup> s<sup>-1</sup>. Allowing for statistical uncertainties, the total ionizing photon flux at low redshift probably lies in the range $`\mathrm{\Phi }_{\mathrm{ion}}=2000`$–10,000 photons cm<sup>-2</sup> s<sup>-1</sup>, which is consistent with a number of recent estimates and measurements (see Table 1).
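The conversion from specific intensity to one-sided photon flux (see the footnote to Table 1) is easily scripted. Below is a quick numerical check of the figures quoted above; the function name is ours:

```python
def phi_ion(I0_1e23, alpha_s=1.8):
    """One-sided ionizing photon flux (cm^-2 s^-1) for a specific intensity
    I0 in units of 1e-23 erg cm^-2 s^-1 Hz^-1 sr^-1 and spectral index alpha_s:
      Phi_ion = (2630 cm^-2 s^-1) * I_23 * (1.8 / alpha_s)
    (the relation given in the footnote to Table 1)."""
    return 2630.0 * I0_1e23 * (1.8 / alpha_s)

if __name__ == "__main__":
    agn = phi_ion(1.3)    # ~3400, i.e. "about 3000" as quoted
    stars = phi_ion(1.1)  # ~2900 for the same alpha_s = 1.8
    print(agn, stars, agn + stars)  # total lands inside the 2000-10000 range
```

The combined AGN-plus-starburst total of roughly 6000 photons cm⁻² s⁻¹ sits comfortably within the quoted 2000–10,000 interval.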
The redshift evolution of the hydrogen photoionization rate, $`\mathrm{\Gamma }_{\mathrm{HI}}(z)`$ is shown in Figure 9 for three cases: AGN only, starbursts only, and combined (AGN plus starbursts). These rates were computed using our standard emissivity models for AGN and starburst galaxies, except that the starburst emissivities were held constant at $`z>1.7`$ to simulate recent measurements at high $`z`$. The starburst emissivities also assume $`f_{\mathrm{esc}}=0.05`$. Although it is beyond the scope of this paper, the potentially dominant role of starburst galaxies in photoionization at $`z>4`$ is apparent.
Because the values in eqs. (14) and (15) span an unacceptably large range for such an important quantity, it is worth discussing what might be done to improve the situation, both theoretically and observationally. Advances need to be made in the characterization of both ionizing emissivities and IGM opacities. The greatest uncertainty in the opacity model occurs in the range $`\mathrm{log}N_{HI}=1517`$, where line saturation and small-number statistics make the Ly$`\alpha `$ surveys inaccurate. The imminent launch of the FUSE satellite will open up the far-UV band (920–1180 Å) that contains Ly$`\beta `$ and higher Lyman series lines. A survey of Ly$`\beta `$ lines should allow better determinations of line saturation (Doppler $`b`$-values). The FUSE spectra of AGN will also provide more accurate values of the flux near the Lyman limit and reduce the uncertainties introduced by extrapolating from spectral regions longward of 1200 Å, as we have done with IUE data.
To address the general problem of small-number statistics in the Ly$`\alpha `$ absorbers, the HST Cosmic Origins Spectrograph (Morse et al. 1998), scheduled for installation on HST in 2003, should be used for a QSO absorption-line key project. A full discussion of the advantages of this project is given in Appendix 1 of the UV-Optical Working Group White Paper (Shull et al. 1999). Because COS will have about 20 times the far-UV throughput of the previous HST spectrographs, GHRS and STIS, and offers velocity resolution 10 times better than that of FOS, it will provide much better statistics on the distribution of H I absorbers in both space and column density. The most useful COS surveys will be of low-redshift Ly$`\alpha `$ lines, particularly the rare “partial Lyman-limit systems” ($`16.0<\mathrm{log}N_{HI}<17.5`$). Accurately characterizing the distribution in column density and the evolution in redshift of these absorbers would remove a large part of the uncertainty in the IGM opacity model.
The emissivity models for both AGN and starburst galaxies also need improvement. Although we have used current surveys of Seyfert galaxies and QSOs, we may have missed certain classes of sources that are strong emitters in the EUV. We believe that BL Lac objects contribute less than 10% as much as Seyferts, based on estimates of their luminosity function and space density. On the other hand, Edelson et al. (1999) suggest that narrow-line Seyfert 1 galaxies may account for $`50`$% of the EUV volume emissivity in the ROSAT Wide-Field Camera sample. It is not clear whether these Seyferts are captured in the Cheng et al. (1985) luminosity function, but their ionizing fluxes might be higher than those derived from an extrapolation of their UV fluxes. For low-redshift starbursts, two recent surveys (Gallego et al. 1995; Gronwall 1998) derive comparable values of H$`\alpha `$ luminosity density, although even the latter (KISS) H$`\alpha `$ survey may still be incomplete at the faint end. At higher redshifts, the QSO luminosity function is uncertain, owing to the effects of dust (Fall & Pei 1993; Pei 1995) and faint-end survey incompleteness. QSO surveys by GALEX (Galaxy Evolution Explorer) in the ultraviolet and by the Sloan Digital Sky Survey in the optical may clarify the AGN luminosity functions. However, it is worth repeating that, owing to redshifting and IGM opacity, AGN and starbursts at $`z>3`$ contribute less than 10% to the value of $`I_0`$ at $`z=0`$.
For an accurate emissivity, what is needed most are surveys at $`z<1`$ of AGN and starburst galaxies. Even after we ascertain the space density and H$`\alpha `$ luminosity functions of star-forming galaxies, we still need an accurate measurement of $`f_{\mathrm{esc}}`$, the fraction of stellar ionizing photons that escape the galactic H I layers into the halo and IGM. Here, we have relied on recent theoretical work (Dove et al. 1999) and H$`\alpha `$ observations of gas in the Magellanic Stream (Bland-Hawthorn & Maloney 1999) that suggest $`f_{\mathrm{esc}}\approx 0.03`$–0.06. However, access with FUSE to the far-UV spectrum at 920–950 Å allows a direct measurement of the escaping EUV continuum from starbursts at redshifts $`z\gtrsim 0.05`$. This work will extend the studies with the Hopkins Ultraviolet Telescope (HUT) of leaky starbursts (Leitherer et al. 1995; Hurwitz et al. 1997).
Finally, we eagerly await new measurements of the ionizing photon flux, $`\mathrm{\Phi }_{\mathrm{ion}}`$, via several direct and indirect techniques. These methods include improved Fabry-Perot measurements of H$`\alpha `$ from Galactic high-velocity clouds (Tufte et al. 1998; Bland-Hawthorn & Maloney 1999), and UV absorption-line measurements of ionization ratios such as Fe I/Fe II and Mg I/Mg II that constrain the far-UV radiation in the 0.6–1.0 Ryd band (Stocke et al. 1991; Tumlinson et al. 1999). A new absorption-line key project with HST/COS could make precise estimates of $`I_0`$ from the proximity effect. As a result of the new surveys and ventures mentioned above, it should be possible, within five years, to determine the local ionizing background to $`<30`$%. With good fortune, these measurements and the theoretical models will agree at a level better than that described here in Table 1.
This work was supported by theoretical grants from NASA (NAG5-7262) and NSF (AST96-17073). The IUE spectral analysis was supported by a grant from NASA’s Astrophysical Data Program (NAG5-3006).
Table 1
Measurements and Limits of low-$`z`$ Ionizing Background<sup>1</sup>
| Technique | $`\mathrm{\Phi }_{\mathrm{ion}}`$ (cm<sup>-2</sup> s<sup>-1</sup>) | Reference |
| --- | --- | --- |
| H$`\alpha `$ Fabry-Perot | $`<3\times 10^4`$ | Vogel et al. (1995) |
| H$`\alpha `$ Filter Images | $`<1.1\times 10^4`$ | Donahue et al. (1995) |
| H$`\alpha `$ Filter Images | $`<8.4\times 10^4`$ | Stocke et al. (1991) |
| H$`\alpha `$ Filter Images | $`<9\times 10^4`$ | Kutyrev & Reynolds (1989) |
| H I Disk Edges | $`(0.55)\times 10^4`$ | Maloney (1993) |
| H I Disk Edges | $`(15)\times 10^4`$ | Dove & Shull (1994a) |
| Prox. Eff. $`z=0.5`$ | $`(0.051.0)\times 10^4`$ | Kulkarni & Fall (1993) |
<sup>1</sup> $`\mathrm{\Phi }_{\mathrm{ion}}`$ is the one-sided, normally incident photon flux in the metagalactic background, related to the specific intensity at the Lyman limit by $`\mathrm{\Phi }_{\mathrm{ion}}=(2630\mathrm{cm}^{-2}\mathrm{s}^{-1})I_{23}(1.8/\alpha _s)`$ – see eq. (1) in text.
Table 2
Low-$`z`$ Opacity Models
| Model | $`N_{\mathrm{min}}`$(cm<sup>-2</sup>) | $`N_{\mathrm{max}}`$(cm<sup>-2</sup>) | $`A_i`$ | $`\beta _i`$ | $`\gamma _i`$ | $`c_i`$ | $`s_i`$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Standard | | $`10^{14}`$ | 0.105 | 1.63 | 0.15 | 0.010 | -2.81 |
| | $`10^{14}`$ | $`10^{17}`$ | 0.501 | 1.50 | 0.15 | 0.553 | -2.73 |
| | $`10^{17}`$ | $`10^{22}`$ | 0.159 | 1.50 | 1.5 | 0.263 | -1.04 |
| Model 1 | | $`10^{15.5}`$ | 0.105 | 1.63 | 0.15 | 0.048 | -2.81 |
| | $`10^{15.5}`$ | $`10^{22}`$ | 0.159 | 1.50 | 1.5 | 0.411 | -1.38 |
| Model 2 | | $`10^{15.5}`$ | 0.105 | 1.63 | 0.15 | 0.048 | -2.81 |
| | $`10^{17}`$ | $`10^{22}`$ | 0.159 | 1.50 | 1.5 | 0.263 | -1.04 |
# Influence of nanoscale regions on Raman spectra of complex perovskites
## I Introduction
Complex perovskites form a vast and promising group of compounds for microelectronics, yet many of the features they show are still a battlefield for scientists. Among the compounds with the well-known general formula $`AB_x^{\prime }B_{1-x}^{\prime \prime }O_3`$ one can find a spectrum of cases, from relaxor ferroelectrics (or ferroelectrics with a diffuse phase transition), such as $`PbMg_{1/3}Nb_{2/3}O_3`$ (PMN) and $`PbSc_{1/2}Ta_{1/2}O_3`$ (PST), to antiferroelectrics ($`PbMg_{1/2}W_{1/2}O_3`$ (PMW), for example) . Historically, PMN and PST are the most widely studied, due to their relaxor behaviour. Numerous structural studies of such compounds were performed by different techniques (x-ray scattering (see, for example, ), HRTEM, dark-field images ) together with neutron scattering , synchrotron studies , and NMR studies , but still there is no unanimous point of view on ”what makes a relaxor a relaxor”. In some sense, the results of Raman studies make the problem even more confusing.
Every Raman study of a dielectric crystal leads to a question: to what space group does this compound belong? To make a line assignment one needs at least to determine a ”basis” for that procedure, i.e. to deconvolute the spectrum (and to prove whether this spectrum as a whole, or a given part of it, is of first order). In some cases this part of the task is not challenging, but not for relaxors. As a matter of fact, the unit cell of complex perovskites generally belongs to the $`Pm3m`$ space group, for which no first-order Raman scattering is allowed by group theory analysis . But it is well known that these compounds do exhibit first-order Raman spectra . Several attempts to make a line assignment were made, and the most common point of view is that the spectra are consistent with the $`Fm3m`$ space group . Such studies were performed both on raw and reduced Raman spectra (although, due to the presence of a very low-frequency line, raw spectra are not so informative), but these studies suffered from one and the same problem, namely, the problem of extra lines. Different ways to overcome this difficulty were suggested, and such lines were treated either as arising from distortions or as second-order regions of the spectra. Another surprising detail for relaxors is that the lines are unusually broad (a FWHM of 40 cm<sup>-1</sup> is a normal value for relaxors) and possess a complex substructure, which seems to be nearly a characteristic feature of the Raman spectra of the compounds under study. Also, some of the Raman lines are obviously asymmetric.
Besides, an extensive study of the temperature behaviour of the Raman spectra of PMN and PST crystals showed that the spectra are temperature dependent, and anomalies correlated with other phenomena (such as the specific temperature behaviour of the acoustic phonon damping revealed by Brillouin scattering studies , and the sensitivity of the PMN spectra to anomalies in the complex dielectric response) were observed .
The present work is aimed at suggesting a new approach to the Raman spectra of $`AB_x^{\prime }B_{1-x}^{\prime \prime }O_3`$ complex perovskites.
## II Experimental
Diagonal and parallel Raman spectra of PMN (X(ZZ)Y and X(ZX)Y) and PST (Z(XX)Y and Z(XY)Y) crystals were measured, with X, Y and Z being along the fourfold cubic axes. PST crystals were obtained with two levels of ordering. Investigations were carried out over a wide temperature range for both compounds. The details of sample preparation and the experimental setup can be found elsewhere .
## III Raman spectra of PMN and PST crystals
Examples of the Stokes parts of the polarized Raman spectra of PMN and disordered PST are presented in Fig. 1. In this paper only reduced Raman spectra are discussed, so the word ”spectrum” stands for ”reduced spectrum” unless specially stated. The reason is that, when dealing with complex perovskites, one inevitably has to deal with the low-frequency region, where the Bose-Einstein population factor can not be neglected. Another peculiarity to be mentioned beforehand is that, being well acquainted with the results of the group theory analysis and mode assignment performed for these compounds, the author deliberately tries to avoid using the corresponding mode identifications (such as $`F_{2g}`$, $`E_g`$ and $`A_{1g}`$) when speaking of some part of a spectrum. This may look rather clumsy throughout the text but is believed to be necessary for a presentation of the approach.
One can easily observe some common features of these spectra. (Note that at 495 K PMN is in the ferrophase and at 373 K disordered PST is in the paraphase.) The striking detail is that the corresponding regions of the spectra have practically the same frequency shifts. For example, apart from such obvious lines as those at approximately 50 cm<sup>-1</sup> and 800 cm<sup>-1</sup>, the bands in the regions of 100 - 300 cm<sup>-1</sup> and 500 - 600 cm<sup>-1</sup> are more or less alike. The lines at approximately 380 and 550 cm<sup>-1</sup> in the Z(YZ)X polarization are clearly resolved only for PST. According to dielectric response data, disordered PST is a relaxor, as is PMN. The temperature evolution of at least the low-frequency region of PMN was already reported , so this time let us consider the temperature evolution of the Z(YY)X spectra of PST (Fig. 2). Quite obviously the spectrum at 292 K differs from that at 741 K, and a specific temperature evolution is also seen. As the population factor is already taken into account, one can suppose that the whole spectra are first-order Raman spectra, since no band exhibits drastically different temperature behaviour.
So, there are first order Raman spectra exhibiting unusually broad and asymmetric lines and specific temperature evolution obtained from compounds for which no first order Raman scattering is allowed.
## IV Interpretation of Raman spectra
The considerations stated above already lead to the conclusion that there is a factor which violates the Raman selection rules, thus allowing the existence of forbidden spectra. It means that for some reason the observed scattering originates not only from the center of the Brillouin zone but from other points of the Brillouin zone as well. As additional evidence for the violation of the selection rules one can recall that the IR spectra of PMN (as well as neutron diffraction studies ) revealed practically the same set of modes as the Raman spectra. Another fact worth mentioning is that structural studies performed for PST showed that there are significant deviations from $`Fm3m`$ symmetry . And nevertheless these compounds exhibit practically identical spectra. Many studies revealed the existence of nanoregions in PMN and PST . Although the nature of such regions is still under discussion and quite contradictory interpretations are suggested , nowadays the presence of nanoscale structure in relaxors cannot be denied. Recently, on the basis of the similarity of the Raman spectra of $`BaMg_{1/3}Ta_{2/3}O_3`$ (BMT) crystals and ceramics, it was suggested that the Raman scattering in BMT is determined by the short-range order in the nanoscale microstructures .
The smallness of nanoregions can be, in principle, the factor which violates the Raman selection rules. Indeed, in an ideal infinite crystal only phonons at the center of the Brillouin zone (with $`\stackrel{}{q}=0`$) can be observed, due to crystal momentum conservation. In an imperfect crystal (and there are no reasons to treat a complex perovskite as an ideal crystal) phonons can be confined in space. Thus, according to the uncertainty relation, there appears an uncertainty in the phonon momentum and phonons with $`\stackrel{}{q}>0`$ can contribute to the Raman signal. Provided that the region of scattering is very small, scattering from practically the entire Brillouin zone is allowed to contribute to the signal. To describe the frequency shift and broadening of lines in Raman spectra in the case of scattering from sufficiently small regions, a model, nowadays known as the spatial correlation or phonon-confinement model, was developed . This model is outlined briefly below.
The wave function of a phonon with a wave vector $`\stackrel{}{q}_0`$ in an infinite perfect crystal is
$$\varphi (\stackrel{}{q}_0,\stackrel{}{r})=u(\stackrel{}{q}_0,\stackrel{}{r})\mathrm{exp}\left(i\stackrel{}{q}_0\stackrel{}{r}\right),$$
(1)
where $`u(\stackrel{}{q}_0,\stackrel{}{r})`$ has the periodicity of the lattice. Let us suppose that the phonon is confined to a sphere of diameter $`L`$. Such confinement can be accounted for by writing another wave function $`\psi `$ instead of $`\varphi `$
$$\psi (\stackrel{}{q}_0,\stackrel{}{r})=A\mathrm{exp}\left\{-\frac{r^2}{2}/\left(\frac{L}{2}\right)^2\right\}\varphi (\stackrel{}{q}_0,\stackrel{}{r})=\psi ^{}(\stackrel{}{q}_0,\stackrel{}{r})u(\stackrel{}{q}_0,\stackrel{}{r}),$$
(2)
where
$$\left|\psi \right|^2=A^2\mathrm{exp}\left\{-r^2/\left(\frac{L}{2}\right)^2\right\}.$$
(3)
That means that $`\psi `$ is confined to $`\left|r\right|\lesssim L`$ in the form of a Gaussian distribution of width $`\sqrt{\mathrm{ln}2}\,L`$. And $`\psi ^{}`$ might be expanded in a Fourier series:
$$\psi ^{}(\stackrel{}{q}_0,\stackrel{}{r})=\int d^3q\,C(\stackrel{}{q}_0,\stackrel{}{q})\mathrm{exp}\left(i\stackrel{}{q}\stackrel{}{r}\right),$$
(4)
where the Fourier coefficients are given by
$$C(\stackrel{}{q}_0,\stackrel{}{q})=\frac{1}{(2\pi )^3}\int d^3r\,\psi ^{}(\stackrel{}{q}_0,\stackrel{}{r})\mathrm{exp}\left(-i\stackrel{}{q}\stackrel{}{r}\right).$$
(5)
Substitution of $`\psi ^{}`$ from (2) into (5) yields
$$C(\stackrel{}{q}_0,\stackrel{}{q})=\frac{AL}{\left(2\pi \right)^{3/2}}\mathrm{exp}\left\{-\frac{1}{2}\left(\frac{L}{2}\right)^2\left(\stackrel{}{q}-\stackrel{}{q}_0\right)^2\right\}.$$
(6)
Thus $`\psi ^{}`$ and $`\psi `$ cease to be eigenfunctions of a phonon wave vector $`\stackrel{}{q}_0`$, and become a superposition of eigenfunctions with $`\stackrel{}{q}`$ vectors in an interval $`\left|\stackrel{}{q}-\stackrel{}{q}_0\right|\lesssim \frac{1}{2L}`$ centered at $`\stackrel{}{q}_0`$. For the chosen form of confinement (3) the eigenfunctions are weighted through the coefficients $`C(\stackrel{}{q}_0,\stackrel{}{q})`$, according to a Gaussian distribution.
Thus, it is supposed that the phonon transition matrix elements $`\left|\left\langle \stackrel{}{q}_0\left|\widehat{o}\right|\stackrel{}{q}\right\rangle \right|^2`$ acquire non-vanishing values also for $`\stackrel{}{q}\ne \stackrel{}{q}_0`$, according to:
$$\left|\left\langle \stackrel{}{q}_0\left|\widehat{o}\right|\stackrel{}{q}\right\rangle \right|^2=\left|\left\langle \stackrel{}{q}_0\left|\widehat{o}\right|\stackrel{}{q}_0\right\rangle \right|^2C(\stackrel{}{q}_0,\stackrel{}{q}),$$
(7)
where $`\widehat{o}`$ is the phonon-phonon interaction operator. To simplify the equation, when writing (7) it was assumed that $`u(\stackrel{}{q},\stackrel{}{r})=u(\stackrel{}{q}_0,\stackrel{}{r})`$. Thus the phonon confinement leads to a relaxation of the $`\mathrm{\Delta }q=0`$ selection rule.
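As a quick numerical sanity check (illustrative, not part of the original derivation), one can verify that the three-dimensional Fourier transform of the Gaussian envelope of (2) is indeed a Gaussian in $`\stackrel{}{q}`$, as in (6). The sketch below, with an arbitrary illustrative value of $`L`$, compares a numerical radial integral with the closed form:

```python
import numpy as np

# Numerical check: the 3D Fourier transform of the radial Gaussian envelope
# exp(-(r^2/2)/(L/2)^2) of Eq. (2) is again a Gaussian in q, as in Eq. (6).
# For a radial function f(r): C(q) = (4*pi/q) * Integral_0^inf r*sin(q*r)*f(r) dr.

L = 4.0                      # confinement diameter, arbitrary units
alpha = 2.0 / L**2           # exponent: (r^2/2)/(L/2)^2 = alpha*r^2

def C_numeric(q, r_max=50.0, n=400000):
    # brute-force radial integral (Riemann sum)
    r = np.linspace(1e-6, r_max, n)
    dr = r[1] - r[0]
    return (4.0 * np.pi / q) * np.sum(r * np.sin(q * r) * np.exp(-alpha * r**2)) * dr

def C_analytic(q):
    # (pi/alpha)^(3/2) * exp(-q^2/(4*alpha)); note q^2/(4*alpha) = (1/2)(L/2)^2 q^2,
    # i.e. exactly the Gaussian exponent of Eq. (6) at q0 = 0
    return (np.pi / alpha) ** 1.5 * np.exp(-q**2 / (4.0 * alpha))
```

The narrower the confinement sphere (smaller $`L`$, larger `alpha`), the wider the resulting Gaussian in $`q`$, which is the uncertainty-relation argument made above in numerical form.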
Normally, when speaking of Raman spectra, we actually deal with the excited optical phonon in the center of the Brillouin zone $`\left(\stackrel{}{q}=0\right)`$. In the case of allowed $`\stackrel{}{q}\ne 0`$ transitions, contributions with $`\omega `$ determined by the dispersion relations $`\omega \left(\stackrel{}{q}\right)`$ will add to the Raman spectrum. Obviously, the additional transitions with $`\stackrel{}{q}\ne 0`$ lead to a broadening of a Raman line and its shift towards higher or lower frequencies. The direction of such a frequency shift depends on the dispersion relation for each phonon branch and will be examined in more detail further on.
It is possible to write an expression for a Raman line on the basis of the above mentioned considerations. The wave function of the confined phonon can be expressed via the wave function of a phonon in an ideal infinite crystal as (here the notations of are used):
$$\psi (\stackrel{}{q}_0,\stackrel{}{r})=W(\stackrel{}{r},L)\varphi (\stackrel{}{q}_0,\stackrel{}{r})=\psi ^{}(\stackrel{}{q}_0,\stackrel{}{r})u(\stackrel{}{q}_0,\stackrel{}{r}),$$
(8)
where $`W(\stackrel{}{r},L)`$ describes confinement. Then the Fourier coefficients are
$$C(\stackrel{}{q}_0,\stackrel{}{q})=\frac{1}{(2\pi )^3}\int d^3r\,\psi ^{}(\stackrel{}{q}_0,\stackrel{}{r})\mathrm{exp}\left(-i\stackrel{}{q}\stackrel{}{r}\right)$$
(9)
$`={\displaystyle \frac{1}{(2\pi )^3}}{\displaystyle \int d^3r\,W(\stackrel{}{r},L)\mathrm{exp}\left(-i\left(\stackrel{}{q}-\stackrel{}{q}_0\right)\stackrel{}{r}\right)}.`$
Let us stress again that the confined phonon wave function is a superposition of plane waves with wave vectors $`\stackrel{}{q}`$ centered at $`\stackrel{}{q}_0`$. The Raman lineshape is then constructed as a superposition of Lorentzian lineshapes (with the linewidth of an imagined ideal crystalline medium for the compound under study) centered at $`\omega \left(\stackrel{}{q}\right)`$ and weighted by the wave-vector uncertainty caused by confinement:
$$I\left(\omega \right)\propto \frac{\int _0^1\frac{d^3\stackrel{}{q}\left|C(0,\stackrel{}{q})\right|^2}{\left[\omega -\omega \left(\stackrel{}{q}\right)\right]^2+\left(\frac{\mathrm{\Gamma }_0}{2}\right)^2}}{\int _0^1d^3\stackrel{}{q}\left|C(0,\stackrel{}{q})\right|^2},$$
(10)
where $`\omega \left(\stackrel{}{q}\right)`$ is the phonon-dispersion curve, and $`\mathrm{\Gamma }_0`$ is the imagined linewidth of an ideal crystalline medium for the compound under study. It is important to notice that, unlike for crystals like $`KTaO_3`$ and $`K_{1-x}Li_xTaO_3`$, in the case of complex perovskites we have no reference compound that can give us the exact value of $`\mathrm{\Gamma }_0`$. $`\stackrel{}{q}=0`$ corresponds to the scattering from the center of the Brillouin zone and the integration is carried over the entire Brillouin zone. With $`L\to \mathrm{\infty }`$, $`C(0,\stackrel{}{q})=\delta \left(\stackrel{}{q}\right)`$ and $`I\left(\omega \right)`$ is a Lorentzian centered at $`\omega \left(0\right)`$ with a linewidth of $`\mathrm{\Gamma }_0`$. Different functions can be chosen to describe the localization but, according to , for the case of sampling a signal from a number of regions of localization a Gaussian (provided that the amplitude is $`0`$ on the border of a region) is the most successful choice. Thus we have
$$W(\stackrel{}{r},L)=\mathrm{exp}\left[-\frac{8\pi ^2r^2}{L^2}\right],$$
(11)
$$\left|C(0,\stackrel{}{q})\right|^2\propto \mathrm{exp}\left[-\frac{q^2L^2}{4}\right],$$
(12)
where $`q`$ is given in units of $`2\pi /a`$, where $`a`$ is the lattice constant (approximately 4 Å for complex perovskites), and $`L`$ is given in units of $`a`$.
Note that further on, instead of the intensity $`I\left(\omega \right)`$ in arbitrary units, the Raman susceptibility $`\chi \left(\omega \right)`$ will be considered, obtained as
$$\chi \left(\omega \right)=\frac{I\left(\omega \right)}{F(\omega ,T)},$$
(13)
where $`F(\omega ,T)=\left[n(\omega ,T)+1\right]`$ for the Stokes part of the spectra and $`F(\omega ,T)=n(\omega ,T)`$ for the anti-Stokes part, respectively, and
$$n(\omega ,T)=\left[\mathrm{exp}\left(\mathrm{}\omega /kT\right)1\right]^1,$$
(14)
where $`\omega `$ is the Raman shift and $`T`$ is the temperature.
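For concreteness, the reduction (13)-(14) can be sketched as follows (a minimal illustration, not the actual data-processing code; the Raman shift is assumed to be given in cm<sup>-1</sup>):

```python
import numpy as np

# Sketch of the reduction (13)-(14): divide the raw Stokes intensity by
# [n(omega,T)+1].  With the Raman shift in cm^-1, hbar*omega/(k*T) equals
# c2*omega/T, where c2 = h*c/k ~ 1.4388 cm*K (second radiation constant).

C2 = 1.4388  # h*c/k in cm*K

def bose_einstein(omega_cm, T):
    """Occupation number n(omega, T) of Eq. (14)."""
    return 1.0 / np.expm1(C2 * omega_cm / T)   # expm1 is accurate for small arguments

def reduce_stokes(intensity, omega_cm, T):
    """Raman susceptibility chi = I/(n+1) for the Stokes part, Eq. (13)."""
    return intensity / (bose_einstein(omega_cm, T) + 1.0)
```

At 300 K the factor $`n+1`$ is about 4.7 at a 50 cm<sup>-1</sup> shift but essentially 1 at 800 cm<sup>-1</sup>, which is why the reduction matters precisely in the low frequency region emphasized above.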
So, the Raman lineshape for the case of a phonon confined to a sphere of diameter $`L`$, with a Gaussian confinement, is
$$\chi \left(\omega \right)\propto \frac{\int _0^1\frac{dq\,\mathrm{exp}\left(-q^2L^2/4\right)4\pi q^2}{\left[\omega -\omega \left(q\right)\right]^2+\left(\mathrm{\Gamma }_0/2\right)^2}}{\int _0^1dq\,\mathrm{exp}\left(-q^2L^2/4\right)4\pi q^2},$$
(15)
where $`\omega \left(q\right)`$ is an approximate one-dimensional phonon-dispersion curve. Unfortunately, there are no reported experimental data on phonon-dispersion curves in complex perovskites, but it can be easily shown that for such calculations the quadratic term of the polynomial approximation plays the key role and the other terms can be neglected. So, the dispersion curve for (15) can be written as
$$\omega \left(q\right)=\omega _0\pm Qq^2,$$
(16)
where $`Q`$ is the quadratic term and the sign before $`Q`$ in fact determines whether the line suffers a shift to lower frequencies (minus) or to higher frequencies (plus). Fig. 3 illustrates the alteration of the frequency position and shape of a Raman line for different $`L`$ and $`Q`$.
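A minimal numerical sketch of (15)-(16) (with illustrative values of $`\omega _0`$, $`Q`$ and $`\mathrm{\Gamma }_0`$, not fitted parameters of PMN or PST) reproduces the qualitative behaviour of Fig. 3: the stronger the confinement (the smaller $`L`$), the larger the shift and broadening of the line.

```python
import numpy as np

# Numerical sketch of Eq. (15) with the dispersion (16), minus sign:
# omega(q) = omega0 - Q*q^2.  All parameter values are illustrative.

def lineshape(omega, L, omega0=200.0, Q=100.0, gamma0=10.0, nq=2000):
    q = np.linspace(1e-4, 1.0, nq)                 # q in units of 2*pi/a
    dq = q[1] - q[0]
    w = np.exp(-q**2 * L**2 / 4.0) * 4.0 * np.pi * q**2   # |C(0,q)|^2 * 4*pi*q^2
    disp = omega0 - Q * q**2
    num = np.sum(w / ((omega[:, None] - disp) ** 2 + (gamma0 / 2.0) ** 2), axis=1) * dq
    return num / (np.sum(w) * dq)

omega = np.linspace(100.0, 250.0, 1501)
# peak position of the simulated line for strong (L=2) and weak (L=12) confinement
peak = {L: omega[np.argmax(lineshape(omega, L))] for L in (2.0, 12.0)}
```

With the minus sign in (16), both peaks sit below $`\omega _0`$, and the $`L=2`$ peak is shifted down much further than the $`L=12`$ one.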
## V Results and discussion
The initial aim of the present study was to describe the experimental Raman spectra of PMN and PST with a fixed set of modes on the basis of the spatial-correlation model. An example of such a calculation is presented in Fig. 4. In this contribution only the region of 0 - 400 cm<sup>-1</sup> is considered. The results of the interpretation of the rest of the frequency region (400 - 1000 cm<sup>-1</sup>) will be reported later, but it is possible to state that these results do not contradict the suggested interpretation. Besides, such a division of the spectral range does not lead to mistakes in the calculations. Similar calculations were performed for all experimentally obtained spectra. The first important result is that the size of the nanoregions is a kind of "inborn" characteristic of the sample (or, generally speaking, of the type of compound) and is not temperature dependent. Thus, for PMN and disordered PST $`L=4`$, and for ordered PST $`L=5`$. To check the validity of such an approximation against the results obtained in , for one of the high-temperature spectra of PST $`L`$ was taken equal to 12 (that value influences the lineshapes not so dramatically) and both Z(YY)X and Z(YZ)X lines were calculated to make the simulated spectra effectively unpolarized. The results appeared to be in good agreement with the spectra experimentally obtained by the authors of for non-polarized spectra of crystalline BMT (Fig. 5).
One can readily notice that in the frequency range of 100 - 200 cm<sup>-1</sup> the simulated Raman lines are different from those in the frequency region of 150 - 250 cm<sup>-1</sup> in the experimentally obtained spectrum, although the number of lines is the same. It appeared that for disordered PST there are modes in the region of 100 - 200 cm<sup>-1</sup> which demonstrate an alteration of parameters (especially of the line frequency position) with temperature growth. One of these modes seems to be a "softening" mode, because its position changes rapidly from 260 cm<sup>-1</sup> at 550 K to 140 cm<sup>-1</sup> at 600 K, while its intensity significantly decreases in the range of 300 - 500 K. For the ordered PST one mode with similar behaviour was also found. It changes its position from 160 cm<sup>-1</sup> at 300 K down to 90 cm<sup>-1</sup> at 600 K.
The analysis of the obtained parameters of the phonon modes showed some anomalies in their temperature behavior. The common feature of these anomalies is that they all happen in the region of approximately 350 - 600 K, and the main temperature points appear to be in the vicinity of 400 K and 600 K. The point 400 K is in good agreement with the estimation of the Curie-Weiss temperature performed on the basis of an experimental investigation of the dielectric permittivity of PST crystals . As for 600 K, anomalies of the refractive index starting at approximately 600 K were reported by the authors of .
More detailed analysis of the temperature behaviour of mode parameters will be published later.
## Acknowledgement
The work was supported by INTAS Project 96/167, INTAS Fellowship grant for Young Scientists No YSF 98-75. The author is extremely thankful to the Royal Institute of Technology for technical support and to Dr. E.Obraztsova and Dr. A.Tagantsev for helpful and inspiring discussions.
## Figure Captions
# SEARCH FOR ANTIMATTER IN SPACE WITH THE ALPHA MAGNETIC SPECTROMETER
## 1 Introduction
The disappearance of cosmological antimatter and the pervasive presence of dark matter are two of the greatest puzzles in the current understanding of our universe.
The Big Bang model assumes that, at its very beginning, half of the universe was made out of antimatter. The validity of this model is based on three key experimental observations: the recession of galaxies (Hubble expansion), the highly isotropic cosmic microwave background and the relative abundances of light isotopes. However, a fourth basic observation, the presence of cosmological antimatter somewhere in the universe, is missing. Indeed, measurements of the intensity of the gamma ray flux in the MeV region exclude the presence of a significant amount of antimatter up to the scale of the local supercluster of galaxies (tens of Megaparsecs). Antimatter should have been destroyed immediately after the Big Bang due to a mechanism creating a matter-antimatter asymmetry through a large violation of CP and the baryon number . Alternatively, matter and antimatter were separated into different regions of space, at scales larger than superclusters . Other possibilities have also been recently suggested . All efforts to reconcile the absence of antimatter with cosmological models that do not require new physics have failed .
We are currently unable to explain the fate of half of the baryonic matter present at the beginning of our universe.
Rotational velocities in spiral galaxies and dynamical effects in galactic clusters provide us with convincing evidence that either Newton's laws break down at the scale of galaxies or, more likely, most of our universe consists of non-luminous (dark) matter . There are several dark matter candidates (for a recent review see ). They are commonly classified as "hot" and "cold" dark matter, depending on their relativistic properties at the time of decoupling from normal matter in the early universe. As an example, light neutrinos are obvious candidates for "hot" dark matter, while Weakly Interacting Massive Particles (WIMPs) like the lightest SUSY particle (LSP) are often considered plausible "cold" dark matter candidates . Even the recent results suggesting a positive cosmological constant, reducing the amount of matter in the universe, confirm the dominance of dark matter over baryonic matter.
We are then unable to explain the origin of most of the mass of our universe.
To address these two fundamental questions in astroparticle physics, a state of the art particle detector, the Alpha Magnetic Spectrometer (AMS), was approved in 1995 by NASA to operate on the International Space Station (ISS).
AMS has successfully flown on the precursor flight (STS91, Discovery, June 2nd 1998, Figure 1), and it is approved for a three-year-long exposure on the International Space Station (ISS) (Figure 2), after its installation during Utilization Flight No. 4, now scheduled for 2004. AMS has been proposed and built by an international collaboration coordinated by DoE, involving China, Finland, France, Germany, Italy, Portugal, Romania, Russia, Spain, Switzerland, Taiwan and the US.
In this conference we report on the operation of AMS during the precursor flight and we give preliminary results on the search for nuclear antimatter.
## 2 AMS design principles and operation during the Shuttle flight
The search for antiparticles requires the capability to identify, with the highest degree of confidence, the type of particle traversing the experiment by measuring its mass and the absolute value and sign of its electric charge. This can be achieved through repeated measurements of the particle momentum (magnetic spectrometer), velocity (time of flight, Cerenkov detectors) and energy deposition (ionization detectors).
The AMS configuration on the precursor flight is shown in Figure 3. It consists of a large acceptance magnetic spectrometer ($`0.6m^2sr`$) based on a permanent Nd-Fe-B magnet , surrounding a six layer high precision Silicon Tracker and sandwiched between the four planes of the Time of Flight scintillator system ($`ToF`$).
A scintillator Anticounter system, located on the magnet inner wall, and an aerogel threshold Cherenkov detector ($`n=1.035`$) complete the experiment. A thin shield on the top and bottom sides absorbs low energy particles present in the Earth radiation belts. The detector works in vacuum: the amount of material in front of the $`ToF`$ is about $`1.5g/cm^2`$, and $`4g/cm^2`$ in front of the Tracker.
The magnet is based on recent advancements in permanent magnetic material technology which made it possible to use very high grade Nd-Fe-B to construct a permanent magnet with $`BL^2=0.15Tm^2`$ weighing $`2`$ tons. The magnet has a cylindrical shape with a height of $`80cm`$ and an internal diameter of $`100cm`$. A charged particle traversing the spectrometer experiences a dipole field orthogonal to the cylinder axis: it triggers the experiment through the $`ToF`$ system (planes $`S1`$ to $`S4`$), which also measures the particle velocity ($`\beta `$) with a typical resolution of $`105ps`$ over a distance of $`1.4m`$.
The curvature of the tracks is measured by up to six layers of silicon double sided detectors, supported on ultralight honeycomb planes: the total material traversed by a particle is very small, $`3.2\%`$ of $`X_0`$ over the tracking volume for normal incidence. The momentum resolution of the Silicon Spectrometer is about $`8\%`$ in the region between $`3`$ and $`10`$ $`GV`$ of rigidity: at lower rigidities the resolution worsens due to multiple scattering while, at high energy, the maximum detectable rigidity ($`\frac{\mathrm{\Delta }R}{R}=100\%`$) is about $`500GV`$. The Tracker rigidity resolution function was measured at the GSI ion accelerator facility in Darmstadt in October 1998, using $`He`$ and $`C`$ beams, and at CERN in November of the same year, using a proton beam. The results confirm the design value; an example of the measured resolution is shown in Figure 4. The parameters of the Silicon Spectrometer are given in Table 1: about $`45\%`$ of the Tracker sensitive area was equipped during the precursor flight, with a corresponding reduction of the spectrometer acceptance.
Both the $`ToF`$ scintillators and the Silicon Tracker layers measure $`\frac{dE}{dx}`$, allowing a multiple determination of the absolute value of the particle charge, Z. Figure 5 shows the measurement of the energy deposited by different light nuclei during the precursor flight.
All detector elements have undergone thermo-vacuum tests during production, which demonstrated that neither the deep vacuum nor important temperature variations deteriorate the detector performance. Systematic vibration tests verified that the mechanical design and workmanship were suited to withstand the mechanical stresses during launch and landing.
During the STS91 mission the Spectrometer collected data at trigger rates varying from $`100Hz`$ at the equator to $`700Hz`$ at $`\pm 52^o`$, where the event rate was limited by the data acquisition speed.
After preprocessing and compression, the data were stored on hard disks located on the Shuttle. A total of about 100 million triggers were recorded during the ten day mission. A considerable part of the time, however, the Shuttle was docked to the MIR station: in this condition the orientation was not good for AMS, since it was sometimes pointing towards the Earth. Besides, some elements of the station were in the AMS field of view, thus producing additional unwanted background. The useful time when only deep space was seen by the experiment was about 4 days. Samples of the data ($`<10\%`$ of the total) were also sent to ground in real time using S-band receiving ground stations, at an average rate of $`1`$ Mbit/s. Although only rough calibrations were applied to these data, the reconstructed events were used online to monitor AMS operating conditions during the mission.
For example, Figure 6 shows the Tracker response to different types of CR during the MIR docking period: the double logarithmic plot of $`\frac{dE}{dx}`$ versus $`|R|=|\frac{p}{Z}|`$ clearly shows bands corresponding to light particles ($`e^\pm ,\mu ^\pm ,\pi ^\pm `$), $`p^\pm `$, $`{}_{}{}^{3}He/^4He`$ and heavier ions. Figure 7 shows the CR mass spectrum obtained from $`R`$ measured by the Tracker and $`\beta `$ measured by the ToF.
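The mass determination behind Fig. 7 can be sketched as follows (a toy illustration in GeV/GV units, not the flight reconstruction code): from $`p=ZeR`$ and $`p=m\beta \gamma c`$ one gets $`m=ZR\sqrt{1-\beta ^2}/\beta `$.

```python
import numpy as np

# Toy sketch of the mass reconstruction: rigidity R = p/(Ze) from the Tracker
# and velocity beta from the ToF give m = Z*R*sqrt(1-beta^2)/beta
# (masses in GeV, rigidities in GV).

def mass_gev(R_gv, beta, Z):
    return Z * R_gv * np.sqrt(1.0 - beta**2) / beta

# Round-trip check with a helium-4 nucleus (Z = 2, m ~ 3.727 GeV):
m_he4 = 3.727
R = 2.0                                        # GV
p = 2 * R                                      # momentum in GeV
beta = (p / m_he4) / np.sqrt(1.0 + (p / m_he4) ** 2)   # beta from beta*gamma = p/m
```

Feeding the simulated $`R`$ and $`\beta `$ back into `mass_gev` recovers the input mass, which is the logic that separates the $`{}_{}{}^{3}He/^4He`$ bands in the measured spectrum.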
A candidate $`\overline{p}`$ event, measured online, is also shown in Figure 8: one can note that the occupancy of the Silicon Tracker is very low, allowing unambiguous reconstruction of the particle trajectory in the magnetic field, and hence of the sign of its charge and of its momentum.
After landing, the full set of hard disks containing the data was duplicated and the copy was transported to CERN. During the month of August we determined the in-flight calibration constants for the various detectors. The first mass production took place in the fall of 1999, using a cluster of Alpha stations located at CERN.
## 3 Antimatter Search
To search for nuclear antimatter, we search for particles with
* negative rigidity;
* modulus of the charge $`Z`$ equal or greater than 2;
* mass equal or greater than the He mass.
These quantities are obtained through repeated measurements of the velocity and its direction ($`ToF`$ counters), the signed momentum by the Silicon Spectrometer, the absolute value of the charge from $`\frac{dE}{dx}`$ measurements of up to four $`ToF`$ layers and up to 6 Silicon Tracker layers.
We start with a preselection of $`Z>1`$ particles and apply soft quality cuts to reject background for particles with negative momentum ($`\overline{He}`$ and antinuclei with $`Z>2`$ candidates). The effect of these cuts is studied on control samples containing $`5.7`$ M $`He`$ and $`276`$ K $`Z>2`$ events.
* To reject background due to single nuclear scattering in the Tracker we apply cuts on the particle rigidity $`R`$. $`R`$ is measured by the Silicon Spectrometer, using tracks having 5 or 6 points. Since during the precursor flight the Tracker was only partially equipped, we included in this analysis also events containing a track detected on only 4 planes.
The particle rigidity is measured three times: the first two measurements, $`r_1^n`$ and $`r_2^n`$, are obtained by using three consecutive points out of the total number of measured points $`n`$, in the following way: 6 point patterns $`r_1^6=r_{123}`$, $`r_2^6=r_{456}`$; 5 point patterns $`r_1^5=r_{123}`$, $`r_2^5=r_{345}`$; 4 point patterns $`r_1^4=r_{123}`$, $`r_2^4=r_{234}`$, where the lower indices represent the consecutive planes participating in the track fit. The third measurement, $`R`$, is obtained from a fit of all the points associated with one track. In order to take the presence of multiple scattering properly into account, we used the GEANE fitting procedure . The three determinations of the rigidity are compared, requiring that they give the same sign of the charge and consistent measurements of the momentum components. In particular, the comparison of the relative rigidity error $`\frac{\mathrm{\Delta }R}{R}`$ with the rigidity asymmetry $`A_{12}=(r_1^n-r_2^n)/(r_1^n+r_2^n)`$ allows the removal of about $`90\%`$ of the negative momentum particles while keeping $`79\%`$ of the $`He`$ control sample.
* To reject background due to an interaction of the primary particle in the Tracker material we apply cuts on the isolation of the clusters detected on the silicon planes. Events where too much energy is observed within $`5mm`$ of the track are rejected. This cut rejects fifteen times more particles in the sample with negative momentum; the positive momentum control samples are basically unaffected ($`97\%`$ of the events pass the cuts).
* To separate upward going from downward going particles we use the $`ToF`$ measurement.
* The identification of the absolute value of the particle charge is based on the repeated measurement of $`\frac{dE}{dX}`$ in the Silicon Tracker and $`ToF`$: we measure a contamination between $`p`$ and $`He`$ below the $`10^{-7}`$ level.
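The sign-consistency and asymmetry requirements of the rigidity cut described in the first item above can be sketched as follows (a toy illustration; the threshold value is hypothetical, not the one used in the analysis):

```python
# Toy sketch of the rigidity-consistency preselection: the partial-fit and
# full-fit rigidities must agree in sign, and the asymmetry of the two
# partial fits is compared with the relative error of the full fit.
# The default threshold `max_asym` is illustrative only.

def rigidity_asymmetry(r1, r2):
    """A12 = (r1 - r2) / (r1 + r2)."""
    return (r1 - r2) / (r1 + r2)

def passes_preselection(r1, r2, R, dR_over_R, max_asym=0.1):
    # all three rigidity determinations must give the same sign of the charge
    same_sign = (r1 > 0) == (r2 > 0) == (R > 0)
    a12 = abs(rigidity_asymmetry(r1, r2))
    # the partial-fit asymmetry must be consistent with the full-fit error
    return same_sign and a12 < max(max_asym, abs(dR_over_R))
```

A genuine negative-rigidity candidate must pass such consistency checks with all three determinations negative, which is what suppresses spillover from mis-measured positive particles.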
After the preselection we apply additional $`\chi ^2`$-cuts on the track and $`ToF`$ measurements and on an overall likelihood function describing the probability of an event to be compatible with $`He`$, or heavier nucleus, kinematics, mass and velocity. Some of these cuts are stricter for events hitting only 4 planes. After these cuts all the candidates in the $`\overline{He}`$ sample were removed, while $`2.8`$ M events in the $`He`$ sample survived, giving a total efficiency of about $`49\%`$. Similarly, $`156`$ K events with $`Z>2`$ survived the cuts, but none with negative momentum, with a corresponding cut efficiency of about $`56\%`$. The spectra of the positive charge samples after the cuts are shown in Figure 9: the spectra extend above 100 GV of rigidity for both samples.
The corresponding Tracker rigidity resolution is shown in Figure 10.
## 4 Antimatter Limits
To establish a preliminary antimatter upper limit we proceed as follows. The flux of incident $`He`$ nuclei in a rigidity bin $`(r,r+\mathrm{\Delta }r)`$ as a function of the measured rigidity $`r`$, $`\mathrm{\Phi }_{He}(r)`$, is related to the measured He flux, $`\mathrm{\Phi }_{He}^M(r)`$, by
$$\mathrm{\Phi }_{He}(r)=ϵ_{He}^{-1}(r)\mathrm{\Phi }_{He}^M(r)$$
(1)
where $`ϵ_{He}(r)`$ is the rigidity dependent selection efficiency of the cuts discussed in the previous section, simulated through a complete MC simulation using the GEANT package. The trigger efficiency and the rigidity dependence of the anticounter veto, as well as the corrections due to electronics dead time, which was important in polar regions, were checked with events taken with an unbiased trigger. We also corrected $`ϵ_{He}(r)`$ for the $`He`$-$`\overline{He}`$ difference in absorption cross sections .
Since we detected no $`\overline{He}`$ candidate, the differential upper limit for the flux ratio at $`95\%CL`$ is given by:
$$\frac{\mathrm{\Phi }_{\overline{He}}(r)}{\mathrm{\Phi }_{He}(r)}<\frac{3/ϵ_{\overline{He}}(r)}{ϵ_{He}^{-1}(r)\mathrm{\Phi }_{He}^M(r)}$$
(2)
Since no $`\overline{He}`$ were found over all the measured rigidity range:
$$\int \mathrm{\Phi }_{\overline{He}}^M(r)\,dr<3$$
(3)
With the model dependent assumption that the $`\overline{He}`$ rigidity spectrum coincides with the $`He`$ spectrum we obtain:
$$\frac{\mathrm{\Phi }_{\overline{He}}}{\mathrm{\Phi }_{He}}<1.14\times 10^{-6}$$
(4)
Similarly with $`Z>2`$ data we obtain
$$\frac{\mathrm{\Phi }_{\overline{Z>2}}}{\mathrm{\Phi }_{Z>2}}<1.9\times 10^{-5}$$
(5)
We can also give a conservative upper limit which does not depend on the unknown $`\overline{He}`$ energy spectrum . We integrate the arguments in equation (2) between $`r_{min}`$ and $`r_{max}`$, taking the minimum value of the efficiency in this rigidity interval, $`ϵ_{\overline{He}}^{min}=min[ϵ_{\overline{He}}(r)]_{r_{min}}^{r_{max}}`$. We calculate
$$\frac{\int _{r_{min}}^{r_{max}}\mathrm{\Phi }_{\overline{He}}\,dr}{\int _{r_{min}}^{r_{max}}\mathrm{\Phi }_{He}\,dr}<\frac{3/ϵ_{\overline{He}}^{min}}{\int _{r_{min}}^{r_{max}}ϵ_{He}^{-1}(r)\mathrm{\Phi }_{He}^M(r)\,dr}$$
(6)
which for $`r_{min}=1.6GV`$ and $`r_{max}=20GV`$ gives a model independent limit on $`\mathrm{\Phi }_{\overline{He}}/\mathrm{\Phi }_{He}`$ of $`1.7\times 10^{-6}`$ at $`95\%`$ CL, while for $`Z>2`$ the corresponding limit is $`2.8\times 10^{-5}`$. Figure 11 shows this preliminary result for $`\overline{He}`$ compared with previously published results and the expected AMS sensitivity on the ISS. Our result is better than the best limit published by BESS, obtained by adding the data of the '93, '94, and '95 flights at $`56^o`$ of latitude . It also spans a larger rigidity interval. For $`Z>2`$ our result is about 5 times better than previously published results. The large AMS acceptance made it possible to set these stringent limits using only 4 days of exposure to deep space.
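The arithmetic of the limit (6) for zero observed candidates can be sketched as follows (the per-bin efficiencies and measured He counts below are toy numbers, not the flight values; only the structure of the computation is meant to be faithful):

```python
import numpy as np

# Sketch of the model-independent limit (6) with zero antihelium candidates.
# Toy inputs, NOT the flight values.

eps_he = np.array([0.45, 0.50, 0.52, 0.50])         # He selection efficiency per rigidity bin
eps_ahe = np.array([0.45, 0.50, 0.52, 0.50])        # assumed antihelium efficiency per bin
n_he_meas = np.array([4.0e5, 3.0e5, 2.0e5, 1.0e5])  # measured He counts per bin

# 95% CL for zero events: fewer than 3 antihelium, taken with the worst-case
# efficiency, divided by the efficiency-corrected He counts summed over bins.
limit = (3.0 / eps_ahe.min()) / np.sum(n_he_meas / eps_he)
```

With millions of efficiency-corrected $`He`$ events in the denominator, a handful of excluded $`\overline{He}`$ events in the numerator naturally yields a ratio at the $`10^{-6}`$ level, as quoted above.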
## 5 Conclusion
The AMS experiment successfully completed its first precursor flight in June 1998 with excellent performance of all subsystems, collecting about 100 million primary CRs during 152 orbits around the Earth. The AMS upper limits on the existence of antimatter improve on the results of nearly 40 years of similar searches using stratospheric balloons.
There has never been a sensitive magnetic spectrometer in space covering the energy range up to hundreds of GeV. After its installation on the ISS in 2004, AMS will measure the CR composition with an accuracy orders of magnitude better than before. This instrument will open a new sensitivity window in the search for antimatter and for supersymmetric dark matter in the galactic halo.
# Aharonov–Bohm Effect and Coordinate Transformations.
## 1 Introduction.
Quantum Theory (QT) has become one of the most successful human achievements, and almost all of physics now relies upon QT. Nevertheless, there are some old conceptual puzzles that still beset this theory. For instance, the so-called quantum measurement problem (the problem of the quantum limit) , the possible incompleteness of the general-relativistic description of gravity in the context of QT , or the possible discrepancy, in a curved manifold, between the Feynman and Schrödinger formalisms .
At the classical level gravity can be understood as a purely geometric effect, the motion of a free classical particle moving in a curved manifold is given by the Weak Equivalence Principle (WEP), i.e., the particle moves along geodesics. The inclusion of additional interactions is done resorting to SEP, the famous “semicolon goes to coma rule” . This principle tells us that locally the laws of physics are the special–relativistic laws.
Classically the role of geometry is local, i.e., the dynamics of a free classical particle located at a certain point $`P`$ of any Riemannian manifold is, according to General Relativity (GR), determined by the geometric properties of this manifold at $`P`$ (the motion equations can be written in terms of the connection coefficients, which at $`P`$ depend only on the values of the components of the metric and their derivatives evaluated at $`P`$), geometry at any other point plays no role in the determination of the motion when the particle is at $`P`$. If we consider the geodesic deviation between two particles, then we would obtain information of Riemann tensor, but once again only of the region where the motion of these classical particles takes place.
Nevertheless, at the quantum level the situation could be less satisfactory. Indeed, the experiment of Colella, Overhauser and Werner tells us that at the quantum level gravity is no longer a purely geometric effect : the mass of the employed particles appears explicitly in the interference term. This fact emerges once again if we continuously measure the position of the particles (even if the particles follow the same trajectory), and could lead to the emergence, in some cases, of something like a gravitational quantum Zeno effect .
In order to understand better the possible appearance of nonlocal effects in QT let us at this point address an already known similarity between electrodynamics and gravitation.
At the classical level the motion of a charged particle is solely determined by the Lorentz force law and Newton’s second law. We also know that the electric and magnetic fields are invariant under the so called gauge transformations .
Nevertheless, this is not the situation at the quantum level. The role that the concept of potential plays in physics has been deeply modified by the Aharonov–Bohm effect (AB) . Indeed, even though the Lorentz force vanishes at those points at which the wave function has nonvanishing values, the dynamical behavior is sensitive to the existence of a magnetic field inside a region where the charged particle can never enter: the vector potential $`𝐀`$ has in this effect a measurable consequence, detectable in the interference pattern of a charged particle. AB also shows us that in QT there are nonlocal effects, i.e., the features of the vector potential at points where the wave function vanishes affect the dynamics of the particle; in other words, we could say that in QT $`𝐀`$ sometimes has a nonlocal role. This does not happen in the classical case, where forces have a local character. This effect has already been confirmed experimentally .
In classical physics $`𝐀`$ is a gauge field, it has no physical meaning (at least before a gauge is imposed), it is the field strength which is physically relevant. Concerning geometry something similar happens in relation with the components of the metric tensor, $`g_{\mu \nu }`$. They are deprived of physical meaning (of course, the metric tensor has physically relevant meaning, but not its components), and sometimes play also the role of a potential, i.e., gravitational potential. Therefore we may wonder if we could find a construction in which (in analogy with the electrodynamical case, where a nongauge invariant field renders nonlocal effects in QT) these noninvariant (under coordinate transformations) parameters, $`g_{\mu \nu }`$, could allow us to find nonlocal effects in QT. It is in this sense that here we will speak of a coordinate transformations–induced Aharonov–Bohm effect, the appearance of a nonlocal behavior in nonrelativistic QT by means of coordinate transformations.
In this work we will consider a Minkowskian spacetime and two coordinate systems in it. One of them is an inertial system, while the second one is accelerated. We will prove resorting to a Gedankenexperiment, which is very similar to the AB case, that in the accelerated system nonlocal effects could appear in the context of nonrelativistic QT, and that this is a geometry–induced feature.
## 2 Transformations and Aharonov–Bohm Effect.
Consider a Minkowskian spacetime, and let us denote the coordinates of an inertial coordinate system by $`x^{\stackrel{~}{\alpha }}`$. The matrix elements of the coordinate transformation leading to a second coordinate system (which in general is noninertial, and whose coordinates will be denoted by $`x^\beta `$) are given by $`\mathrm{\Lambda }_{\stackrel{~}{\alpha }}^\beta `$. In other words, we have $`x^\beta =\mathrm{\Lambda }_{\stackrel{~}{\alpha }}^\beta x^{\stackrel{~}{\alpha }}`$. Notice that no conditions have been imposed upon $`\mathrm{\Lambda }_{\stackrel{~}{\alpha }}^\beta `$.
Let us now proceed to analyze the movement of a quantum particle, and denote its corresponding Lagrangian by $`L`$. We will restrict ourselves to the case of low velocities, i.e., velocities much smaller than the speed of light.
The motion of a free classical particle in a Riemannian manifold is given by the corresponding geodesics . In the case of a Minkowskian spacetime, in an inertial coordinate system the motion equations are obtained by calculating the extremal curves of the following expression
$$S=\int (\eta _{\stackrel{~}{\mu }\stackrel{~}{\nu }}\frac{dx^{\stackrel{~}{\mu }}}{d\tau }\frac{dx^{\stackrel{~}{\nu }}}{d\tau })^{1/2}d\tau .$$
(1)
But in the noninertial system one has to consider not expression (1) but
$$S=\int (g_{\mu \nu }\frac{dx^\mu }{d\tau }\frac{dx^\nu }{d\tau })^{1/2}d\tau .$$
(2)
From (2) we deduce the motion equations in the non–inertial system
$$\frac{d^2x^\beta }{d\tau ^2}+\mathrm{\Gamma }_{\mu \nu }^\beta \frac{dx^\mu }{d\tau }\frac{dx^\nu }{d\tau }=0,$$
(3)
here $`\mathrm{\Gamma }_{\mu \nu }^\beta `$ are the so called connection coefficients, and $`\tau `$ represents proper time.
In other words, we may interpret expression (2) as the action of a classical particle in the noninertial system, and $`L=(g_{\mu \nu }\frac{dx^\mu }{d\tau }\frac{dx^\nu }{d\tau })^{1/2}`$ as its Lagrangian.
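As a concrete illustration (added here; it is not part of the original argument), the connection coefficients of Eq. (3) can be generated symbolically from any metric via $`\mathrm{\Gamma }_{\mu \nu }^\beta =\frac{1}{2}g^{\beta \sigma }(g_{\sigma \mu ,\nu }+g_{\sigma \nu ,\mu }-g_{\mu \nu ,\sigma })`$. The sketch below does this with sympy for the simple test case of the flat plane in polar coordinates, recovering the familiar nonzero coefficients:

```python
import sympy as sp

def christoffel(g, coords):
    """Gamma[b][mu][nu] = 1/2 g^{b s} (d_mu g_{s nu} + d_nu g_{s mu} - d_s g_{mu nu})."""
    dim = len(coords)
    ginv = g.inv()
    return [[[sp.simplify(sp.Rational(1, 2) * sum(
        ginv[b, s] * (sp.diff(g[s, nu], coords[mu])
                      + sp.diff(g[s, mu], coords[nu])
                      - sp.diff(g[mu, nu], coords[s]))
        for s in range(dim)))
        for nu in range(dim)] for mu in range(dim)] for b in range(dim)]

rho, phi = sp.symbols('rho phi', positive=True)
g = sp.Matrix([[1, 0], [0, rho**2]])      # flat 2D space in polar coordinates
G = christoffel(g, (rho, phi))
print(G[0][1][1], G[1][0][1])             # Gamma^rho_{phi phi} = -rho, Gamma^phi_{rho phi} = 1/rho
```

The same routine applied to the accelerated-system metric considered in this paper would yield the coefficients entering Eq. (3) directly.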
We now introduce in this Minkowskian spacetime a very particular coordinate system.
Consider a cylindrical volume $`H`$ (this volume is infinitely long, and its cross section, denoted by $`E`$, has radius $`\rho _a`$, see figure). Let us now introduce a vector field, denoted by $`𝐀`$, such that it satisfies the following conditions: (i) inside $`H`$ (if $`0\le \rho \le \rho _a`$) we have, in cylindrical coordinates, $`A_\rho =0`$, $`A_z=0`$, and $`A_\varphi =F\rho /2`$; (ii) outside $`H`$ (if $`\rho _a\le \rho `$), $`A_\rho =0`$, $`A_z=0`$, and finally $`A_\varphi =F\rho _a^2/(2\rho )`$. Here $`\rho _a>0`$ is a fixed number and $`F`$ is a nonvanishing real number. Clearly, $`𝐀`$ is everywhere continuous. From this definition we may evaluate its rotational; $`\nabla \times 𝐀=\mathrm{𝟎}`$, if $`\rho _a\le \rho `$, and $`\nabla \times 𝐀=F\widehat{𝐳}=𝐅`$, if $`0\le \rho \le \rho _a`$, being $`\widehat{𝐳}`$ the unit vector along the axis of symmetry.
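A quick numerical check of these statements (an added illustration with arbitrary $`F`$ and $`\rho _a`$; the exterior profile $`A_\varphi =F\rho _a^2/(2\rho )`$ is the one consistent with continuity at $`\rho _a`$ and with a vanishing rotational outside) uses $`(\nabla \times 𝐀)_z=\frac{1}{\rho }\frac{\partial (\rho A_\varphi )}{\partial \rho }`$:

```python
import numpy as np

F, rho_a = 2.0, 1.0                      # arbitrary demo values

def A_phi(rho):
    # azimuthal component: F*rho/2 inside the cylinder, F*rho_a^2/(2*rho) outside
    return np.where(rho <= rho_a, F * rho / 2.0, F * rho_a**2 / (2.0 * rho))

def curl_z(rho, h=1e-6):
    # (curl A)_z = (1/rho) d(rho * A_phi)/drho, via central differences
    return ((rho + h) * A_phi(rho + h) - (rho - h) * A_phi(rho - h)) / (2 * h * rho)

print(curl_z(0.5), curl_z(3.0))          # ~F inside, ~0 outside
# line integral around a circle of radius 2.5 equals the enclosed "flux" pi*rho_a^2*F
print(2 * np.pi * 2.5 * A_phi(2.5), np.pi * rho_a**2 * F)
```

The last line shows that any circle outside $`H`$ encloses the same quantity $`F\pi \rho _a^2`$, which reappears in Eq. (16).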
We may now define, using vector field $`𝐀`$, the components of the metric of our accelerated coordinate system, namely $`g_{0\varphi ,\rho }=A_\varphi `$, $`g_{0z,\rho }=A_z`$, $`g_{0\rho ,\rho }=A_\rho `$, $`g_{00,\nu }=0`$, and $`g_{0l,j}=0`$, if $`j\ne \rho `$ (here $`l`$ and $`j`$ represent space coordinates, while $`\nu `$ denotes spacetime ones).
The mathematical consistency of this noninertial metric is determined by the existence of a coordinate transformation that could render the aforesaid conditions. In order to see that such a coordinate system may exist, note that we must determine 16 functions, namely $`\mathrm{\Lambda }_\mu ^{\stackrel{~}{\beta }}`$ (we have differential equations in terms of the components of the metric of the accelerated system, and we also know that $`g_{\mu \nu }=\mathrm{\Lambda }_\mu ^{\stackrel{~}{\beta }}\mathrm{\Lambda }_\nu ^{\stackrel{~}{\alpha }}\eta _{\stackrel{~}{\beta }\stackrel{~}{\alpha }}`$). But we have only 13 equations, and in consequence the system, in principle, is solvable.
We now construct a Gedankenexperiment that could be considered, in some way, an extrapolation of the famous Aharonov–Bohm construction .
Take two points $`P`$ (source point) and $`Q`$ (detection point) in this manifold such that the above mentioned cylindrical volume lies between them (see figure).
A particle will move from $`P`$ to $`Q`$. It first passes through a conventional two–slit device (here we consider each slit as a finite “hole”), and afterwards enters a region in which a forbidden volume $`D`$ for this particle exists (this volume $`D`$ is infinitely long, and contains in its interior the, also infinitely long, cylinder $`H`$ in which $`\nabla \times 𝐀\ne \mathrm{𝟎}`$). Then it is detected at point $`Q`$. In other words, after passing the two–slit device it remains always on one “side” of space, either “right” or “left”; volume $`D`$ acts as a barrier for the particle.
Under these conditions the proper time of any curve $`C`$ joining $`P`$ and $`Q`$ is given by
$$S=\int _C(g_{\mu \nu }\frac{dx^\mu }{d\tau }\frac{dx^\nu }{d\tau })^{1/2}d\tau .$$
(4)
Let us now suppose that the source and detection points are moved along the $`\rho `$ coordinate a distance $`ϵ`$, i.e., any curve $`x^\alpha =C^\alpha (\tau )`$ joining $`P`$ and $`Q`$ will now become $`x^\alpha =C^\alpha (\tau )`$ for $`\alpha \ne \rho `$, and $`\rho (\tau )=C^\rho (\tau )+ϵ`$, with the condition $`|ϵ|<<1`$.
In this new situation expression (4) becomes
$$S=\int _C(g_{\mu \nu }\frac{dx^\mu }{d\tau }\frac{dx^\nu }{d\tau }+ϵ\frac{\partial g_{\mu \nu }}{\partial \rho }\frac{dx^\mu }{d\tau }\frac{dx^\nu }{d\tau })^{1/2}d\tau .$$
(5)
We now consider a quantum particle moving (here only the case of low velocities is analyzed) from the new source point to the new detection point. The description of the movement of this particle can be done using Feynman’s path integral formulation for a nonrelativistic particle , thus its propagator $`U`$ is given by
$$U(𝐱_2,\tau ^{\prime \prime };𝐱_1,\tau ^{\prime })=\int d[𝐱(\tau )]exp\left(\frac{i}{\hbar }\int _{\tau ^{\prime }}^{\tau ^{\prime \prime }}(g_{\mu \nu }\frac{dx^\mu }{d\tau }\frac{dx^\nu }{d\tau }+ϵ\frac{\partial g_{\mu \nu }}{\partial \rho }\frac{dx^\mu }{d\tau }\frac{dx^\nu }{d\tau })^{1/2}d\tau \right),$$
(6)
being $`𝐱_2`$ and $`𝐱_1`$ the space coordinates of the new detection point and of the new source point, respectively.
But low velocities ($`\frac{dt}{d\tau }\approx 1`$, here $`c=1`$) and $`g_{00,\rho }=0`$ imply that the propagator is approximately
$$U(𝐱_2,\tau ^{\prime \prime };𝐱_1,\tau ^{\prime })=\int d[𝐱(t)]exp\left(\frac{i}{\hbar }\int _{\tau ^{\prime }}^{\tau ^{\prime \prime }}(L+ϵA_l\frac{dx^l}{dt})dt\right),$$
(7)
being $`L=(g_{\mu \nu }\frac{dx^\mu }{dt}\frac{dx^\nu }{dt})^{1/2}`$.
## 3 Interference Terms.
Let us now calculate the probability of detecting our particle. Clearly, the propagator at this point is the sum of two terms, the propagator “right” and the propagator “left”.
$`U(𝐱_2,\tau ^{\prime \prime };𝐱_1,\tau ^{\prime })={\displaystyle \int _{(right)}}d[𝐱(t)]exp\left({\displaystyle \frac{i}{\hbar }}{\displaystyle \int _{\tau ^{\prime }}^{\tau ^{\prime \prime }}}[L+ϵA_l{\displaystyle \frac{dx^l}{dt}}]dt\right)`$
$`+{\displaystyle \int _{(left)}}d[𝐱(t)]exp\left({\displaystyle \frac{i}{\hbar }}{\displaystyle \int _{\tau ^{\prime }}^{\tau ^{\prime \prime }}}[L+ϵA_l{\displaystyle \frac{dx^l}{dt}}]dt\right).`$ (8)
As was mentioned before, in this Gedankenexperiment the particles cannot go from the left–hand side to the right–hand side (or from the right–hand side to the left–hand side). These two conditions imply that in our case orbits are confined to a topologically restricted part of space.
If we carry out the so called “skeletonization” , then we may see that the contribution to the respective integrals of each trajectory can be written as follows
$$\prod _{n=1}^{N-1}exp\left(\frac{i}{\hbar }S[n,n+1]+\frac{i}{\hbar }T_{top}\right)exp\left(ϵ\frac{i}{\hbar }\int _C𝐀\cdot d𝐬\right),$$
(9)
being $`C`$ the trajectory under consideration joining the new source point and the new detection point, $`ds^l=\frac{dx^l}{dt}dt`$, and $`S`$ the action associated to $`L`$. The new term $`T_{top}`$ is a pure boundary term, which keeps track of the imposed topological restrictions . Either “right” or “left” the rotational of $`𝐀`$ vanishes, therefore the line integral in the last term of expression (9) depends only on the initial and final points, and not on $`C`$ (it is readily seen that $`C`$ is not a closed curve, and that it lies outside cylinder $`H`$). In other words, if we consider two trajectories “right” (“left”), the contribution of our vector field $`𝐀`$ to each one of them is the same, i.e., the exponential of the line integral of $`𝐀`$ is a common factor.
Hence the propagator becomes now
$`U(𝐱_2,\tau ^{\prime \prime };𝐱_1,\tau ^{\prime })=exp\left(ϵ{\displaystyle \frac{i}{\hbar }}{\displaystyle \int _{C1}}𝐀\cdot d𝐬\right){\displaystyle \int _{(right)}}d[𝐱(t)]exp\left({\displaystyle \frac{i}{\hbar }}{\displaystyle \int _{\tau ^{\prime }}^{\tau ^{\prime \prime }}}\stackrel{~}{L}dt\right)`$
$`+exp\left(ϵ{\displaystyle \frac{i}{\hbar }}{\displaystyle \int _{C2}}𝐀\cdot d𝐬\right){\displaystyle \int _{(left)}}d[𝐱(t)]exp\left({\displaystyle \frac{i}{\hbar }}{\displaystyle \int _{\tau ^{\prime }}^{\tau ^{\prime \prime }}}\stackrel{~}{L}dt\right).`$ (10)
Here $`C1`$ and $`C2`$ are any trajectory (joining points $`P`$ and $`Q`$) “right” and “left”, respectively, and $`\stackrel{~}{L}`$ represents the Lagrangian function $`L`$ plus the topological term $`T_{top}`$.
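The path independence used above can also be checked numerically. The sketch below (an added illustration; parameter values are arbitrary and the field uses the exterior profile $`A_\varphi =F\rho _a^2/(2\rho )`$) integrates $`𝐀`$ along two different curves that join the same endpoints while staying on the same side of the cylinder:

```python
import numpy as np

F, rho_a = 2.0, 1.0                      # arbitrary demo values

def A_field(x, y):
    # Cartesian components of A = A_phi(rho) * phi_hat, with the solenoid-like profile
    rho = np.hypot(x, y)
    a_phi = np.where(rho <= rho_a, F * rho / 2, F * rho_a**2 / (2 * rho))
    return -a_phi * y / rho, a_phi * x / rho

def line_integral(path, n=20001):
    # trapezoidal approximation of the line integral of A along a parametrized curve
    t = np.linspace(0.0, 1.0, n)
    x, y = path(t)
    Ax, Ay = A_field(x, y)
    return np.sum(0.5 * (Ax[1:] + Ax[:-1]) * np.diff(x)
                  + 0.5 * (Ay[1:] + Ay[:-1]) * np.diff(y))

# two curves from P = (-3, 0) to Q = (3, 0), both avoiding the cylinder on the same side
semicircle = lambda t: (-3 * np.cos(np.pi * t), -3 * np.sin(np.pi * t))
ellipse = lambda t: (-3 * np.cos(np.pi * t), -5 * np.sin(np.pi * t))

I1, I2 = line_integral(semicircle), line_integral(ellipse)
print(I1, I2)    # equal: on a curl-free side only the endpoints matter
```

Two trajectories on opposite sides, by contrast, differ by the flux through the cylinder, which is exactly what produces the interference term below.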
From expression (10) we may evaluate the interference term.
$$I=2\alpha cos\left(ϵ\frac{1}{\hbar }\oint _{\stackrel{~}{C}}𝐀\cdot d𝐬\right)-2\beta sin\left(ϵ\frac{1}{\hbar }\oint _{\stackrel{~}{C}}𝐀\cdot d𝐬\right).$$
(11)
In expression (11) the closed curve $`\stackrel{~}{C}`$ is defined with $`C1`$ and $`C2`$. Firstly, we move from the new source point to the new detection point along $`C1`$, and then backwards along $`C2`$. We have also introduced the following definitions
$$\alpha =Re\{\int _{(right)}d[𝐱(t)]exp\left(\frac{i}{\hbar }\int _{\tau ^{\prime }}^{\tau ^{\prime \prime }}\stackrel{~}{L}dt\right)\int _{(left)}d[𝐱(t)]exp\left(-\frac{i}{\hbar }\int _{\tau ^{\prime }}^{\tau ^{\prime \prime }}\stackrel{~}{L}dt\right)\},$$
(12)
and
$$\beta =Im\{\int _{(right)}d[𝐱(t)]exp\left(\frac{i}{\hbar }\int _{\tau ^{\prime }}^{\tau ^{\prime \prime }}\stackrel{~}{L}dt\right)\int _{(left)}d[𝐱(t)]exp\left(-\frac{i}{\hbar }\int _{\tau ^{\prime }}^{\tau ^{\prime \prime }}\stackrel{~}{L}dt\right)\}.$$
(13)
Using Stokes’ theorem we may rewrite this interference term as
$$I=2\alpha cos\left(ϵ\frac{1}{\hbar }\int _\mathrm{\Omega }(\nabla \times 𝐀)\cdot d𝛀\right)-2\beta sin\left(ϵ\frac{1}{\hbar }\int _\mathrm{\Omega }(\nabla \times 𝐀)\cdot d𝛀\right),$$
(14)
being $`\mathrm{\Omega }`$ an area bounded by $`\stackrel{~}{C}`$. But we have defined our vector field $`𝐀`$ such that its rotational vanishes everywhere on $`\mathrm{\Omega }`$ except in a small area located inside the forbidden volume $`D`$, i.e., in the cross section of cylinder $`H`$, which was denoted by $`E`$ and has radius $`\rho _a`$. Hence the nonvanishing part of the surface integral allows us to rewrite (14) as follows
$$I=2\alpha cos\left(ϵ\frac{1}{\hbar }\int _E(\nabla \times 𝐀)\cdot d𝐄\right)-2\beta sin\left(ϵ\frac{1}{\hbar }\int _E(\nabla \times 𝐀)\cdot d𝐄\right).$$
(15)
From our previous definitions we obtain the final form of this interference term
$$I=2\alpha cos\left(ϵ\frac{F}{\hbar }\pi \rho _a^2\right)-2\beta sin\left(ϵ\frac{F}{\hbar }\pi \rho _a^2\right).$$
(16)
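The phase in Eq. (16) is just $`ϵ/\hbar `$ times the “flux” $`F\pi \rho _a^2`$ carried by the cylinder. A short numerical check (an added illustration; all parameter values are arbitrary, with $`\hbar =1`$) confirms that a closed loop enclosing $`H`$ picks up exactly that flux, independently of the loop radius:

```python
import numpy as np

F, rho_a, hbar, eps = 2.0, 1.0, 1.0, 0.05   # arbitrary demo values, hbar = 1

def A_phi(rho):
    # azimuthal component: F*rho/2 inside, F*rho_a^2/(2*rho) outside
    return np.where(rho <= rho_a, F * rho / 2, F * rho_a**2 / (2 * rho))

def loop_integral(R_c, n=20001):
    # closed line integral of A around a circle of radius R_c: A_phi(R_c) * R_c dphi
    phi = np.linspace(0.0, 2 * np.pi, n)
    f = A_phi(np.full_like(phi, R_c)) * R_c
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(phi))

flux = np.pi * rho_a**2 * F
print(loop_integral(2.0), loop_integral(7.0), flux)   # all equal

# interference term of Eq. (16) for illustrative alpha, beta
alpha, beta = 0.3, 0.1
phase = eps * F * np.pi * rho_a**2 / hbar
I = 2 * alpha * np.cos(phase) - 2 * beta * np.sin(phase)
print(I)
```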
## 4 Discussion.
We have proved, using a Gedankenexperiment which is very similar to the famous Aharonov–Bohm proposal, that we may find noninertial coordinate systems (in a Minkowskian spacetime) in which a nonrelativistic quantum process is determined not only by the features of geometry at those points at which the process takes place, but also by geometric parameters of regions in which the quantum system cannot enter.
This is a purely quantum mechanical effect. Indeed, if we had used a classical particle, whose motion at any point $`P`$ of its trajectory is solely determined by the geometry at $`P`$, then no information about any forbidden region could be extracted.
The Gedankenexperiment introduced here could hardly be considered a local experiment; nevertheless, from our results we could claim that geometry–induced nonlocal effects could emerge in QT. Indeed, the measurement outputs of some nonrelativistic quantum experiments are determined not only by the geometry of the region in which the experiment takes place, but also by the geometry of regions forbidden, in some way, to the experiment.
The gravitational field can be geometrized (at least at the classical level), and at this point we may wonder if at the quantum level gravity could render some nonlocal effects in nonrelativistic QT (of course, the present work does not consider gravity, but it has proved that in a Minkowskian spacetime there could be a geometry–induced nonlocality, and therefore the extension to curved spacetimes is an interesting question). This nonlocal behavior has already been pointed out , and more investigation around this topic could lead to a more profound comprehension of the way in which the gravitational field could modify some fundamental expressions of QT, for example the commutation relations .
The possible incompleteness of the general relativistic description of gravity, at the quantum level, has already been claimed , and implies the violation not only of the Einstein equivalence principle but also of the local position invariance principle (the independence of the results of local experiments from the location of the local laboratory in spacetime, i.e., the independence of the equivalence principle from position in time and space). In other words, Ahluwalia’s work implies that the results of some local quantum experiments do depend on nonlocal characteristics. In the present work we have found a behavior that, at least qualitatively, is the same, i.e., the dynamics of some quantum processes is determined by nonlocal features. Hence, further investigation of this Aharonov–Bohm effect could help us understand better the controversy around the validity, at the quantum level, of the equivalence principle.
In the present work, the physical acceptability of the constructed noninertial system has not been investigated. In spite of this, our work has shown that, in principle, there are noninertial systems in which nonrelativistic QT shows very interesting nonlocal features. The possibility of having these kinds of effects also in the context of more realistic schemes (feasible noninertial observers) has to be investigated, but at least we have shown that in a very wide range of noninertial coordinate systems these effects do exist.
Acknowledgments.
The author would like to thank A. Camacho–Galván and A. A. Cuevas–Sosa for their help, and D.-E. Liebscher for the fruitful discussions on the subject. The hospitality of the Astrophysikalisches Institut Potsdam is also kindly acknowledged. This work was supported by CONACYT Posdoctoral Grant No. 983023.
no-problem/9907/quant-ph9907040.html
# Resonance Energy–Exchange–Free Detection and ‘Welcher Weg’ Experiment
## I Introduction
Recently the old quantum welcher Weg (which–path) reasoning has been used to devise experiments in which there is a certain probability of detecting an object without transferring a single quantum of energy to it. The experiments are usually called interaction–free experiments, but we use the name energy–exchange–free experiments in order to stress the fact that the detected object does interact with the measuring apparatus even when no quantum of energy $`h\nu `$ is transferred to it.<sup>*</sup>A slight twist (in brackets) of Niels Bohr’s words might illuminate our decision: “It is true that in the measurements under consideration any direct mechanical interaction of the system and the measuring agencies is excluded, but …the procedure of measurements has an essential influence on the conditions on which the very definition of the physical quantities in question rests…\[T\]hese conditions must be considered as an inherent element of any phenomenon to which the term “\[interaction\]” can be unambiguously applied.” In effect, the reasoning, in an ideal case, is the following one. After the second beam splitter of a Mach–Zehnder interferometer one can always put a detector in such a position that it will never (i.e., with probability zero) detect a photon. If it does detect one, then we are certain that an object blocked the “other” path of the interferometer. The Mach–Zehnder interferometer itself cannot be used for practical energy–exchange–free measurement because of its very low efficiency (under 30%). Therefore Paul and Pavičić recently proposed a very simple and easily feasible energy–exchange–free experiment based on the resonance in a single cavity, whose efficiency can realistically reach 95%. As a resonator the proposal used a coated crystal, which however reduced its efficiency.
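The low Mach–Zehnder efficiency mentioned above can be made concrete with a two–mode amplitude calculation in the spirit of the standard interaction–free (Elitzur–Vaidman) analysis. The sketch below is an added illustration, not taken from this paper; with ideal 50/50 beam splitters the dark port fires with probability 1/4 while the object absorbs the photon with probability 1/2, giving a detection–to–loss ratio of 1/3:

```python
import numpy as np

# symmetric 50/50 beam splitter acting on (upper, lower) path amplitudes
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

psi_in = np.array([1.0 + 0j, 0.0])       # single photon enters the upper port

# no object: full interference, the "dark" (upper) output port never fires
psi_free = BS @ (BS @ psi_in)
p_dark_free = abs(psi_free[0]) ** 2      # = 0

# object blocks the lower arm between the splitters
mid = BS @ psi_in
p_absorbed = abs(mid[1]) ** 2            # photon hit the object: 1/2
out = BS @ np.array([mid[0], 0.0])       # only the upper-arm amplitude survives
p_detect = abs(out[0]) ** 2              # dark port fires -> object detected: 1/4

efficiency = p_detect / (p_detect + p_absorbed)
print(p_dark_free, p_detect, p_absorbed, efficiency)
```

With unbalanced splitters this ratio can approach 1/2, but not the near–unity figures of the resonance scheme discussed next.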
In this paper (in Sec. II) we use a monolithic total–internal–reflection resonator, which has recently shown extremely high efficiencies, in order to construct an optical energy–exchange–free device with an efficiency approaching 100%. Since the device differs from the usual quantum measurement devices, which assume an exchange of at least one quantum of energy , it immediately provokes the question of whether one can carry out a welcher Weg interference experiment with its help. In Sec. III we propose such an experiment using atom interferometry.
## II Resonance Energy–Exchange–Free Detection
The experiment (see Fig. 1) uses an uncoated monolithic total–internal–reflection resonator (MOTIRR) coupled to two triangular prisms by frustrated total internal reflection (FTIR) . Both MOTIRR and prisms require a refractive index $`n>1.41`$ to achieve total reflection. When we bring the prisms within a distance of the order of the wavelength, the total reflection within the resonator will be frustrated and a fraction of the beam will tunnel out of and into the resonator. Depending on the dimension of the gap and the polarization of the incidence beam one can well define reflectivity $`R`$ within the range from $`10^{-5}`$ to 0.99995. Losses for MOTIRR and FTIR may be less than 0.3%. The incident laser beam is chosen to be polarized perpendicularly to the incident plane so as to give a unique reflectivity for each photon. The faces of the resonator are polished spherically to give a large focusing factor and to narrow down the beam. A cavity which the beam in its round–trips has to go through is cut in the resonator and filled with an index–matching fluid to reduce losses. If there is an object in the cavity, i.e., in the way of the round–trips of the beam in the resonator, the incident beam will be almost totally reflected (into $`D_r`$). If there is no object, the beam will be almost totally transmitted (into $`D_t`$). As a source of the incoming beam a continuous wave laser (e.g., Nd:YAG) should be used because of its coherence length (up to 300 km) and its excellent frequency stability (down to 10 kHz in the visible range).
We calculate the intensity of the beam arriving at detector $`D_r`$ when there is no object in the cavity in the following way. The portion of the incoming beam of amplitude $`A(\omega )`$ reflected at the incoming surface is described by the amplitude $`B_0(\omega )=A(\omega )\sqrt{R}`$, where $`R`$ is reflectivity. The remaining part of the beam tunnels into MOTIRR and travels around guided by one FTIR (at the face next to the right prism, where a part of the beam tunnels out into $`D_t`$) and by two proper total internal reflections. After a full round–trip the following portion of this beam joins the directly reflected portion of the beam by tunnelling into the left prism: $`B_1(\omega )=A(\omega )\sqrt{1-R}\sqrt{R}\sqrt{1-R}e^{i\psi }`$, where $`\psi =(\omega -\omega _{res})T`$ is the phase added by each round–trip; here $`\omega `$ is the frequency of the incoming beam, $`T`$ is the round–trip time, and $`\omega _{res}`$ is the selection frequency corresponding to a wavelength which satisfies $`\lambda =L/k`$, where $`L`$ is the round–trip length of the resonator and $`k`$ is an integer. Each subsequent round–trip contributes to a geometric progression
$`B(\omega )={\displaystyle \sum _{i=0}^{n}}B_i(\omega ),`$ (1)
where $`n`$ is the number of round–trips. We lock the laser at $`\omega `$ as close to $`\omega _{res}`$ as possible. Because of the afore–mentioned characteristics of continuous wave lasers we can describe the input beam coming from such a laser during the coherence time by means of $`A(\omega )=A\delta (\omega -\omega _{res})`$. The following ratio of intensities of the reflected and the incoming beam then describes the efficiency of the device for free round–trips:
$`\eta ={\displaystyle \frac{\int _0^{\mathrm{\infty }}B(\omega )B^{\ast }(\omega )d\omega }{\int _0^{\mathrm{\infty }}A(\omega )A^{\ast }(\omega )d\omega }}=1-{\displaystyle \frac{1-R}{1+R}}[R^{2n}-1+2{\displaystyle \sum _{j=1}^{n}}(1+R^{2n-2j+1})R^{j-1}].`$ (2)
The expression is obtained by mathematical induction from the geometric progression of the amplitudes \[Eq.(1)\].
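Eq. (2) is straightforward to evaluate; the sketch below (a consistency check added here, not from the paper) also shows numerically that the nested geometric sums collapse to the compact closed form $`\eta =R^{2n}`$, so under the stated monochromatic input the false–alarm probability decays exponentially with the number of round–trips:

```python
def eta(R, n):
    """Free-round-trip efficiency of Eq. (2)."""
    bracket = R ** (2 * n) - 1 + 2 * sum(
        (1 + R ** (2 * n - 2 * j + 1)) * R ** (j - 1) for j in range(1, n + 1))
    return 1 - (1 - R) / (1 + R) * bracket

# the sums telescope, so eta(R, n) equals R**(2*n) and vanishes rapidly with n
for R, n in [(0.95, 100), (0.995, 1000)]:
    print(R, n, eta(R, n))        # ~3.5e-5 and ~4.4e-5: essentially zero
```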
In the experiment one has to lower the intensity of the beam until it is likely that only one photon would appear within an appropriate time window (1 ns—1 ms $`<`$ coherence time), which allows the intensity in the cavity to build up. The obtained $`\eta `$ thus becomes the probability of detector $`D_r`$ reacting when there is no object in the system. As shown in Fig. 2, $`\eta `$ approaches zero after 100 round–trips for $`R=0.95`$, after 1000 round–trips for $`R=0.995`$, etc., which is all amply assured by the continuous wave laser coherence length. In other words, a response from $`D_r`$ means that there is an object in the system. In the latter case the probability of the response is $`R`$, the probability of a photon hitting the object is $`R(1-R)`$, and the probability of the photon exiting into detector $`D_t`$ is $`(1-R)^2`$. By widening the gaps between the resonator and the prisms we can make $`R\to 1`$ and therewith obtain an arbitrarily low probability of a photon hitting an object. We start each test by recording the first two or three clicks of $`D_r`$ or $`D_t`$ after opening a gate for the incident beam. In this way we allow the beam to ‘wind up’ in MOTIRR. And when either $`D_r`$ or $`D_t`$ fires (possibly even two or three times in a row, to be sure of the result) the test is over. Waiting for several clicks results in a bigger time window, but the chance of a photon hitting an object remains very low. A possible 300 km coherence length does not leave any doubt that a real experiment of detecting objects without transferring a single quantum of energy to them can be carried out successfully, i.e., with an efficiency exceeding 99%. Detectors might also fail to react, but this is not a problem because single photon detectors with 85% efficiency are already available, and this would only increase the time window by a few nanoseconds, which does not significantly influence the result.
Thus we obtain an energy–exchange–free detection device in which the observed particles do not suffer any recoil. With opaque particles bigger than the wavelength of the applied laser beam we get the maximal efficiency. However, our device can also see smaller objects, because the main process in our resonator (which is a kind of Fabry–Perot interferometer) is an interference in which the main role is played by the possibility (which need not be realized) of a photon hitting an object in one of the round trips inside MOTIRR. In other words, the device “sees” objects which exceed the resolution power of a standard microscope. The efficiency $`1-\eta `$ continuously decreases for smaller and smaller objects, but that can be significantly improved if we choose a laser beam frequency which corresponds to an atomic resonance frequency of the object. On the other hand, the efficiency would be increased by using plasma X–ray lasers, if one designed an efficient X–ray resonator. For example, the Nd<sup>3+</sup>:glass laser system at Lawrence Livermore National Laboratory produces 250–ps X–ray laser pulses at wavelengths shorter than 5 nm. Our elaboration in Paul and Pavičić shows that the resonator would work with 250–ps pulses and a geometrical round path of 4 cm.
## III ‘Welcher Weg’ Detection
The experiment (see Fig. 3) uses a combination of an atom interferometer with ultracold metastable atoms and the resonance energy–exchange–free path detection by means of a movable MOTIRR (of course, without the liquid, which only slightly reduces the efficiency). To increase the probability of an atom being hit by the round–tripping beam, the incoming laser beam should be split into many beams by multiple beam splitters, each beam containing on average one photon in the chosen time window, so as to feed MOTIRR through many optical fibers. As for the atom interferometer, we adapt the one presented by Shimizu et al. primarily because their method is almost background free. The atom source is a magneto–optical trap containing 1s<sub>5</sub> neon metastable atoms which are then excited to the 2p<sub>5</sub> state by a 598–nm laser beam. Of all the states to which 2p<sub>5</sub> decays we follow only 1s<sub>3</sub> atoms, whose trajectories are determined only by the initial velocity and gravity (free fall from the trap). (Other states are either trapped by the magnetic field of the trap, or influenced and dispersed by another 640–nm cooling laser beam.) Now the atoms fall with different velocities, but each velocity group forms interference fringes calculated as for the optical case and only corrected by a factor which arises from the acceleration by gravity during the fall. MOTIRR is mounted on a device which follows (with acceleration) one velocity group from the double slit to the microchannel plate detector (MCP). (Atoms from other groups move with respect to MOTIRR and therefore—because of their small cross section—cannot decohere MOTIRR.) The laser is tuned to a frequency equal to the 1s<sub>3</sub> resonance frequency. The most distinct fringes are formed by the group which needs 0.1 s to reach MCP from the double slit and is accelerated to 2 m/s. The source is attenuated so much that there is on average only one atom in a velocity group.
The whole process repeats every 0.4 s. Assuming that we have 10 ns recovery time for the photon detectors and 300 optical fibers, we arrive at about 10<sup>7</sup> counts, which all go into one detector D<sub>t</sub> when no atom obstructs a round trip. (For reflectivity $`R=0.999`$ the probability of D<sub>r</sub> being activated is $`2\times 10^{-9}`$.) As soon as detector D<sub>r</sub> fires we know which slit the observed atom passed through. (The probability of a photon hitting an atom is 0.001. In order to be able to estimate how many photons fired D<sub>r</sub> we can use the photon chopping developed by Paul, Törmä, Kiss, and Jex .) After $`10^3`$ repetitions of such successful detections we have enough data to see whether the interference fringes are destroyed significantly with respect to unmonitored reference samples or not.
## IV Discussion
In Sec. II we presented a device (derived from Paul and Pavičić’s device ) for photonic detection of objects without an energy exchange. More precisely, there is a very high probability, approaching 100%, that not even a single photon energy $`h\nu `$ will be transferred to the objects. Figuratively, one could call the device a “Heisenberg microscope without a kick.” In Sec. III we employed the device in the welcher Weg detection of atoms taking part in an interference experiment. Both the Heisenberg microscope reasoning and arguments against a welcher Weg experiment traditionally rest on the Heisenberg uncertainty relations. Uncertainty relations always refer to the mean values of the operators and that means—even when the operators are projectors—statistics obtained by recording an interaction, i.e., by a reduction of the wave packet. In our “energy–exchange–free microscope” measurement (Sec. II) we do not attach any value to any operator in the Hilbert space description of the observed systems and therefore no uncertainty relation is involved. As for the welcher Weg experiment (Sec. III), it has recently been shown that “it is possible to obtain welcher Weg information without exposing the interfering beam to uncontrollable scattering events… That is to say, it is simply the information contained in a functioning measuring apparatus that changes the outcome of the experiment and not uncontrollable alterations of the spatial wave function, resulting from the action of the measuring apparatus on the system under observation.” There is, however, an essential difference between our proposal and the ones by Scully, Englert, and Walther (microwave cavity proposal), by Sanders and Milburn (quantum nondemolition measurement with the Kerr medium), and by Paul (perfectly reflecting mirror proposal).
In all of them there is a slight exchange of energy which does not significantly disturb the spatial wave function of the system taking part in the interference but does disturb its phase. In our proposal there is apparently no exchange of energy. We say “apparently” because in a future real experiment one should discuss in detail the Bohrian physical process responsible for the disappearance of the interference fringes.
I thank Harry Paul for many discussions and suggestions. I acknowledge the support of the Alexander von Humboldt Foundation, Germany, the Max–Planck–Gesellschaft, Germany, and the Ministry of Science of Croatia.
# Berry Phase and Ground State Symmetry in 𝐻⊗ℎ Dynamical Jahn-Teller Systems
## Abstract

Due to the ubiquitous presence of a Berry phase, in most cases of dynamical Jahn-Teller systems the symmetry of the vibronic ground state is the same as that of the original degenerate electronic state. As a single exception, the linear $`H\otimes h`$ icosahedral model, relevant to the physics of C<sub>60</sub> cations, is determined by an additional free parameter, which can be continuously tuned to eliminate the Berry phase from the low-energy closed paths: accordingly, the ground state changes to a totally-symmetric nondegenerate state.
The traditional field of degenerate electron-lattice interactions (Jahn-Teller effect) in molecules and impurity centers in solids has drawn renewed interest in recent years, spurred by the discovery of new systems calling for a revision of a number of commonly accepted beliefs. Several molecular systems, including C<sub>60</sub> ions, higher fullerenes, and Si clusters, derive their behavior from the large (up to fivefold) degeneracy of electronic and vibrational states due to the rich structure of the icosahedral symmetry group. Novel Jahn-Teller (JT) systems have therefore been considered theoretically, disclosing intriguing features, often related to a Berry phase in the electron-phonon coupled dynamics.
As is well known, the molecular symmetry, reduced by the JT distortion with the splitting of the electronic-state degeneracy, is restored in the dynamical Jahn-Teller (DJT) effect, where tunneling among equivalent distortions is considered. The vibronic states are therefore labelled as representations of the original point group of the undistorted system. In the weak-coupling regime, for continuity, the ground state (GS), in particular, retains the same degenerate representation as that labelling the electronic level prior to coupling. A priori, there is no particular reason for this to continue at larger couplings. However, it appears empirically that in all linear DJT systems studied before the late nineties, the GS symmetry remains the same at all couplings. The explanation of this observation was a great outcome of the Berry-phase scenario: the phase entanglement in the electron-phonon Born-Oppenheimer (BO) dynamics, originating at electronically-degenerate high-symmetry points, seemed a universal feature of DJT systems.
In this context, the discovery of the first linear JT system showing a nondegenerate GS in the strong-coupling limit came as a surprise: the spherical model $`𝒟^{(2)}\otimes d^{(2)}`$ of electrons of angular momentum $`L=2`$ interacting with vibrations also belonging to an $`l=2`$ representation. This system turns out to be a special case of the $`H\otimes h`$ icosahedral model, for a 5-fold degenerate $`H`$ electronic state interacting linearly with a distortion mode of the same symmetry $`h`$. In that special case, it was shown that, for increasing coupling, a nondegenerate $`A`$ excited state in the vibronic spectrum moves down, crossing the $`H`$ GS at some finite value of the coupling parameter and thus becoming the GS at strong coupling. This phenomenon is a manifestation of the absence of Berry-phase entanglement in the coupled dynamics.
In this Letter we study the linear $`H\otimes h`$ model in its generality. We analyse in detail the connection between the symmetry/degeneracy of the vibronic GS and the presence/absence of a Berry phase in the coupled dynamics. This model owes its peculiarities to the non-simple reducibility of the icosahedral symmetry group. In particular, the $`H`$ representation appears twice in the symmetric part of the Kronecker product of the $`H`$ representation with itself:
$$\{H\otimes H\}^{(s)}=a\oplus g\oplus h^{[1]}\oplus h^{[2]}.$$
(1)
There are, therefore, two independent sets of Clebsch-Gordan (CG) coefficients
$$C_{m_1,m_2}^{m[r]}\equiv \langle H,m_1;H,m_2|h,m^{[r]}\rangle $$
(2)
for the coupling of an $`H`$ electronic state with an $`h`$ vibrational mode, identified by a multiplicity index $`r=1,2`$. Of course, since the two $`h`$ states are totally equivalent and indistinguishable, symmetry-wise, the choice of these orthogonal sets of coefficients has some degree of arbitrariness: the free parameter $`\alpha `$ in the combination
$$C_{m_1,m_2}^m\left(\alpha \right)\equiv \mathrm{cos}\alpha C_{m_1,m_2}^{m[1]}+\mathrm{sin}\alpha C_{m_1,m_2}^{m[2]}$$
(3)
accounts for it. The coefficient $`C_{m_1,m_2}^m\left(\alpha \right)`$ coincides with the $`r=1`$ and $`r=2`$ values for $`\alpha =0`$ and $`\alpha =\frac{\pi }{2}`$ respectively. Also, for $`\alpha =\mathrm{arctan}\left(3/\sqrt{5}\right)\alpha _s`$, it becomes equivalent to the spherical CG coefficient.
The basic Hamiltonian for the $`H\otimes h`$ model can be written:
$$H=H_{\mathrm{harm}}(\hbar \omega )+H_{\mathrm{e}\mathrm{v}}(g\hbar \omega ,\alpha ),$$
(4)
with
$`H_{\mathrm{harm}}(\hbar \omega )`$ $`=`$ $`{\displaystyle \frac{1}{2}}\hbar \omega {\displaystyle \sum _m}(p_m^2+q_m^2)`$ (5)
$`H_{\mathrm{e}\mathrm{v}}(g\hbar \omega ,\alpha )`$ $`=`$ $`{\displaystyle \frac{g\hbar \omega }{2}}{\displaystyle \sum _{mm_1m_2}}q_mc_{m_1}^{\dagger }c_{m_2}C_{m_1,m_2}^m\left(\alpha \right),`$ (6)
where $`q_m`$ is the distortion coordinate (with conjugate momentum $`p_m`$) and $`c_m^{\dagger }`$ is the electronic creation operator in standard second-quantized notation.
The novelty introduced by the $`\alpha `$-dependent CG coefficients reflects the fact that the group does not completely determine the form of the linear coupling as it does, for example, in cubic symmetry. The specific value of this angle must be established case by case by a detailed analysis of the phonon mode and its coupling with the specific electronic state. Indeed, in a realistic case such as, for example, C<sub>60</sub><sup>+</sup> ions, each $`h`$ mode is characterized not only by its own frequency $`\omega _i`$ and scalar coupling $`g_i`$, but also by its particular angle of mixing $`\alpha _i`$.
For intermediate to strong coupling, the interesting nonperturbative regime, the customary framework is the BO separation of vibrational and electronic motion: when the splitting among the five potential sheets (proportional to $`g^2`$) is large, the electronic state can be safely assumed to follow adiabatically the lowest BO potential sheet, while virtual inter-sheet electronic excitations may be treated as a small correction. The BO dynamics is determined by the lowest eigenvalue of the interaction matrix $`\mathrm{\Xi }=\sum _mq_mV^{(m)}`$ in the electronic space. This matrix is obtained from (6) by the same technique described in Ref. : since it is a simple generalization of that obtained for $`𝒟^{(2)}\otimes d^{(2)}`$, here for brevity we report only the expression of the diagonal matrix elements of the $`V^{(0)}`$ matrix, corresponding to the coupling to a pure $`q_0`$ distortion:
$$\left[\begin{array}{c}C_{0,2}^2\left(\alpha \right)\\ C_{0,1}^1\left(\alpha \right)\\ C_{0,0}^0\left(\alpha \right)\\ C_{0,-1}^{-1}\left(\alpha \right)\\ C_{0,-2}^{-2}\left(\alpha \right)\end{array}\right]=\mathrm{cos}\alpha \left[\begin{array}{c}\frac{1}{2\sqrt{5}}\\ \frac{1}{2\sqrt{5}}\\ -\frac{2}{\sqrt{5}}\\ \frac{1}{2\sqrt{5}}\\ \frac{1}{2\sqrt{5}}\end{array}\right]+\mathrm{sin}\alpha \left[\begin{array}{c}\frac{1}{2}\\ -\frac{1}{2}\\ 0\\ -\frac{1}{2}\\ \frac{1}{2}\end{array}\right].$$
(7)
This form makes it clear that a shift $`\alpha \to \alpha +\pi `$ introduces a sign change in the coupling matrix, which can be compensated by a reflection $`\stackrel{}{q}\to -\stackrel{}{q}`$. We will therefore restrict ourselves, without loss of generality, to the interval $`0\le \alpha \le \pi `$.
The electronic eigenvalue $`-\frac{2}{\sqrt{5}}\mathrm{cos}\alpha `$ is the lowest for $`\alpha <\alpha _s`$ and $`\alpha >\pi -\alpha _s`$ (region a): in this range the BO potential presents six absolute minima, one of them lying along the $`\widehat{q}_0`$ pentagonal axis, with energy lowering $`E_{\mathrm{clas}}=g^2/10\mathrm{cos}^2\alpha `$ (in units of $`\hbar \omega `$). However, influenced by the $`V^{(m\ne 0)}`$ matrices, in the complementary interval $`\alpha _s<\alpha <\pi -\alpha _s`$ (region b), ten trigonal distortions become the absolute minima, with energy gain $`E_{\mathrm{clas}}=g^2/18\mathrm{sin}^2\alpha `$. At the boundary angles ($`\alpha =\alpha _s`$ and $`\pi -\alpha _s`$), all pentagonal and trigonal minima become degenerate, forming part of a continuous degenerate 4-dimensional (4-D) trough of depth $`E_{\mathrm{clas}}=g^2/28`$.
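The energetics of the two families of minima quoted above can be verified in a few lines; this is only a sketch of the bookkeeping, with energies in units of $`\hbar \omega `$ and the coupling set to $`g=1`$:

```python
import numpy as np

alpha_s = np.arctan(3 / np.sqrt(5))
g = 1.0  # coupling strength; energies in units of hbar*omega

E_pent = lambda a: g**2 / 10 * np.cos(a)**2   # gain of the six pentagonal minima
E_trig = lambda a: g**2 / 18 * np.sin(a)**2   # gain of the ten trigonal minima

# Pentagonal minima win in region a, trigonal minima in region b ...
assert E_pent(0.5 * alpha_s) > E_trig(0.5 * alpha_s)   # region a
assert E_trig(np.pi / 2) > E_pent(np.pi / 2)           # region b
# ... and at the boundary angles both gains equal the trough depth g^2/28.
for a in (alpha_s, np.pi - alpha_s):
    assert abs(E_pent(a) - g**2 / 28) < 1e-12
    assert abs(E_trig(a) - g**2 / 28) < 1e-12
```

At the boundary angles the two gains cross smoothly, the classical fingerprint of the degenerate trough.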
We come now to the rôle of the Berry phase in this system. As is well known, the geometrical phase is related to conical degeneracies of the two lowest BO potential surfaces. In the $`𝒟^{(2)}\otimes d^{(2)}`$ system (the $`\alpha =\pi -\alpha _s`$ case of the model studied here) the flat minimum trough presents tangentially degenerate points. For that case, it was shown that the tangential contacts provide a mechanism for getting rid of the Berry phase. For generic $`\alpha `$, instead, all contacts between the lowest two potential sheets occur as conic intersections, rather than tangencies, at points far from the potential minima. In particular, both trigonal and pentagonal axes are locations of conical crossings, for $`\alpha `$ in regions a and b respectively (i.e. when they do not correspond to minima). Indeed, for $`\stackrel{}{q}`$ on the $`\widehat{q}_0`$ axis, the five electronic eigenvalues are those given in Eq. (7), where it can be readily verified that for $`\alpha `$ in region b the most negative one is twofold degenerate.
In region a, the six minima are all equidistant, defining the simplest regular polytope in 5 dimensions (see Fig. 1a). In this case, therefore, minimal closed paths join any of the 20 triplets of minima. It is straightforward to verify that at the center of all such triplets there lies one of the trigonal axes, carrying a conical intersection. If the degeneracy were restricted to the trigonal axes, however, the rich topology of the 5-D space would allow the triangular loop to squeeze continuously to a point avoiding the degenerate line: the associated Berry phase would then vanish. Instead, we checked that the two lowest sheets remain in contact through a bulky 3-D (1 radial + 2 tangential) region of distortions surrounding each trigonal axis. This guarantees the nontrivial topology of the loops, and thus the possibility of a nonzero Berry phase. Indeed, a geometrical phase of $`\pi `$ is associated with these triangular loops, as we computed explicitly by the discretized phase integral of Ref. . Paths encircling two (or any even number) of such triangles (thus looping through 4, 6,… minima) have zero Berry phase, since the two phases cancel out. However, such paths, though energetically equivalent to the basic triangles (since they cross the same saddle points), are longer, and therefore less relevant from a minimum-action point of view. We consequently conclude that, for $`\alpha `$ in region a, the $`H\otimes h`$ model must show the signature of Berry-phase entanglement.
In region b, the minima are ten, each with 3 nearest neighbors and 6 second neighbors. The shortest closed path through minima joins three points such as (1→2→3→1) in Fig. 1b. However, energetically, such a loop is not the most convenient, since the segment joining two far neighbors (3→1) must cross a barrier energetically 60% more expensive than that linking next neighbors (1→2). Since the energy gaps between minima and saddle points grow as $`g^2`$, eventually at strong coupling only the “cheapest” paths affect the low-energy dynamics, and the relevant Berry phases should be calculated along such loops. Here, therefore, at large $`g`$, the low-energy paths are pentagons, such as (1→2→3→4→5→1) in Fig. 1b. We computed the Berry phases for both kinds of paths, obtaining $`\pi `$ and $`0`$ for the 3-point and 5-point loops respectively. This implies that the pentagonal loop encircles an even number (most likely 6) of degenerate regions (one of which surrounds the pentagonal axis at the center of each 5-point loop), each carrying a phase factor $`e^{i\pi }`$. We conclude that, in region b, although nontrivial Berry phases are present, they have no effect on the strong-coupling low-energy spectrum. Thus, in particular, the GS symmetry should remain $`H`$ in region a, while a nondegenerate $`A`$ state must become lower in region b at strong coupling. We stress that we have established the presence of nonzero Berry phases for all values of $`\alpha `$, but also that, in region b, the effect of the geometrical phase is bypassed by energetically cheaper paths with null phase.
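A discretized Berry-phase computation of the kind used above multiplies the overlaps of neighboring ground states around a closed loop and takes the argument of the product. The two-level conical-intersection Hamiltonian below is a minimal stand-in for illustration (an assumption, not the actual 5×5 icosahedral coupling matrix):

```python
import numpy as np

def berry_phase(states):
    """Discretized Berry phase: -Im ln of the product of overlaps
    <psi_i|psi_{i+1}> around the closed loop (gauge invariant)."""
    prod = 1.0 + 0.0j
    for bra, ket in zip(states, states[1:] + states[:1]):
        prod *= np.vdot(bra, ket)
    return -np.angle(prod)

def ground_state(qx, qy):
    # Minimal two-level conical intersection: H = qx*sigma_z + qy*sigma_x,
    # degenerate only at qx = qy = 0.
    H = np.array([[qx, qy], [qy, -qx]])
    _, vecs = np.linalg.eigh(H)
    return vecs[:, 0]  # lowest Born-Oppenheimer sheet

thetas = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)

# Loop encircling the degeneracy: geometrical phase pi.
phase_in = berry_phase([ground_state(np.cos(t), np.sin(t)) for t in thetas])
# Loop that does not enclose the degeneracy: phase 0.
phase_out = berry_phase([ground_state(3 + np.cos(t), np.sin(t)) for t in thetas])
print(abs(phase_in), abs(phase_out))  # ~pi and ~0
```

The product of overlaps is gauge invariant, so the arbitrary sign returned by the eigensolver at each point drops out of the result.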
This scenario is confirmed by numerical diagonalization (Lanczos method). In Fig. 2, we plot the gap between the lowest $`H`$ and $`A`$ vibronic states wherever $`E_H-E_A>0`$, and 0 where the GS is $`H`$. At weak coupling, as suggested by continuity, the GS is $`H`$. For $`g>7`$ and $`\alpha `$ in range b, the $`A`$ state becomes the GS. We note, however, a small modulation in the boundaries of this region, both $`g`$- and $`\alpha `$-wise. We observe, in particular, that the two special values $`\alpha _s`$ and $`\pi -\alpha _s`$, far from marking the closing of the $`H`$–$`A`$ gap, show instead a rather sharp peak in the $`\alpha `$ direction. By drawing (in Fig. 2) the gap multiplied by $`g^2`$, we make evident, along these ridges at $`\alpha _s`$ and $`\pi -\alpha _s`$, the $`g^{-2}`$ large-$`g`$ behavior of the $`H`$–$`A`$ gap, characteristic of the motion in a flat trough of size $`g`$. Inside region b, instead, the gap vanishes much more quickly, due to the tunnelling integral through the barriers between trigonal minima vanishing exponentially in $`g^2`$.
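For readers unfamiliar with the numerical workhorse, a sparse Lanczos-type solver extracts the few lowest levels without ever densifying the Hamiltonian. The tight-binding chain below is a stand-in matrix with a known spectrum (building the actual vibronic matrix would require the full set of CG coefficients):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Sparse stand-in Hamiltonian: a 1-D open chain with unit hopping.
n = 2000
H = sp.diags([np.ones(n - 1), np.ones(n - 1)], offsets=[-1, 1], format='csr')

# Lanczos/ARPACK: the k lowest eigenvalues, the way gaps such as
# E_H - E_A are tracked in practice for large truncated bases.
low, _ = eigsh(H, k=4, which='SA')

# Analytic spectrum of the open chain: E_k = 2 cos(k*pi/(n+1)), k = 1..n.
exact = np.sort(2 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1)))[:4]
assert np.allclose(np.sort(low), exact, atol=1e-8)
```

Only matrix-vector products are needed, so the memory cost scales with the number of nonzero Hamiltonian entries rather than with the square of the basis size.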
It is straightforward to extend the one-mode Hamiltonian (4) to a more realistic case of many distortion modes, each characterized by its own frequency, coupling and angle of mixing:
$$H=\sum _i\left[H_{\mathrm{harm}}(\hbar \omega _i)+H_{\mathrm{e}\mathrm{v}}(g_i\hbar \omega _i,\alpha _i)\right].$$
(8)
We study in detail the two-mode case. Five free parameters ($`\omega _1`$ being taken as a global scale factor) appear in the model. In order to carry out a significant study of the phase diagram, we limit ourselves to (i) only two values (1 and 5) of the ratio $`\omega _2/\omega _1`$, and (ii) $`\alpha _2-\pi /2=\alpha _1\equiv \alpha `$, assuming a principle of “maximum difference” between the modes. We take advantage of spectral invariance under an individual sign change of each of the couplings $`g_i\to -g_i`$ and under $`\alpha \to -\alpha `$, restricting ourselves to the sector $`0\le \alpha \le \pi /2`$, $`g_i>0`$. For convenience, we introduce polar variables $`g_1=g\mathrm{cos}\gamma `$, $`g_2=g\mathrm{sin}\gamma `$ ($`0<\gamma <\pi /2`$), and draw slices of the parameter space for fixed values of $`g`$, as $`\alpha `$–$`\gamma `$ planes.
The first interesting observation concerns the case of equal frequencies: even though Hamiltonian (8) is linear in the coupling parameters, the CG coefficients, and the boson operators, the special case $`\omega _1=\omega _2`$ cannot be trivially reduced to a one-mode problem by means of a suitable rotation mixing modes 1 and 2. This is a consequence of the linear independence of the coupling matrices $`V^{(m)}(\alpha )`$ for different values of $`\alpha `$.
We resort to exact diagonalization to treat the two-mode case. Due to the larger size of the matrices, we are limited to smaller couplings: we obtain a satisfactorily converged $`E_H-E_A`$ gap only up to $`g\approx 10`$. The calculations, for both $`\omega _2/\omega _1=1`$ and 5, show that for $`g\lesssim 7`$ the GS symmetry remains $`H`$ for any $`\alpha `$ and $`\gamma `$, as in the one-mode case. Then, already at $`g=8`$, an $`A`$ (nondegenerate) GS makes its appearance in two localized regions of the $`\alpha `$–$`\gamma `$ plane. Starting from $`g\approx 9`$, these separated regions assume essentially their asymptotic strong-coupling shape (see Fig. 3). The first region, located symmetrically across $`\alpha =\pi /2`$, corresponds mainly to mode 1 with b-type (no-Berry) coupling: mode 2 (Berry-phase entangled in this region) acts as a weak perturbation, incapable of changing the GS symmetry for small enough $`\gamma `$. On the other hand, the second region of $`A`$ GS is located around $`\alpha =0`$: there, it is mode 2 that is responsible for the no-Berry-phase coupling, mode 1 acting as a weak perturbation for $`\gamma `$ close enough to $`\pi /2`$. For $`\omega _2/\omega _1=1`$ (not reported here), the two $`A`$-GS regions are, of course, equivalent. For $`\omega _2/\omega _1=5`$ (Fig. 3), instead, these two regions differ in size, in relation to the different relative energetics of mode 1 versus mode 2.
In conclusion, we have illustrated the importance of the energetics of the paths surrounding the points of degeneracy of the two lowest BO potential sheets in determining the effective rôle of the Berry phase. In all classical linear JT models, the low-energy paths are affected by the geometrical phase in a way leading to a “boring” fixed ground-state symmetry. The $`H\otimes h`$ model is special in being determined by an additional parameter, which allows the connectivity of the graph of low-energy paths through minima to change along with the regions of degeneracy of the two lowest sheets. Consequently, this new parameter leads continuously from a regular, Berry-phase-entangled region to a whole region where, although present, the Berry phase is totally ineffective in imposing its selection rules on the low-energy vibronic states, and on the GS in particular.
Finally, for a system such as C<sub>60</sub><sup>+</sup> our study implies that a detailed knowledge not only of the coupling parameters $`g_i`$ but also of the characteristic angles $`\alpha _i`$ should be acquired for all modes in order to compute even such a basic property as the GS symmetry.
We thank Arnout Ceulemans, Brian Judd, Fabrizia Negri, Erio Tosatti, and Lu Yu for useful discussions.
# A Preliminary Indication of Evolution of Type Ia Supernovae from their Risetimes
## 1 Introduction
High redshift (0.3 $`<z<`$ 1.0) type Ia supernovae (SNe Ia) are unexpectedly dim, a phenomenon readily attributed to a cosmological constant and an accelerating Universe (Riess et al. 1998; Perlmutter et al. 1999). These cosmological conclusions rely on the assumption that SNe Ia have not evolved. Both the High-$`z`$ Supernova Search Team (Schmidt et al. 1998) and the Supernova Cosmology Project (SCP; Perlmutter et al. 1997) have found no indication from spectra, light curves, and various subsamples that SNe Ia have evolved between $`z=0`$ and $`z=0.5`$ (Riess et al. 1998; Perlmutter et al. 1999); this evidence will be considered in §4. However, an unexpected luminosity evolution of $``$ 25% over a lookback time of approximately 5 Gyr would be sufficient to nullify the cosmological conclusions. Evolution is a notorious foe, plaguing previous measurements of the global deceleration parameter using brightest cluster galaxies (e.g., Sandage & Hardy 1973). While we cannot hope to prove that the samples of SNe Ia have not evolved, we would increase our confidence in their reliability by adding to the list of ways in which they are similar while failing to discern any way in which they are different.
An important probe into supernova physics and possibly evolution is the rapid rise in luminosity of SNe Ia shortly after explosion. A wellspring of energy from birth, an expanding supernova releases ever dwindling resources of trapped energy from the radioactive decay of <sup>56</sup>Ni and <sup>56</sup>Co. The risetime (i.e., the time interval between explosion and peak) is dictated by the amount and location of synthesized <sup>56</sup>Ni and the opacity of the intervening layers (Leibundgut & Pinto 1992; Nugent et al. 1995; Vacca & Leibundgut 1996).
Theoretical modeling indicates that expected variations in the composition of SN Ia progenitors at high redshift could be accompanied by an evolution in luminosity not accounted for by current empirical distance techniques (Höflich, Wheeler, & Thielemann 1998). One predicted signature of this evolution is an alteration of the risetime.
A preliminary measurement of the risetime using a large set of pre-discovery images of high-redshift ($`z0.5`$) SNe Ia from the SCP was presented by Goldhaber (1998a,b) and Groom (1998). Although the early SN Ia light is not strongly detected for individual objects, the statistics of $``$ 40 different SNe Ia have been used to meaningfully measure the fiducial risetime. The final results from this measurement will be presented by Goldhaber et al. (1999).
We have previously determined the risetime of nearby SNe Ia using a set of discovery and pre-discovery images from a mix of amateur and professional supernova searches (Riess et al. 1999). A comparison of the high-redshift and low-redshift rise behavior (§2) should provide a valuable test of evolution. We discuss systematic errors which could bias this comparison in §3, and the implications in §4.
## 2 Analysis
Details concerning the acquisition and photometric calibration of very early observations of nearby SNe Ia can be found in Riess et al. (1999). These observations include $``$25 new measurements of SNe Ia between 10 and 18 days before $`B`$ maximum. The final analysis of the high-redshift dataset and the details of the SCP’s methods of analysis will be presented in Goldhaber et al. (1999).
It is clear from the observations of nearby SNe Ia that there is considerable inhomogeneity in the risetimes of SNe Ia (Riess et al. 1999). However, individual SN Ia risetimes are well correlated with the post-rise light-curve shape (Goldhaber 1998a,b). Therefore, one can determine both a fiducial risetime and the correlation between the risetime and the post-rise light-curve shape. Riess et al. (1999) explored multiple techniques to quantify the fiducial SN Ia risetime, all with consistent results.
To reliably compare the fiducial risetimes, it is essential to apply the same method of analysis to the high-redshift and low-redshift SN Ia datasets. For this reason, we emulated the stated methodology of Goldhaber (1998a,b) and Groom (1998) in analyzing the low-redshift data.
We used the “stretch” method (Perlmutter et al. 1997) to normalize the $`B`$-band light curves of 10 nearby SNe Ia from Riess et al. (1999) using the same fiducial template, a modified Leibundgut (1989) template, employed by Goldhaber (1998a,b) and Groom (1998). (This template is very similar to the Leibundgut (1989) template.) We performed this normalization using only observations later than 10 days before maximum. To emulate the lack of significant measurements of high-redshift SNe Ia at late times, we discarded observations past 35 days after $`B`$ maximum. The normalization parameters were then applied to the rise data (i.e., the observations earlier than 10 days before $`B`$ maximum). As noted by Riess et al. (1999), this process results in an impressive decrease in the dispersion of the rise data from different SNe Ia.
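The stretch normalization can be sketched as a two-parameter fit of a time-rescaled template. The Gaussian template below is a hypothetical stand-in for the modified Leibundgut (1989) template, and the mock fluxes are generated, not measured:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical smooth stand-in for the fiducial B-band template
# (peak flux 1 at t = 0); the real analyses use the modified
# Leibundgut template instead.
def template(t):
    return np.exp(-0.5 * (t / 12.0) ** 2)

# "Stretch" model: rescale the template's time axis by s and its flux by f0.
def stretched(t, s, f0):
    return f0 * template(t / s)

rng = np.random.default_rng(2)
t_obs = np.linspace(-10, 35, 40)  # only data between -10 d and +35 d, as in the text
flux = stretched(t_obs, 0.94, 1.1) + 0.01 * rng.standard_normal(t_obs.size)

(s_fit, f0_fit), _ = curve_fit(stretched, t_obs, flux, p0=[1.0, 1.0])
print(f"stretch s = {s_fit:.2f}, norm f0 = {f0_fit:.2f}")
```

The fitted stretch and normalization are then applied unchanged to the rise data (earlier than 10 days before maximum), which are excluded from the fit itself.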
After normalization, we fit the same empirical model proposed by Goldhaber (1998a,b) and Groom (1998) to the rise data. This model is motivated by the description of a young SN Ia as a homologously expanding fireball whose initial luminosity is most sensitive to its changing radius (and relatively insensitive to the fractionally smaller changes in photospheric velocity and temperature). The luminosity is
$$L=\alpha (t+t_r)^2,$$
(1)
where $`t`$ is the time elapsed relative to maximum, $`t_r`$ is the risetime, and $`\alpha `$ is the “speed” of the rise.
The two free parameters, $`t_r`$ and $`\alpha `$, were determined by finding the best match between the model and data, i.e., when the standard $`\chi ^2`$ statistic is minimized. The minimum $`\chi _\nu ^2`$ was 0.82, indicating a good concordance between model and data. This fit and confidence intervals of the parameters are shown in Figure 1. Like Goldhaber (1998a,b) and Groom (1998), we identified the time of maximum as the time when the SCP template reaches its brightest magnitude. (Uncertainties in determining the fiducial time of maximum do not affect a consistent comparison.) The resulting parameters from this fit were $`t_r`$=19.98$`\pm 0.15`$ days and $`\alpha `$=0.071$`\pm 0.005`$. Because the SCP template is a good match to the Leibundgut (1989) template, it is not surprising that this risetime is quite similar to the value found by Riess et al. (1999) using the Leibundgut template as the fiducial template. Besides the 29 detections of SNe Ia between explosion and 10 days before maximum (relative to the SCP template), Riess et al. (1999) also provide 4 non-detection limits of SNe Ia in the temporal vicinity of explosion. Unfortunately, these non-detections provide negligible additional constraints; in all cases they are more than 3 mag above the expected luminosity of the SNe Ia (based on the fit to the detections).
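Since $`L=\alpha (t+t_r)^2`$ implies that $`\sqrt{L}`$ is linear in $`t`$, the two-parameter fit can be sketched with a simple linearization. The mock rise data below are generated from the quoted best-fit values (an illustration only; the actual photometry is fit by direct $`\chi ^2`$ minimization with per-point errors):

```python
import numpy as np

rng = np.random.default_rng(1)

# Fiducial values quoted in the text, used here only to generate mock data.
t_r_true, alpha_true = 19.98, 0.071

# Mock early-time fluxes on the rise (t measured relative to B maximum).
t = np.linspace(-19.5, -10.0, 30)
L = alpha_true * (t + t_r_true) ** 2 * (1 + 0.02 * rng.standard_normal(t.size))

# L = alpha (t + t_r)^2  =>  sqrt(L) = sqrt(alpha) * (t + t_r), linear in t.
slope, intercept = np.polyfit(t, np.sqrt(L), 1)
alpha_fit = slope ** 2
t_r_fit = intercept / slope
print(f"t_r = {t_r_fit:.2f} d, alpha = {alpha_fit:.4f}")
```

With only percent-level noise the linearized fit recovers the input risetime and rise speed closely.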
As seen in Figure 1, the post-rise low-redshift data are well fit by the same modified Leibundgut (1989) template used to normalize the high-redshift data, verifying that the two data sets have indeed been normalized to the same light-curve shape. The light curve shown for the high-redshift SNe Ia prior to 10 days before maximum is the best fit of equation (1) to the preliminary SCP data (Goldhaber 1998a,b). This model fit clearly departs from the low-redshift data as seen in the residuals from the fit to the nearby rise in Figure 2. By 13.8 days before maximum, the difference is 0.5 mag. At 15.5 days before maximum, the difference rises to 1 mag.
The low-redshift risetime of 19.98$`\pm 0.15`$ days is significantly longer than the preliminary measurement of the risetime of 17.50$`\pm 0.40`$ days found for high-redshift SNe Ia from the SCP (Goldhaber 1998a,b; Goldhaber et al. 1999 will present final results). The statistical likelihood that these risetime measurements are discrepant is greater than 99.99% (5.8$`\sigma `$).
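The quoted significance follows from combining the two risetime uncertainties in quadrature:

```python
import math

# Risetimes and uncertainties quoted in the text (days).
t_low,  e_low  = 19.98, 0.15   # nearby sample
t_high, e_high = 17.50, 0.40   # preliminary high-redshift SCP value

n_sigma = (t_low - t_high) / math.hypot(e_low, e_high)
print(f"{n_sigma:.1f} sigma")  # -> 5.8 sigma

# The same arithmetic for the unbiased nearby subsample (20.42 +/- 0.34 d).
n_sigma_unbiased = (20.42 - t_high) / math.hypot(0.34, e_high)
print(f"{n_sigma_unbiased:.1f} sigma")  # -> 5.6 sigma
```

The second figure corresponds to the unbiased subsample of nearby SNe Ia discussed in §3.1.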
## 3 Biases and Systematic Errors
### 3.1 Low Redshift
Most of the earliest observations of nearby SNe Ia were recorded with unfiltered CCDs and transformed to the $`B`$ band. Riess et al. (1999) describe in detail a number of systematic tests they performed to appraise the influence of the transformation process on the estimate of the fiducial risetime. Employing different methods to calibrate the observations onto the standard passband system had little effect on the inferred risetime. In addition, the risetime was found to be insensitive to the shape or color of the young SN Ia spectral energy distribution. A comparison between transformed magnitudes and coeval magnitude measurements observed through standard passbands shows excellent agreement with no evidence for systematic departures.
However, the discrepancy between the low-redshift and high-redshift risetimes is independent of any systematic uncertainties in the transformation of unfiltered CCD observations to a standard passband. The reason is that the two earliest (in the “dilated” timeframe of the SCP template) unfiltered SN Ia detections, SN 1998bu and SN 1997bq, were recorded nearly one full day before the explosion time inferred from the SCP data (see Figure 3). Moreover, it is not possible to detect SNe Ia outside the Local Group of galaxies less than $``$0.5 days after explosion with small-aperture telescopes (in this case the Beijing Astronomical Observatory and the amateur telescope of C. Faranda). Therefore, we conclude that the fiducial risetime of the low-redshift SNe Ia must be at least 1.5 days longer than the preliminary risetime measurement inferred by Goldhaber (1998a,b) and Groom (1998) for the high-redshift SNe Ia, independent of the reliability of photometric transformations described in Riess et al. (1999). This difference alone is significant at the 99.99% (3.8$`\sigma `$) confidence level.
Riess et al. (1999) discuss the possibility that an intrinsic dispersion in risetime (for a given light-curve shape) can lead to the preferential inclusion of slowly rising SNe in our nearby sample and a bias in the inferred risetime. Although the best-fit $`\chi _\nu ^2`$ does not support additional intrinsic dispersion, Riess et al. (1999) considered a subsample of SNe Ia whose membership is independent of the rise. From the unbiased set we infer a risetime of 20.42$`\pm 0.34`$ days, inconsistent with the SCP preliminary measurement of the risetime at the 99.99% (5.6$`\sigma `$) confidence level.
The difference in risetimes does not seem to be a result of the stretch method. Even if this method were to distort the true risetime, this should not affect the risetime comparison because the light-curve shapes represented in the nearby sample span the range of light-curve shapes of the SCP sample. The SCP sample of SN Ia light curves has a narrow distribution of stretch factors around unity (Perlmutter et al. 1999). The average light-curve shape for our sample is 94%$`\pm `$9% of the mean width of the SCP light curves. SN Ia light curves in the nearby sample have stretch factors which are both smaller and larger than unity.
### 3.2 High Redshift
Correctly measuring the rise of faint SNe Ia at high redshift is a great technical challenge which must be convincingly overcome before we can trust the implications of the comparison to the low-redshift rise behavior. The differences in the rising curve of the low-redshift and high-redshift SNe Ia are only significant at 12 days before $`B`$ maximum and younger, when SNe Ia are more than 2 magnitudes below their peak brightness. For SNe Ia with redshifts of 0.4 to 0.5, this corresponds to observed $`R`$-band magnitudes of 24.0 to 24.3 (Garnavich et al. 1998), which are $`K`$-corrected to $`B`$ magnitudes of 24.8 to 25.1. Even larger differences between the low and high-redshift rise are evident when the SNe Ia are 3 mag below maximum, requiring observations of high-redshift SNe Ia at $`R`$-band magnitudes of 25.0 to 25.3. These faint fluxes push the limits of what can be accomplished with 4-m-class telescopes under reasonable conditions and moderate integrations. Many of these individual observations of SNe Ia at high redshift have a signal-to-noise ratio near unity, a regime where careful data analysis is required to avoid systematic errors in photometry.
The SCP observations of high-redshift SNe Ia on the rise (i.e., those 10 to 25 days before $`B`$ maximum) originate from reference images. These are observations taken 3 to 4 weeks before a subsequent set of “search” observations and are used to measure host galaxy brightnesses without SN light. SNe Ia found during the search phase are preferentially discovered near maximum brightness. Due to time dilation, the reference observations therefore contain the light of SNe $``$14 to 18 days before maximum. The SCP has taken great care to obtain “final” reference images years after (or before) the SN Ia explosions to accurately assess the amount of SN Ia light in the original reference images (Perlmutter et al. 1999). They also employ light curves of SNe Ia to determine the amount of any residual light remaining in the final reference images (Perlmutter 1999). Consequently, we expect the SCP measurement of the risetime of high-redshift SNe Ia to be accurate and comparisons to the low-redshift risetime to be meaningful.
However, a very powerful crosscheck of systematic errors on the SCP’s faint photometry of young high-redshift SNe Ia comes from examining their data of SNe Ia well past maximum when the SNe Ia are again of similar brightness. In the age range of 35 to 45 days past maximum, the nearby SNe Ia are 2.7 to 3.1 magnitudes below their maximum brightness. This is the same flux range at which differences of 0.6 to 0.8 mag are evident in the similarly normalized low and high-redshift rising SN Ia light curves. Figure 4 shows a comparison of the low and high-redshift behavior of SNe Ia (normalized to the composite light curve of the high-redshift SNe Ia) on the rising and declining sides of maximum at identical magnitudes. This comparison demonstrates a high degree of concurrence of the declining light curves at the same magnitude levels where the disparity occurs on the rising light curves. The difference in the mean between the high-redshift and low-redshift magnitudes on the decline and in this magnitude range is less than 0.02 mag, indicating that conspicuous systematic errors in the faint SCP photometry cannot explain the differences in the rise behavior. It is important to note that the data in the age range of 35 to 45 days past maximum were not used in the process of normalizing the light curves, making this an independent test.
As discussed in §3.1, an intrinsic dispersion in SN Ia risetime (for SNe Ia with the same post-rise light-curve shape), together with a selection criterion related to the brightness of SNe Ia on the rise, could also bias the high-redshift measurement. Because high-redshift SNe Ia are discovered by their appearance in differenced images, SNe Ia which are fainter than average in the reference observations will display a larger change in the “search” images. This effect would seem to favor the discovery of SNe Ia which are faster or have shorter risetimes than the average for a given light-curve shape. However, the individual measurement uncertainties in the SCP reference observations are larger than the differences in the low-redshift and high-redshift rise (Goldhaber 1998a,b). Therefore the criterion used to discover a SN Ia, the signal-to-noise ratio of the change in flux, is unaltered by changes in flux at this level. Because SNe Ia in the high-redshift sample and the unbiased subset of the nearby sample were not discovered during the early rise, the apparent difference in the samples’ risetimes is not simply a result of a selection bias.
The transformation of observed high-redshift SN Ia magnitudes to rest-frame magnitudes requires the application of cross-filter $`K`$-corrections (Kim, Goobar, & Perlmutter 1996). Systematic errors in these corrections are likely at the 0.03 to 0.05 mag level (Perlmutter et al. 1999; Kim et al. 1996; Schmidt et al. 1998) but cannot explain the observed differences of $`\sim `$0.7 mag at 14-15 days before $`B`$ maximum.
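The structure of such a cross-filter $`K`$-correction can be sketched numerically. The following is a schematic rendering in the spirit of the Kim et al. (1996) definition; the flat source spectrum, the top-hat filter edges, the wavelength range, and the omitted zero-point terms are all illustrative assumptions, not the actual SN Ia calibration used by either team.

```python
import math

def kcorr_xy(z, spectrum, filt_x, filt_y, n=2000):
    """Schematic cross-filter K-correction (Kim et al. 1996 form,
    zero-point terms omitted):
      K_xy = 2.5 log10(1+z)
           + 2.5 log10[ int F(l) S_x(l) dl / int F(l/(1+z)) S_y(l) dl ]
    """
    lam_lo, lam_hi = 300.0, 1000.0                 # toy integration range, nm
    dlam = (lam_hi - lam_lo) / n
    grid = [lam_lo + (i + 0.5) * dlam for i in range(n)]   # midpoint rule
    num = sum(spectrum(l) * filt_x(l) for l in grid) * dlam
    den = sum(spectrum(l / (1 + z)) * filt_y(l) for l in grid) * dlam
    return 2.5 * math.log10(1 + z) + 2.5 * math.log10(num / den)

flat = lambda lam: 1.0                              # toy flat spectrum
B = lambda lam: 1.0 if 400 <= lam <= 490 else 0.0   # toy rest-frame B top-hat
R = lambda lam: 1.0 if 570 <= lam <= 730 else 0.0   # toy observed R top-hat

print(abs(kcorr_xy(0.0, flat, B, B)) < 1e-9)        # same filter, z=0: K=0 exactly
print(kcorr_xy(0.45, flat, B, R))                   # rest-frame B redshifted into R
```

With real SN Ia spectra and filter transmission curves in place of these toys, this is the correction whose residual systematic uncertainty is estimated above at the 0.03 to 0.05 mag level.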
Additional sources of systematic errors in the SCP measurement of the SN Ia high-redshift risetime are best considered by Goldhaber et al. (1999).
Despite the above tests, it would be desirable to have an additional crosscheck in the form of an independent measurement of the high-redshift risetime. Although the data of the SCP are currently the most extensive available on the rise of high-redshift SNe Ia, early detections were made by the High-$`z`$ team of SN 1995K at $`z=0.48`$ (Schmidt et al. 1998) and SN 1996K at $`z=0.38`$ (Riess et al. 1998). Unfortunately, with only two data points it is not possible to derive an independent, meaningful comparison to the risetime measurements. More early rise data are needed from the High-$`z`$ team to yield an accurate measurement of the risetime.
## 4 Discussion
Our measurement of the risetime from nearby SNe Ia is inconsistent with the preliminary measurement of the risetime of high-redshift SNe Ia inferred by the SCP (Goldhaber 1998a,b, Groom 1998; see Goldhaber et al. 1999 for final results) with high statistical confidence (5.8$`\sigma `$). The sense of the difference is that the low-redshift risetime measurement is 2.5$`\pm 0.4`$ days longer than the high-redshift measurement. This difference must be either the result of a systematic error in the measurements or intrinsic to SNe Ia. No compelling source of systematic error was found in §3 which could bring the low-redshift and high-redshift risetimes into concordance. However, systematic errors in the preliminary high-redshift measurement are best addressed by Goldhaber et al. (1999).
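The quoted significance follows from simple propagation of the two risetime uncertainties (19.98 ± 0.15 days nearby versus 17.5 ± 0.4 days at high redshift; see the Fig. 1 caption). A minimal check, which also shows how the significance would degrade if the high-redshift uncertainty were enlarged as discussed later in this section:

```python
import math

def risetime_difference(t_low, e_low, t_high, e_high):
    """Difference of two independent risetime estimates and its significance."""
    diff = t_low - t_high
    err = math.hypot(e_low, e_high)          # independent errors add in quadrature
    return diff, err, diff / err

# Values quoted in the text and Fig. 1 caption
diff, err, nsig = risetime_difference(19.98, 0.15, 17.5, 0.4)
print(round(diff, 2), round(err, 2), round(nsig, 1))   # 2.48 0.43 5.8

# With a high-z uncertainty of 1.0-1.5 days the significance drops to ~2 sigma:
print(round(risetime_difference(19.98, 0.15, 17.5, 1.0)[2], 1))  # 2.5
print(round(risetime_difference(19.98, 0.15, 17.5, 1.5)[2], 1))  # 1.6
```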
We attempted to follow the methodology of Goldhaber (1998a,b) and Groom (1998) in making this comparison, and if they have correctly followed their stated methodology, we believe the difference reported here is significant. This difference can only be rendered insignificant if future analysis of the high-redshift data concludes that a substantial error was made in determining the high-redshift risetime or its uncertainty. The significantly higher quality of the low-redshift photometry and the existence of strong early detections of nearby SNe Ia makes it highly unlikely that a systematic error in the low-redshift risetime measurement could alone bring the disparate risetimes into concordance. A further comparison between the data sets cannot be made at this time due to the unavailability of the SCP photometry.
It is surprising, considering the relatively low signal-to-noise ratio of the high-redshift SN Ia photometry, that a measurement of the risetime could be made to within the stated precision (1$`\sigma `$ = 0.4 days; Groom 1998). Indeed, further careful analysis of the high-redshift data could result in a larger measurement uncertainty. For example, if the precision of the high-redshift risetime measurement decreased significantly to 1$`\sigma `$ = 1 to 1.5 days, the significance of the difference in the measured risetimes would reduce to only $`\sim `$95% (i.e., 2$`\sigma `$). If further analysis indicates an extreme increase in the uncertainty of the high-redshift risetime measurement, the current precision of the comparison of the risetimes as a test of evolution might become insufficient to reach a robust conclusion.
If the difference in the risetime behavior is intrinsic to the SNe Ia, this would likely indicate an evolution of the characteristics of SN Ia explosions. Synonymous with evolution is the existence of a previously unknown, additional parameter not included in the current one-parameter empirical description of SN Ia light curves whose typical value evolves with redshift. What would be the implications of an evolution of SNe Ia between $`z=0`$ and $`z=0.5`$?
An evolution of SNe Ia may reveal a redshift-dependent variation in the composition of SN Ia progenitors (Ruiz-Lapuente & Canal 1998; Livio 1999; Kobayashi et al. 1998). The construction of SN Ia progenitors is limited by the time required for stars to reach their degenerate endstates and for the transfer of sufficient material from the donor star. Because white dwarfs born from low-mass stars will be absent at high redshifts, some evolution of SNe Ia may be expected. The apparent evolution of SNe Ia may augment our currently limited understanding of the nature of SN Ia progenitors.
SNe Ia have also been employed as a powerful tool for measuring cosmological parameters. In this role, the observed faintness of high-redshift SNe Ia has been taken as evidence for a current acceleration of the Universe due to a cosmological constant (Riess et al. 1998; Perlmutter et al. 1999). If SNe Ia are evolving, how are previous measurements of cosmological parameters from SNe Ia affected?
The observed evolution of SNe Ia during their rise could only impact distance measurements if this evolution extends to the post-rise development. Only observations of the brightness and colors of SNe Ia near peak and a few weeks thereafter have been used to estimate their distances.
The observation of different risetimes for SNe with similar subsequent light-curve shapes may signal a breakdown of the previous one-parameter empirical models. Unfortunately, we cannot yet directly infer the size or direction of the effect on the cosmological parameters.
A pure empiricist could be guided simply by Ockham’s razor to conclude that the two unexpected characteristics of high-redshift SNe Ia (that they appear to rise more quickly and to be systematically dimmer than expected) are most economically explained by a single hypothesis: they have evolved. Such evolution might be expected between high and low redshifts where variations in metallicity and progenitor ages must occur. This hypothesis would obviate the need for a cosmological constant.
However, as noted by Schmidt et al. (1998), Riess et al. (1998), and Perlmutter et al. (1999), the sample of nearby SNe Ia already spans an impressive range of environments and stellar populations. SNe Ia hosted by early-type, late-type, and starburst galaxies show no systematic differences in their distance estimates. The relative reliability of SN Ia distance measurements across expected variations in progenitor properties in the nearby Universe is arguably the best evidence that evolution since $`z\approx 0.5`$ should not affect cosmological measurements. Yet it has not been determined if the specific environments of progenitors in different host galaxy types vary substantially. The individual environments of SNe Ia in the nearby Universe must be investigated before we can infer whether or not their assumed variability provides evidence against evolution. It is important to note that none of the host galaxies used to measure the low-redshift risetime are early-type galaxies. The complete distribution of host galaxy types used to measure the high-redshift risetime is not yet known (Perlmutter et al. 1999).
Semi-empirical methods used to calibrate the maximum luminosity of SNe Ia suggest that the peak luminosity may be affected by a change in the risetime, but without additional information these methods give opposite indications of the direction of the change in peak luminosity (Nugent et al. 1995). Treating a SN Ia as an expanding photosphere with all other variables unaffected, a shorter risetime results in a dimmer SN Ia at peak. Alternately, a determination of the peak luminosity from the instantaneous rate of radioactive energy deposition indicates that a shorter risetime yields a brighter peak. Because the expanding photosphere determination of the peak luminosity is a steeper function of the risetime, the methods together suggest that a shorter risetime yields a somewhat dimmer peak (Nugent et al. 1995).
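The second of these arguments can be made concrete with the instantaneous energy deposition from the Ni-Co-Fe decay chain. The decay coefficients below are the commonly quoted literature values, adopted here purely as an illustrative assumption; under an Arnett-type rule the peak luminosity tracks the deposition rate at the time of maximum, so a shorter risetime samples the decay curve earlier and yields a brighter peak.

```python
import math

def deposition_rate(t_days, m_ni=0.6):
    """Instantaneous radioactive energy deposition (erg/s) from the
    56Ni -> 56Co -> 56Fe chain for m_ni solar masses of nickel.
    Coefficients are commonly quoted values (illustrative assumption)."""
    return (6.45e43 * math.exp(-t_days / 8.8)      # 56Ni decay term
            + 1.45e43 * math.exp(-t_days / 111.3)  # 56Co decay term
            ) * m_ni

# Arnett-type rule: L_peak ~ eps(t_rise) * M_Ni.  Comparing the two quoted
# risetimes, the faster riser peaks while the deposition rate is higher:
l_fast = deposition_rate(17.5)   # high-z risetime (days)
l_slow = deposition_rate(20.0)   # nearby risetime (days)
print(l_fast > l_slow, round(l_fast / l_slow, 2))  # True 1.13
```

This captures only the deposition-rate half of the argument; as noted above, the expanding-photosphere estimate pulls in the opposite direction.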
Because the perceived differences are only apparent when the SNe Ia are very young, we might conclude that physical differences exist only on the surface or outer layer of the SNe. This conclusion resonates with the observation that spectroscopic differences among normal and overluminous SNe Ia are most apparent at early times (e.g., Filippenko et al. 1992; Phillips et al. 1992; Li et al. 1999). These superficial differences may be related to an aspect of the material recently accreted (but not thoroughly processed) onto the progenitor. If this parameter were the metallicity of the accreted material, it would not be surprising that the opacity and hence the risetime could be affected. A lower surface metallicity, expected at high redshift, would produce a faster or shorter risetime, in concordance with our results. However, modeling indicates that the photosphere of a SN Ia may recede below the surface layer of unprocessed material in only a few days (Höflich 1999; see also Lentz et al. 1999).
If the only source of the observed risetime difference is the surface of the SN Ia, how would the peak luminosity be affected? As a fraction of the peak output, the difference in the total energy lost during a short or long rise is negligible. If the conditions necessary for explosion are dictated by the progenitor mass or properties near the center, the variations in the surface chemistry would not affect the size of the energy source (i.e., the <sup>56</sup>Ni mass). Once the photosphere receded beneath the surface, the subsequent evolution of the SN Ia including the peak luminosity may be unaltered.
Because the risetime is a function of the diffusion time of energy from the decay of <sup>56</sup>Ni to the surface, the observed difference in risetimes could indicate a variation in the initial location of the synthesized <sup>56</sup>Ni. Longer risetimes would result from SNe Ia which had a greater depth of intermediate-mass elements covering the <sup>56</sup>Ni. If this variation is caused only by mixing, the differences in the diffusion times at peak should be negligible, resulting in little or no variation in peak luminosity (Pinto 1999).
Yet, it is also possible that the risetime difference could be indicative of a deeper evolution of the SN Ia explosion which is only observable at the surface where the unburnt material resides. If the change in the risetime results from a more complex alteration of the SN Ia physics, we cannot easily infer the effect on the post-rise light-curve.
We might hope to employ detailed modeling of SN Ia explosions to gauge the effect that the observed evolution has on measurements of the cosmological parameters. However, the inability of current theory to adequately model many of the observed characteristics of SNe Ia engenders little faith that theory alone can be used to predict the consequences of the observed evolution. Specifically, the value of the fiducial risetime and the trend between risetime and peak luminosity (or decline rate) is in poor concordance with most available theoretical models (Riess et al. 1999).
Theoretical models have indicated that differences in white dwarf carbon-to-oxygen (C/O) ratios should produce variations in SN Ia explosions (Höflich, Wheeler, & Thielemann 1998). The similarity between stellar and cosmological timescales leads to the conclusion that the white dwarfs which produce high-redshift SNe Ia originate, on average, in younger and hence more massive stars than today’s SNe Ia (von Hippel, Bothun, & Schommer 1997). Variations in the C/O ratio may be a natural consequence of white dwarfs which evolved from different stellar masses.
Höflich et al. (1998) predict that decreasing the progenitor’s C/O ratio by 60% produces a SN Ia with the same decline rate, yet is 30% brighter and requires 3 days longer to reach maximum brightness. Thus, if low-redshift SNe Ia have significantly smaller C/O ratios than high-redshift SNe Ia, the direction and size of this effect would obviate the need for a cosmological constant or an accelerating Universe to explain the observations of low and high-redshift SNe Ia. However, Höflich et al. (1999) also expect that more massive stars would yield white dwarfs with lower C/O ratios. At higher redshifts, lower mass stars have not yet had time to become white dwarfs. Therefore, the theoretical prediction would be for higher redshift progenitors to give rise to more slowly rising SNe Ia, which is opposite to the observed trend reported here. In addition, others suggest an inverse relation to that of Höflich et al. (1998) between the C/O ratio and luminosity (Umeda et al. 1999). More work is needed to understand this complicated process.
Observations at high and low redshifts can directly test for risetime evolution and its cosmological implications. An exploration of the rise behavior of nearby SNe Ia born in a wide range of environments may reveal objects more similar to those at high redshift; by determining the relative luminosity of “fast” and “slow” rising SNe Ia in the nearby Universe we could directly evaluate the impact of the apparent evolution on cosmological inferences. Comparisons of the spectra of high and low-redshift SNe Ia observed during the early rise, for example, may indicate systematic differences. A measurement of the rise behavior of SNe Ia at $`z\approx 0.2`$ should yield results which are between those presented here and the preliminary SCP measurement. Finally, the most challenging but potentially most fruitful way to explore the role of evolution in the current cosmological measurements is by extending the measurement of high-redshift SNe Ia to $`z>1.0`$, where the effect of luminosity evolution is likely to diverge from that of a vacuum energy density. As seen in Figure 5, a simple linear luminosity evolution of SNe Ia mimics the effects of a cosmological constant and mass density only in a specific redshift range. Continued degeneracy between evolution and cosmology at additional redshifts can only be envisioned by the most imaginative (and sadistic) minds.
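The degeneracy sketched in Figure 5 can be illustrated numerically: absorb the distance-modulus difference between a flat Lambda model and an open no-Lambda model at $`z=0.5`$ into a linear luminosity-evolution term, then check how badly the linear term extrapolates to higher redshift. The parameter values below are illustrative choices for this sketch, not the fitted values of Riess et al. (1998).

```python
import math

def comoving_distance(z, om, ol, n=2000):
    """Comoving distance in units of c/H0 (midpoint rule), with the
    open-universe curvature correction when Omega_k > 0."""
    ok = 1.0 - om - ol
    dz = z / n
    dc = sum(dz / math.sqrt(om * (1 + zi) ** 3 + ok * (1 + zi) ** 2 + ol)
             for zi in (dz * (i + 0.5) for i in range(n)))
    if ok > 1e-12:
        dc = math.sinh(math.sqrt(ok) * dc) / math.sqrt(ok)
    return dc

def dist_mod_diff(z, om=0.3, ol=0.7):
    """mu(flat Lambda) - mu(open, no Lambda) in magnitudes."""
    dl_lam = (1 + z) * comoving_distance(z, om, ol)
    dl_open = (1 + z) * comoving_distance(z, om, 0.0)
    return 5 * math.log10(dl_lam / dl_open)

beta = dist_mod_diff(0.5) / 0.5              # linear evolution tuned at z = 0.5
resid_05 = dist_mod_diff(0.5) - beta * 0.5   # zero by construction
resid_15 = dist_mod_diff(1.5) - beta * 1.5   # mismatch at z = 1.5
print(round(resid_05, 6), round(resid_15, 2))
```

By construction the residual vanishes at $`z=0.5`$, while at $`z=1.5`$ the linear evolution term overshoots the cosmological dimming by several tenths of a magnitude; this divergence is the leverage that SNe Ia at $`z>1`$ would provide.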
We are indebted to Mark Armstrong, Eric Thouvenot and Chuck Faranda for providing the CCD images of young SNe Ia. We wish to thank Ed Moran, Peter Meikle, Peter Nugent, Gerson Goldhaber, Saul Perlmutter, Don Groom, Robert Kirshner, Peter Garnavich, Saurabh Jha, Nick Suntzeff and Doug Leonard for helpful discussions. The Aspen Center for Physics provided a stimulating environment in which these results were discussed. The work at U.C. Berkeley was supported by the Miller Institute for Basic Research in Science, by NSF grant AST-9417213, and by grant GO-7505 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
References
Filippenko, A.V. et al. 1992, ApJ, 384, L15
Garnavich, P., et al. 1998, ApJ, 493, 53
Goldhaber, G., 1998a, B.A.A.S., 193, 4713
Goldhaber, G., 1998b, in “Gravity: From the Hubble Length to the Planck Length ,” SLAC Summer Institute (Stanford, CA: Stanford Linear Accelerator Center)
Goldhaber, G., et al., 1999, in preparation
Groom, D. E., 1998, B.A.A.S., 193, 11102
Höflich, P., Wheeler, J. C., & Thielemann, F. K., 1998, ApJ, 495, 617
Höflich, P., et al., 1999, in preparation
Jha, S., et al., 1999, ApJSS, in press (astro-ph/9906220)
Kim, A., Goobar, A., & Perlmutter, S. 1996, PASP, 108, 190
Kobayashi, C., Tsujimoto, T., Nomoto, K., Hachisu, I., Kato, M., 1998, ApJ, 503, 155
Leibundgut, B. 1989, PhD thesis, University of Basel
Leibundgut, B., & Pinto, P. A., 1992, ApJ, 401, 49
Lentz, E. J., Baron, E., Branch, D., Hauschildt, P. H, & Nugent, P. E., 1999, ApJ, submitted (astro-ph/9906016)
Li, W., et al., 1999, AJ, in press
Livio, M., 1999, astro-ph/9903264
Nugent, P., Branch, D., Baron, E., Fisher, A., & Vaughan, T., 1995, PRL, 75, 394 (Erratum: 75, 1874)
Perlmutter, S., 1999, private communication
Perlmutter, S., et al., 1997, ApJ, 483, 565
Perlmutter, S., et al., 1999, ApJ, 517, 565
Phillips, M. M., Wells, L., Suntzeff, N., Hamuy, M., Leibundgut, B., Kirshner, R. P., & Foltz, C. 1992, AJ, 103, 1632
Pinto, P., 1999, private communication
Riess, A. G., et al., 1998, AJ, 116, 1009
Riess, A. G., et al., 1999, AJ, submitted
Ruiz-Lapuente, P., & Canal, R. 1998, ApJ, 497, 57
Sandage, A., & Hardy, E., 1973, ApJ, 183, 743
Schmidt, B. P, et al. 1998, ApJ, 507, 46
Umeda, H. et al., 1999, astro-ph/9906192
Vacca, W. D., & Leibundgut, B., 1996, ApJ, 471, L37
von Hippel, T., Bothun, G. D., & Schommer, R. A. 1997, AJ, 114, 1154
FIGURE CAPTIONS:
Fig 1.-$`B`$-band data of nearby SNe Ia normalized by the “stretch” method to the same modified Leibundgut (1989) template used to normalize the high-redshift SNe Ia (Goldhaber 1998a,b) and the inferred risetime parameters. The observation times of the individual SNe Ia are dilated to provide the best fit of the post-rise (i.e., after 10 days before maximum) data (diamonds) to the modified Leibundgut (1989) template (asterisks). Fitting a quadratic rise model to the nearby rise data (filled circles) yields confidence intervals for the fiducial speed and risetime to $`B`$ maximum. The preliminary best fit of the same model to the high-redshift data is shown as a dashed line. The best estimate of the risetime of nearby SNe Ia is 19.98$`\pm 0.15`$ days, significantly longer than, and statistically inconsistent with, the preliminary measurement of the risetime of high-redshift SNe Ia of 17.5$`\pm `$0.4 days (Goldhaber 1998a,b).
Fig 2.-Residuals from the best fit rise model of the nearby SNe Ia. Individual observations of nearby SNe Ia are shown as filled circles and the preliminary best model fit to the high-redshift data is shown as a dashed line.
Fig 3.-Pre-explosion, post-explosion, and maximum light observations of SN 1998bu. After dilating to match the modified Leibundgut template, the detection by Faranda of SN 1998bu (middle panel) is 18.5 days before $`B`$ maximum and 1 day before the explosion time expected from the high-redshift SNe Ia. The existence of this detection is strong evidence that the risetime to $`B`$ maximum is at least 1.5 days greater than the preliminary value inferred by Goldhaber (1998a,b) and Groom (1998) for high-redshift SNe Ia. The image at maximum light is from Jha et al. (1999).
Fig 4.-Comparison of nearby and high-redshift SNe Ia at similar magnitudes below maximum on the rise and decline. The excellent agreement between the two samples on the decline is strong evidence that systematic errors incurred in measuring faint SNe at high redshift on the rise are not the cause of the apparent difference between fits to the rise of the samples.
Fig 5.-The degeneracy between simple linear evolution and cosmological parameters on the Hubble diagram of nearby and high-redshift SNe Ia. The possible confusion between the effect of evolution and a cosmological constant on SN Ia distances could be resolved by additional measurements of SNe Ia at redshifts greater than one. The data shown are from Riess et al. (1998).
# Two-scale localization in disordered wires in a magnetic field.
## Abstract
Calculating the density-density correlation function for disordered wires, we study localization properties of wave functions in a magnetic field. The supersymmetry technique combined with the transfer matrix method is used. It is demonstrated that at arbitrarily weak magnetic field the far tail of the wave functions decays with the length $`L_{\mathrm{cu}}=2L_{\mathrm{co}}`$, where $`L_{\mathrm{co}}`$ and $`L_{\mathrm{cu}}`$ are the localization lengths in the absence of a magnetic field and in a strong magnetic field, respectively. At shorter distances, the decay of the wave functions is characterized by the length $`L_{\mathrm{co}}`$. Increasing the magnetic field broadens the region of the decay with the length $`L_{\mathrm{cu}}`$, leading finally to the decay with $`L_{\mathrm{cu}}`$ at all distances. In other words, the crossover between the orthogonal and unitary ensembles in disordered wires is characterized by two localization lengths. This peculiar behavior must result in two different temperature regimes in the hopping conductivity with the boundary between them depending on the magnetic field.
Disordered systems have been under intensive study for several decades but only recently strong localization at moderate disorder in 1D wires has become the subject of a systematic experimental investigation . The authors of Ref. studied electron transport in submicrometer-wide wires fabricated from Si $`\delta `$-doped GaAs. A large number of the wires were connected in parallel and the conductivity of the entire system was measured as a function of temperature $`T`$ and magnetic field $`H`$ applied perpendicular to the wires. The activated temperature dependence observed made it possible to demonstrate the exponential localization and to extract the dependence of the localization length $`L_c`$ on the magnetic field. It was found that in a strong magnetic field the localization length is twice as large as in the absence of the magnetic field.
Localization due to quantum interference is a fundamental property of disordered one-dimensional (1D) systems. Mott and Twose predicted that at arbitrarily weak disorder all states of a disordered chain had to become localized, which has been proven later . Thouless realized that all wave functions had to be localized also in thick wires. Such wires are more interesting because electrons can diffuse at distances exceeding the mean free path $`l`$ but get localized on the localization length $`L_\mathrm{c}=l\left(p_0^2S\right)`$, where $`p_0`$ is the Fermi momentum and $`S`$ is the cross-section of the wire. Explicit calculations confirmed this picture for thick wires as well as for systems of coupled chains . In contrast to the chains, the electron motion in thick wires is sensitive to an external magnetic field. A remarkable effect, namely, the doubling of the localization length when applying a strong magnetic field $`H`$ was predicted for such systems . It is this doubling that has been observed in Ref..
Surprisingly, little attention has been paid to the crossover between the limits of zero and strong magnetic fields $`H`$ (between the orthogonal and unitary ensembles). Study of this crossover is important, however, because the localization length can be measured experimentally. Except for an interpolation formula for the localization length suggested in Ref. and a numerical study of Ref. , practically no attempt has been made to describe the crossover. Apparently, the absence of attention has been due to a common belief that the crossover is simple and the only thing that can happen is that the localization length changes smoothly between its value $`L_{\mathrm{co}}`$ at zero field and $`L_{\mathrm{cu}}=2L_{\mathrm{co}}`$ at a strong magnetic field. Such a scenario was used, for example, by the authors of Ref. in their attempt to fit the data with the interpolation curve of Ref. , suggested basically to describe the smooth change of the localization length.
In this Letter we present results of an analytical study of a correlation function describing the decay of wave functions in disordered quantum wires. In contrast to the one-scale picture, the behavior of the wave functions at finite magnetic fields turns out to be more complicated. The most striking feature of this decay is that, even at very weak magnetic fields, the far tail of the wave functions falls off with the length $`L_{\mathrm{cu}}`$, whereas at smaller distances the decay is governed by the length $`L_{\mathrm{co}}`$. The larger length $`L_{\mathrm{cu}}`$ does not depend on the strength of the magnetic field. Increasing the magnetic field results in a broadening of the asymptotic region until the entire wave function starts decaying with the length $`L_{\mathrm{cu}}`$.
The behavior of localized wave functions can be well described by the correlation function $`p_{\mathrm{\infty }}\left(r\right)`$
$$p_{\mathrm{\infty }}\left(r\right)=\underset{\alpha }{\sum }\left|\psi _\alpha \left(0\right)\right|^2\left|\psi _\alpha \left(r\right)\right|^2\delta \left(\epsilon -\epsilon _\alpha \right),$$
(1)
where $`\psi _\alpha \left(r\right)`$ and $`\epsilon _\alpha `$ are the eigenfunction and eigenenergy of a state $`\alpha `$, $`r>0`$ is the coordinate along the wire, and the angle brackets stay for averaging over disorder.
The function $`p_{\mathrm{\infty }}\left(r\right)`$ is important not only for a theoretical description of localized wave functions but it also directly determines the hopping conductivity at low temperature. According to results obtained quite long ago , the hopping conductivity of one-dimensional chains and wires is described by a simple formula
$$\sigma =\sigma _0\mathrm{exp}\left(-T_0/T\right),\qquad T_0\sim \left(\nu _1L_\mathrm{c}\right)^{-1},\qquad \nu _1=\nu S,$$
(2)
where $`\nu `$ is the density of states, $`\nu _1`$ is the one-dimensional density of states, $`S`$ is the cross-section of the wire and $`L_\mathrm{c}`$ is a localization length that can be extracted from the function $`p_{\mathrm{\infty }}\left(r\right)`$, Eq. (1). The pre-factor $`\sigma _0`$ depends on parameters of the model. Remarkably, even the Coulomb interaction does not change the temperature dependence, Eq. (2), entering $`\sigma _0`$ only . Equation (2) was used in Ref. for extracting the localization length from the temperature dependence of the conductivity.
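Since Eq. (2) predicts $`\mathrm{ln}\sigma `$ to be linear in $`1/T`$, $`T_0`$ (and hence the localization length) can be read off from the slope of an Arrhenius plot. A minimal sketch of this extraction on synthetic data (the numerical values are illustrative, not the measured ones):

```python
import math

def fit_t0(temps, sigmas):
    """Least-squares fit of ln(sigma) = ln(sigma0) - T0/T; returns (sigma0, T0)."""
    xs = [1.0 / t for t in temps]
    ys = [math.log(s) for s in sigmas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope

# Synthetic activated data with sigma0 = 2.0 and T0 = 15 (k_B = 1 units)
temps = [1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
sigmas = [2.0 * math.exp(-15.0 / t) for t in temps]
sigma0, t0 = fit_t0(temps, sigmas)
print(round(sigma0, 3), round(t0, 3))   # recovers 2.0 and 15.0

# With T0 ~ 1/(nu_1 * L_c), the localization length then follows as
# L_c ~ 1/(nu_1 * T0) once the 1D density of states nu_1 is known.
```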
The correlation function $`p_{\mathrm{\infty }}\left(r\right)`$ can be found using the supersymmetry technique . Let us first discuss the final result in a weak field, which reads
$`p_{\mathrm{\infty }}(r)\simeq {\displaystyle \frac{1}{4\sqrt{\pi }L_{\mathrm{co}}}}\left({\displaystyle \frac{\pi ^2}{8}}\right)^2\left({\displaystyle \frac{4L_{\mathrm{co}}}{r}}\right)^{3/2}`$ (3)
$`\times \left[\mathrm{exp}\left(-{\displaystyle \frac{r}{4L_{\mathrm{co}}}}\right)+\alpha X^2\mathrm{ln}^2X\mathrm{exp}\left(-{\displaystyle \frac{r}{4L_{\mathrm{cu}}}}\right)\right]`$ (4)
where $`L_{\mathrm{co}}=\pi \nu _1D_0`$ is the localization length for the orthogonal ensemble, $`L_{\mathrm{cu}}=2L_{\mathrm{co}}`$ is the localization length for the unitary ensemble, $`D_0`$ is the classical diffusion coefficient, and $`\alpha `$ is a positive constant of order unity. The parameter $`X=2\pi \varphi /\varphi _0`$ describes the crossover between the orthogonal and unitary ensemble, $`\varphi _0=hc/e`$ is the flux quantum, and $`\varphi =2HL_{\mathrm{co}}\langle y^2\rangle _{\mathrm{sec}}^{1/2}`$ is a characteristic magnetic flux through an area limited by the localization length. The coordinate $`y`$ is perpendicular to the direction of the wire and the magnetic field. The symbol $`\langle \dots \rangle _{\mathrm{sec}}`$ stands for the averaging across the wire. For the “flat” wires made on the basis of a 2D gas as in Ref. , one has $`\langle y^2\rangle ^{1/2}=d/\sqrt{12}`$, where $`d`$ is the width of the wire. For wires with a circular cross-section $`\langle y^2\rangle ^{1/2}=d/4`$, where $`d`$ is the diameter. Equation (4) is written in the limit $`X\ll 1`$ and $`r\gg L_{\mathrm{co}}`$. At $`X=0,`$ Eq. (4) reduces to the well known result for the wire without a magnetic field . At small but finite $`X\ll 1`$, the second term in Eq. (4) is small for not very large $`r`$ but in the limit $`r\to \mathrm{\infty }`$ it is always larger than the first one.
Equation (4) shows that a weak magnetic field changes drastically the tail of wave functions, leaving their main body almost unchanged. Even if there is a correction to the localization length $`L_{\mathrm{co}}`$, the second term in Eq. (4) is more important in the limit of large $`r`$. In other words, the characteristic behavior of wave functions at distances $`r\lesssim r_X`$ is described by formulae of the orthogonal ensemble, while the tail $`r\gtrsim r_X`$ corresponds to the unitary one. Comparing the first and the second term in Eq. (4) one estimates the characteristic distance $`r_X`$
$$r_X\sim L_\mathrm{c}\left|\mathrm{ln}X\right|.$$
(5)
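Equation (5) follows from equating the two terms of Eq. (4): with $`L_{\mathrm{cu}}=2L_{\mathrm{co}}`$ the crossing point is $`r_X=8L_{\mathrm{co}}\mathrm{ln}\left[1/\left(\alpha X^2\mathrm{ln}^2X\right)\right]\approx 16L_{\mathrm{co}}\mathrm{ln}(1/X)`$ for $`X\ll 1`$. A short numerical check of this estimate (with $`\alpha =1`$ as an illustrative choice, since Eq. (4) fixes it only to order unity):

```python
import math

def crossover_r(x, l_co=1.0, alpha=1.0):
    """Distance beyond which the 'unitary' term of Eq. (4) dominates."""
    a = alpha * x ** 2 * math.log(x) ** 2   # relative weight of the unitary term
    # exp(-r/4L) equals a*exp(-r/8L) when exp(-r/8L) = a:
    return 8.0 * l_co * math.log(1.0 / a)

def term_diff(r, x, l_co=1.0, alpha=1.0):
    """Difference of the two terms in the bracket of Eq. (4)."""
    return (math.exp(-r / (4 * l_co))
            - alpha * x ** 2 * math.log(x) ** 2 * math.exp(-r / (8 * l_co)))

for x in (1e-2, 1e-3, 1e-4):
    r = crossover_r(x)
    assert abs(term_diff(r, x)) < 1e-12     # the two terms really cross at r_X
    print(x, round(r, 2), round(r / (16 * math.log(1 / x)), 3))
# r_X grows like 16 L_co ln(1/X); the last column drifts slowly toward unity
# as X decreases (the subleading ln ln correction).
```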
The fact that any weak magnetic field changes the tail of the wave function looks quite natural because regions where the amplitude of the wave function is very small must be very sensitive to external perturbations. The localization of the wave functions is a result of multiple interference of waves scattered by impurities. The tails are formed by a coherent propagation of waves along very long paths, which makes the influence of any weak magnetic field on the interference extremely important. The separation into the “orthogonal” and “unitary” parts loses its meaning at $`X\sim 1`$, corresponding to the characteristic magnetic fields determining the crossover between the orthogonal and unitary ensembles.
An analogous change of the asymptotic behavior in a magnetic field occurs in 0D (see, e.g.,). The level-level correlation function for the orthogonal ensemble is proportional to the energy $`\omega `$ in the limit $`\omega \to 0`$ but any weak magnetic field changes this behavior to an $`\omega ^2`$-dependence, which is characteristic for the unitary ensemble. Of course, the region of the $`\omega ^2`$-dependence is very narrow at weak magnetic fields.
The factor $`X^2\mathrm{ln}^2X`$ in Eq. (4) is proportional to the square of the flux through a segment of the wire limited by $`r_X`$, Eq. (5), and resembles the Mott law for the ac conductivity, $`\sigma (\omega )\propto \omega ^2\mathrm{ln}^2\omega `$. Using the standard perturbation theory one can understand that $`r_X`$ is the length at which the magnetic field considerably changes correlations and so its presence in Eq. (8) is natural.
The final result, Eq. (4), is essentially non-perturbative and its derivation is not simple. Let us now outline the scheme of the computation.
The correlation function $`p_{\mathrm{\infty }}\left(r\right)`$ is extracted from the low-frequency limit of the density-density correlation function which, in its turn, is represented as a functional integral over $`8\times 8`$ supermatrices $`Q`$
$$p_{\mathrm{\infty }}\left(r\right)=\underset{\omega \to 0}{\mathrm{lim}}\left(-i\omega Y_\omega \left(r\right)\right)$$
(6)
$$Y_\omega \left(r\right)=2\pi ^2\nu ^2{\displaystyle \int Q_{24}^{12}\left(0\right)Q_{42}^{21}\left(r\right)\mathrm{exp}\left(-F\left[Q\right]\right)DQ},$$

where $`Q_{24}^{12}`$ and $`Q_{42}^{21}`$ are matrix elements of the supermatrix $`Q`$. The free energy functional $`F\left[Q\right]`$ is written in 1D in the presence of a magnetic field as
$$F[Q]=\frac{\pi \nu _1}{8}\int \mathrm{Str}\left[D_0\left(\nabla Q(x)\right)^2+2i\omega \mathrm{\Lambda }Q(x)\right]dx.$$
(7)
In Eq. (7) we introduced the operator $`\nabla Q=\partial _xQ-(ie/\mathrm{\hbar }c)𝐀[Q,\tau _3]`$, the coordinate $`x`$ is chosen along the wire, and the standard notations for the supertrace $`\mathrm{Str}`$ and the matrices $`\tau _3`$ and $`\mathrm{\Lambda }`$ are used . Choosing the gauge $`𝐀=(Hy,0,0)`$ and using the notations introduced below Eq. (4) one rewrites the functional $`F`$ as
$$F\left[Q\right]=F\left[Q\right]_{H=0}+X^2\left(32L_{\mathrm{co}}\right)^{-1}\int \mathrm{Str}[Q,\tau _3]^2dx.$$
(8)
Calculation of the one-dimensional functional integral in Eq. (6) is performed using the transfer matrix technique. The Fourier transform of the correlator $`Y_\omega `$ is written as the integral
$$Y_\omega \left(k\right)=2\pi ^2\nu _1\nu \int \mathrm{\Psi }Q_{24}^{12}(P_{k,42}+P_{-k,42})dQ$$
(9)
of the product of the scalar and the matrix functions $`\mathrm{\Psi }(Q)`$ and $`P(Q)`$, representing the partition functions of the segments $`x<0`$ and $`x>0`$, respectively. Due to the supersymmetry one has $`\int \mathrm{\Psi }(Q)dQ=1`$.
Discretizing the wire and using Eqs. (7, 8) one derives recurrence relations for the functions $`\mathrm{\Psi }`$ and $`P`$ on neighboring sites. In the continuum limit, these relations yield effective “Schrödinger equations.” The corresponding Hamiltonian contains the Laplacian $`\mathrm{\Delta }_Q`$ acting in the space of the matrix elements . The functions $`\mathrm{\Psi }`$ and $`P`$ have the symmetry of the states with the zero and the first “angular momentum,” respectively. In a schematic form, one has
$`\left[-\mathrm{\Delta }_{0Q}+X^2(\lambda _{1c}^2-\lambda _c^2)-\stackrel{~}{\omega }\mathrm{Str}(\mathrm{\Lambda }Q)\right]\mathrm{\Psi }`$ $`=`$ $`0,`$ (10)
$`\left[2ikL_{\mathrm{co}}-\mathrm{\Delta }_{1Q}+X^2(\lambda _{1c}^2-\lambda _c^2)-\stackrel{~}{\omega }\mathrm{Str}(\mathrm{\Lambda }Q)\right]P_k`$ $`=`$ $`Q\mathrm{\Psi },`$ (11)
where $`\mathrm{\Delta }_{0Q}`$ and $`\mathrm{\Delta }_{1Q}`$ are the projections of the Laplacian $`\mathrm{\Delta }_Q`$ on the states with the zero and first angular momentum, and $`\stackrel{~}{\omega }=i\omega L_{\mathrm{co}}^2/2D_0`$.
Equations (9)–(11), which solve the problem, are rather general. A certain parametrization of the $`Q`$-matrices should be chosen to perform explicit calculations. As soon as $`H\ne 0`$, the standard parametrization used in Refs. is not convenient, so we use the “magnetic parametrization” of Refs. , constructed for studying the crossover between the orthogonal and unitary ensembles in 0D. However, even this parametrization, when applied to Eqs. (10, 11), leads to extremely complicated partial differential equations.
Instead of trying to solve these equations exactly, we concentrate on the limit of weak magnetic fields. The localization length is determined by the poles of the function $`P`$ in the $`k`$-plane in the region of “free motion”, far away from the “barrier” given by the last term in Eq. (10). The positions of the poles $`k_n`$ are determined by the equation $`\epsilon _n(k_n)=0`$, where $`\epsilon _n(k)`$ are the eigenvalues of the operator entering the LHS of Eq. (10) corresponding to the eigenfunctions $`\phi _n(Q)`$. According to the general procedure developed in Ref. , one has no need to consider solutions at arbitrary $`\omega `$. Although the dependence on $`\omega `$ is necessary for computing different matrix elements, the poles can be found by simply putting $`\omega =0`$. This is natural because the localization properties are determined for finite samples at $`\omega =0`$ . Even then, we cannot find all solutions $`P_k\left(Q\right)`$ and corresponding poles $`k`$. However, in order to determine the function $`Y_\omega \left(r\right)`$ at large distances, we need only one state with the smallest non-zero $`k_0\left(X\right)`$. At zero magnetic field, such a state $`\phi _0\left(Q\right)`$ is well known to have a pole at $`k_0\left(0\right)=\pm i\left(4L_{\mathrm{co}}\right)^{-1}`$. This corresponds to the first term in Eq. (4). As soon as one applies a weak magnetic field, the state is distorted. This effect is not crucial: we find no corrections in the lowest order either to the localization length or to the pre-exponential.
The main result of the present work, namely, the second term in Eq. (4), can be obtained without complicated calculations. It turns out that, at arbitrarily weak magnetic field, an additional state $`\phi _1\left(Q\right)`$ with a smaller value $`k_1=\pm i\left(8L_{\mathrm{co}}\right)^{-1}`$ appears. It is this state that determines the behavior of $`p_{\mathrm{\infty }}\left(r\right)`$ in the limit $`r\to \mathrm{\infty }`$. Remarkably, the value $`k_1`$ does not depend on the magnetic field even in the limit $`X\to \mathrm{\infty }`$, when the state $`\phi _1`$ gives the main contribution at all distances.
The origin of the states $`\phi _0`$ and $`\phi _1`$ is apparent. An essential feature of the magnetic parametrization is a finite contribution arising due to the singularity of the Jacobian at $`\lambda _{1c}`$, $`\lambda _c\to 1`$, where $`\lambda _{1c}`$ and $`\lambda _c`$ are the “eigenvalues” corresponding to the cooperon degrees of freedom. This singularity is of the type $`\left(\lambda _{1c}-\lambda _c\right)^{-2}`$ and is usual in the supersymmetry technique. The procedure of regularization of the singularity has been developed for the 0D case in Refs. . In the 1D case, the contribution from the singular term gives the correlator of the unitary ensemble (in the limit $`\lambda _{1c}`$, $`\lambda _c\to 1`$ the cooperon degrees of freedom are frozen). In order to compensate this part at moderate fields, the regular contribution should contain both solutions $`\phi _0`$ and $`\phi _1`$. At $`X\to 0`$, the solution $`\phi _1`$ exactly compensates the singular term, so that the orthogonal part is given by $`\phi _0`$ only.
Having disregarded the term with the external frequency $`\omega `$, we arrive at a Hamiltonian with separated cooperon and diffuson variables. However, it is still too complicated, since the essential values of $`\lambda _c`$ and $`\lambda _{1c}`$ are such that $`\lambda _{1c}`$, $`\lambda _c\sim 1`$. This difficulty is avoided by considering the derivative over the magnetic field, $`Y_X^{\prime }`$. For the quantity $`Y_X^{\prime }`$, in contrast, large $`\lambda _{1c}`$ and $`\lambda _{1d}`$ become important, which substantially simplifies the “Hamiltonian”
$`\mathcal{H}=-\lambda _{1d}^2\partial _{\lambda _{1d}}^2-\lambda _{1c}^2\partial _{\lambda _{1c}}^2+2\lambda _{1c}\partial _{\lambda _{1c}}+X^2\lambda _{1c}^2.`$
The solution $`\mathrm{\Psi }`$ decaying at $`\lambda _{1c}\to \mathrm{\infty }`$ equals
$$\mathrm{\Psi }=(1+X\lambda _{1c})\mathrm{exp}(-X\lambda _{1c}),$$
(12)
and satisfies the boundary condition $`\mathrm{\Psi }(0)=1`$. The main contribution to the correlator $`Y_X^{\prime }`$ comes, within logarithmic accuracy, from the region $`1\ll \lambda _{1c}\ll 1/X`$, where the function $`\mathrm{\Psi }`$, Eq. (12), takes the form $`\mathrm{\Psi }=1-X^2\lambda _{1c}^2/2`$.
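As a consistency check on the cooperon sector of the “Hamiltonian” (with the operator signs as reconstructed in this transcription), one can verify numerically that $`\mathrm{\Psi }`$ of Eq. (12) is annihilated by $`-\lambda _{1c}^2\partial _{\lambda _{1c}}^2+2\lambda _{1c}\partial _{\lambda _{1c}}+X^2\lambda _{1c}^2`$; a small finite-difference sketch:

```python
import math

def psi(lam, X):
    # Eq. (12): Psi = (1 + X*lambda_1c) * exp(-X*lambda_1c)
    return (1.0 + X * lam) * math.exp(-X * lam)

def H_psi(lam, X, h=1e-4):
    """Cooperon part of the reconstructed 'Hamiltonian' applied to psi:
    -lam^2 psi'' + 2 lam psi' + X^2 lam^2 psi  (central finite differences)."""
    d1 = (psi(lam + h, X) - psi(lam - h, X)) / (2 * h)
    d2 = (psi(lam + h, X) - 2 * psi(lam, X) + psi(lam - h, X)) / h**2
    return -lam**2 * d2 + 2 * lam * d1 + X**2 * lam**2 * psi(lam, X)

# vanishes (up to discretization error) for any lam and X
for lam in (0.5, 2.0, 7.0):
    print(H_psi(lam, X=0.3))
```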
There are two terms in the expression for $`Y^{\prime }\sim \int dQQ_{24}^{12}(\mathrm{\Psi }_X^{\prime }P+\mathrm{\Psi }P_X^{\prime })`$, each of them having a different localization length. The function $`P`$ determined from
$$(-\lambda _{1d}^2\partial _{\lambda _{1d}}^2-\lambda _{1c}^2\partial _{\lambda _{1c}}^2+2\lambda _{1c}\partial _{\lambda _{1c}}+2ikL_{\mathrm{co}})P=Q$$
(13)
is given by $`P=\frac{1}{2}Q/(1+ikL_{\mathrm{co}})`$. This leads to a rapidly decaying contribution to the function $`Y`$. The main contribution at large distances comes from the function $`P_X^{\prime }`$ satisfying the equation
$$(\mathcal{H}+2ikL_{\mathrm{co}})P_X^{\prime }\approx -X\lambda _{1c}^2Q.$$
(14)
Recalling that $`Q\sim \lambda _{1c}\lambda _{1d}`$, we represent $`P_X^{\prime }`$ in the form $`P_X^{\prime }=XQ\lambda _{1c}^2f(\lambda _{1d})`$ and obtain
$$(-\lambda _{1d}^2\partial _{\lambda _{1d}}^2-2\lambda _{1d}\partial _{\lambda _{1d}}+2ikL_{\mathrm{co}})f\approx -1.$$
(15)
This equation is essentially the same as the one for the unitary ensemble, yielding the localization length $`L_{cu}`$. The pre-exponential of the correlator $`Y_X^{\prime }`$ is most easily estimated for $`k=0`$, when $`f=\mathrm{ln}\lambda _{1d}`$. Estimating the derivative of Eq. (9) as
$$Y_X^{\prime }\sim X\int ^{1/X}d\lambda _{1c}\int ^{1/\lambda _{1c}}d\lambda _{1d}\mathrm{ln}\lambda _{1d}\sim X\mathrm{ln}^2X,$$
(16)
we integrate the result over $`X`$ and use the fact that the correlator $`p_{\mathrm{\infty }}(r)`$ at vanishing magnetic field must coincide with that of the orthogonal ensemble. This leads us finally to Eq. (4).
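The quoted $`k=0`$ solution $`f=\mathrm{ln}\lambda _{1d}`$ can likewise be checked against the operator of Eq. (15) by finite differences (a sketch; the operator signs are as reconstructed in this transcription):

```python
import math

def f(lam):
    # proposed k = 0 solution of Eq. (15)
    return math.log(lam)

def lhs15(lam, h=1e-4):
    """-lam^2 f'' - 2 lam f', evaluated by central finite differences."""
    d1 = (f(lam + h) - f(lam - h)) / (2 * h)
    d2 = (f(lam + h) - 2 * f(lam) + f(lam - h)) / h**2
    return -lam**2 * d2 - 2 * lam * d1

# equals -1 (up to discretization error) for any lam > 0
for lam in (1.5, 4.0, 20.0):
    print(lhs15(lam))
```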
It is relevant to note that in our analysis we considered the limit $`\omega \to 0`$ at finite magnetic field $`H`$. In principle, this limit differs from the limit $`H=0`$, $`\omega \to 0`$ considered in Refs. . However, for the correlation function $`p_{\mathrm{\infty }}\left(r\right)`$, Eq. (1), the limit $`H\to 0`$ must give the same results as those obtained for $`H=0`$.
Although our analysis was performed for very large distances $`r`$, it leads us to believe that the entire range of fields between the orthogonal and unitary ensembles is governed by an interplay between the two localization lengths. On the other hand, one should keep in mind that Eq. (4) is not exact, in the sense that it remains to be seen how the localization lengths and the pre-exponentials change at arbitrary $`X`$.
The localization length $`L_c`$ enters directly such a physical observable as the hopping conductivity . However, previous theories as well as the interpretation of the experiment of Ref. assumed the presence of only one characteristic length. For the hopping conductivity in quasi-1D samples, accounting for the two localization lengths can be performed using Eqs. (2, 5). It is not difficult to understand that the activation law, Eq. (2), with $`L_c=L_{\mathrm{co}}`$, holds for temperatures $`\left(\nu _1r_X\right)^{-1}<T<\left(\nu _1L_c\right)^{-1}`$. In this regime, the change of the asymptotic behavior of the wave functions at very large distances is not essential. Nevertheless, the far asymptotics is extremely important for the calculation of the conductivity at lower temperatures $`T<\left(\nu _1r_X\right)^{-1}`$. Repeating the arguments of Refs. one comes to the conclusion that Eq. (2) can also be used for the description of the latter regime provided the temperature $`T_0`$ is replaced by $`T_0/2`$. This means that, experimentally, decreasing the temperature should lead to a crossover from the activation behavior, Eq. (2), to another activation behavior with the characteristic temperature $`T_0/2`$. Measurements at lower temperatures than those used in Ref. might help to check our predictions.
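The predicted change of the activation behavior can be sketched as a piecewise law for $`\mathrm{ln}\sigma (T)`$, continuous at a crossover temperature $`T_\times \sim \left(\nu _1r_X\right)^{-1}`$. The functional form and all constants below are an illustrative reading of the paragraph above, not formulas from the paper:

```python
def log_sigma(T, T0=1.0, T_cross=0.1):
    """Schematic ln(conductivity): activation with T0 above T_cross and
    with T0/2 below it, matched continuously at T_cross (all values illustrative)."""
    if T >= T_cross:
        return -T0 / T
    # constant offset enforcing continuity of the two branches at T_cross
    offset = -T0 / T_cross + T0 / (2.0 * T_cross)
    return offset - T0 / (2.0 * T)

# the activation slope halves below the crossover temperature
for T in (0.2, 0.1, 0.05):
    print(T, log_sigma(T))
```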
In conclusion, we have demonstrated that the far tail of the density-density correlator in disordered wires at arbitrarily weak magnetic field decays with the localization length of the unitary ensemble, $`L_{\mathrm{cu}}`$, which is twice as large as the length $`L_{\mathrm{co}}`$ of the main part. This means that there is no one-scale crossover between the limits $`H=0`$ and $`H=\mathrm{\infty }`$. An arbitrarily weak magnetic field drastically changes the decay of wave functions: the initial decrease with the localization length $`L_{\mathrm{co}}`$ persists only up to distances $`r_X\sim L_c\left|\mathrm{ln}X\right|`$ but is then followed by a decay with the length $`L_{\mathrm{cu}}`$. An increase of the magnetic field leads to a corresponding decrease of $`r_X`$, so that finally, at $`X\sim 1`$, wave functions decay everywhere with the length $`L_{\mathrm{cu}}`$. The behavior found can manifest itself in the hopping conductivity, which might be the simplest way to test our predictions experimentally.
# Dynamic electromagnetic response of three-dimensional Josephson junction arrays
## I. Introduction
Oscillators based on arrays of Josephson junctions operating in the voltage mode have been shown to be promising radiation sources in the mm- and sub-mm wavelength range. One of the most remarkable features of such systems is that the microwave power output increases with the number of array junctions while at the same time the linewidth of the generated high-frequency voltage oscillation decreases with this number . The latter phenomenon is interesting not only for the application of the arrays as microwave sources but also for their application as highly sensitive detectors. From this point of view, two- and three-dimensional arrays of Josephson junctions represent multi-junction interferometers which can be used, e.g., as magnetometers, gradiometers or particle detectors.
The capabilities of 2D- and 3D-Josephson arrays can be used successfully only if all array junctions operate in a coherent array mode, i.e., if they oscillate strongly phase-locked . The problem of phase-locking in 2D-arrays is presently the subject of intensive research, and great experimental and theoretical progress has been achieved using present-day integrated-circuit technology [1-3]. Since this technology is essentially two-dimensional, there are up to now no corresponding experimental and theoretical studies on 3D-arrays. However, 3D-arrays of Josephson junctions may be promising candidates for highly sensitive detectors which offer a qualitatively new capability: the three-dimensional reconstruction of the detected electromagnetic field, including phase information about the field variables. In view of recent developments in molecular beam epitaxy, the fabrication of 3D-arrays with appropriate junction parameters and array homogeneity seems possible in the near future.
In this paper we present theoretical results on the dynamical properties of a 3D-system of coupled Josephson junctions. A schematic drawing of the 3D-network is shown in Fig. 1. It consists of superconducting islands (large cubes) connected by Josephson junctions (smaller dashed cubes). This system is the simplest three-dimensional configuration that guarantees coherent operation, i.e., single-frequency operation, if it is DC biased parallel to one of the network’s axes (cf. Fig. 2). If a magnetic field is applied to the network in its voltage state, the six intrinsically coupled four-junction DC SQUIDs lying on the six faces of the cube generate field-dependent macroscopic current and voltage distributions which can easily be measured. From the AC Josephson effect and the flux quantization condition, the current and voltage distributions depend directly but nonlinearly on the strength and the orientation of the external field.
## II. Network equations
To get quantitative results on the network’s response function we consider a network built of junctions which can be described by the RCSJ model, e.g., externally shunted $`Nb/AlO_x/Nb`$ junctions. If other SIS-type junctions are used, however, the results remain qualitatively valid. Fig. 2 shows the equivalent circuit for the 3D-network, which is DC biased along the x-direction. It consists of Josephson junctions (crosses) connected by superconducting wires. The bias current $`I_B`$ is fed into and extracted from the network through ohmic resistors (which need not be equal). This method of biasing preserves well-defined boundary conditions for the network dynamics and removes some of the ambiguities due to the symmetry of the system. Since the bias current flows parallel to the x-axis, there are two groups of junctions with different dynamical behavior. The junctions lying parallel to the bias current (active junctions) switch into their voltage state for $`I_B>I_{c,A}(𝒇_{\text{ext}})`$, where $`I_{c,A}(𝒇_{\text{ext}})`$ is the critical current of the array, depending on the externally applied flux $`𝒇_{\text{ext}}=(f_x,f_y,f_z)`$. The junctions lying perpendicular to the bias current (passive junctions) in general do not switch into the voltage state but show librations (semirotations) of their Josephson phase differences. The amplitude of these librations is directly related to the strength and orientation of the external field. If the values of the ohmic resistors at the current input and output do not differ too much, the passive junctions remain in the zero-voltage state and the only frequency present in the system is the driving frequency defined by the active network junctions. This prevents chaotic or aperiodic dynamics of the junction phases.
In the temporal gauge the variables characterizing the dynamics of the junction network are the gauge invariant phase differences . If we denote the vector of the 12 network variables by $`𝝌(t)`$ and use the usual reduced units the system of coupled network equations can be written symbolically as
$$\beta \ddot{𝝌}+\dot{𝝌}+\mathrm{sin}𝝌=\lambda \left[L^{-1}𝝌+F𝒇_{\text{ext}}\right]+E𝒋_{\text{rf}}+G(𝒓_𝑩)I_B+D(𝒓_𝑩)\dot{𝝌},$$
(1)
where $`\lambda =\mathrm{\Phi }_0/(2\pi \mu _0i_ca)`$ is the magnetic penetration depth, $`L`$ the inductance matrix of the array, $`F`$ the matrix of the distribution of the external flux $`𝒇_{\text{ext}}`$ over the different network meshes, and $`E`$ the matrix which describes the distribution of the currents $`𝒋_{\text{rf}}`$ induced by the oscillating electric field vector of an incoming external wave. The elements of $`E`$ and $`F`$ depend on the orientation of the external electromagnetic field. $`G`$ and $`D`$ are the matrices which define the boundary conditions depending on the values of the eight input and output resistors $`𝒓_𝑩=\left\{r_{B,i}\right\}`$. The single-junction parameters are the McCumber parameter $`\beta `$, the critical current $`i_c`$ and the normal resistance $`r`$. $`\mathrm{\Phi }_0`$ is the magnetic flux quantum and $`a`$ the lattice spacing measuring the distance between the centers of adjacent superconducting islands. The inductance matrix $`L`$ includes all inductances present in the array and depends on the geometry of the network and the inductances of the single junctions. We determine the coefficients of $`L`$ by assuming an array geometry similar to that shown in Fig. 1 and by using junction inductances typical for externally shunted junctions. Changing the geometry of the network or the value of the junction inductances, however, influences our results only slightly.
The network equation (1) clearly shows the dependence of the time evolution of the network phases $`𝝌(t)`$ on the applied quasi-static external magnetic field, which can approximately be expressed as $`𝒇_{\text{ext}}=𝑩_{\text{ext}}(x,y,z)a^2/\mathrm{\Phi }_0`$, where $`𝑩_{\text{ext}}`$ is the magnetic field vector of the external field. The sensitivity of the network depends on the lattice spacing $`a`$ and reaches its maximum for vanishing magnetic coupling within the network, i.e., for infinite magnetic penetration depth $`\lambda \to \mathrm{\infty }`$. For $`\lambda \to 0`$, however, the network response vanishes totally. The sensitivity of the network dynamics with respect to distinct directions of the magnetic field, i.e., the anisotropy of the response function, can be manipulated by an appropriate choice of the input and output resistors.
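For orientation, the frustration $`𝒇_{\text{ext}}=𝑩_{\text{ext}}a^2/\mathrm{\Phi }_0`$ is easily evaluated. The cell size and field below (10 μm, 20 μT) are hypothetical values chosen only to show that frustrations of order unity are reachable:

```python
PHI_0 = 2.067833848e-15  # magnetic flux quantum in Wb

def frustration(B_ext, a):
    """f_ext = B_ext * a**2 / Phi_0 for one plaquette of lattice spacing a."""
    return B_ext * a**2 / PHI_0

# a = 10 um lattice spacing in a B = 20 uT field: about one flux quantum per cell
print(frustration(20e-6, 10e-6))
```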
The device possesses in principle two modes of operation: the DC mode with $`I_B<I_{c,A}`$ and the AC mode with $`I_B>I_{c,A}`$. For subcritical bias currents $`I_B<I_{c,A}`$, a constant external field induces constant loop currents in the meshes of the network, and the resulting current distribution within the array is a superposition of these loop currents with the bias current. The critical array current $`I_{c,A}(𝒇_{\text{ext}})`$ is a function of the strength and the orientation of the external field. However, because of the many degrees of freedom of the system, the response function is not unique: there exists in general, for any given $`𝑩_{\text{ext}}`$, a whole set of different equilibrium current distributions in the network on which the critical array current depends . By a detailed theoretical analysis it is possible to formulate selection rules which describe the field-dependent transitions between the different distributions. Although very interesting dynamical phenomena occur in the region of subcritical bias currents, the scenario becomes very complicated and will be presented elsewhere .
In the following we restrict our treatment to the AC mode of the network. For $`I_B>4i_c`$ the active array junctions show persistent voltage oscillations whose frequency is directly proportional to the voltage drop. If the constant voltage drop originating from the input and output resistors is subtracted, the averaged voltage drop across the array is for $`I_B>I_{c,A}`$ given by $`V_0=\frac{1}{4}<\sum _x\dot{\chi }_x(t)>`$, where the summation runs over the four active network junctions lying parallel to the x-axis and $`<\mathrm{\cdots }>`$ denotes time averaging over one oscillation period. Due to flux quantization, the network currents induced by the external field are converted by the network dynamics into oscillating loop currents in each network mesh. The voltage oscillation induced by these oscillating loop currents interferes with the voltage oscillation of the active junctions, so that the averaged voltage drop $`V`$ across the whole array becomes a function of the strength and the orientation of the external field.
## III. Results
By numerically integrating the network equation (1) we computed the macroscopic voltage response function of the 3D-network for various sets of junction and array parameters. In the following we present typical results. Fig. 3 shows a spherical plot of the voltage response function for $`\beta =0.5`$, $`\lambda =5`$, $`I_B=4.4i_c`$ and a homogeneous magnetic field $`𝑩_{\text{ext}}(x,y,z)=B_{0,\text{ext}}\widehat{𝒆}_{\theta ,\varphi }`$ which induces a flux $`\left|𝒇_{\text{ext}}\right|=B_{0,\text{ext}}a^2/\mathrm{\Phi }_0=3`$. Here $`\widehat{𝒆}_{\theta ,\varphi }`$ is a vector of unit length pointing in the direction given by the spherical coordinates $`\theta `$ and $`\varphi `$, and the origin is placed at the center of the 3D-array (cf. Fig. 3). The voltage response function is plotted in the form $`𝑽(\theta ,\varphi )=V_0\widehat{𝒆}_{\theta ,\varphi }=(V_x,V_y,V_z)`$, where $`V_0`$ is the macroscopic averaged voltage drop across the array measured in units of $`i_cr`$. The eight input and output resistors (cf. Fig. 2) all have the same value so that, corresponding to the symmetry of the network, the voltage response function shows a periodicity with period $`\pi /2`$ along each closed intersection curve with the x-z-, x-y- and y-z-plane, respectively, and a reflection symmetry with respect to each integer multiple of $`\pi /4`$.
In Fig. 4(a), intersection curves in the z-y-plane, i.e., $`\varphi =\pi /2`$, are plotted for $`\theta \in [\pi /2,3\pi /2]`$ and different values of $`\left|𝒇_{\text{ext}}\right|`$. The symmetry of the voltage response functions with respect to integer multiples of $`\pi /4`$ is clearly observable, and the sensitivity of the network reaches its maximum around integer multiples of $`\pi /2`$. For such values of $`\theta `$ the field vector of the external magnetic field lies perpendicular to one of the faces of the cube, and the induced currents in general become maximal in the direction parallel (and antiparallel) to the bias current. In this case the externally induced voltage drop also becomes maximal. The number of local maxima and minima of the voltage response function is strongly related to the applied flux $`\left|𝒇_{\text{ext}}\right|`$ and can be determined numerically.
The resolving power of 3D-SQUIDs with respect to the strength of external magnetic fields is comparable to the resolving power of a conventional 2D-SQUID, i.e., a SQUID loop with two junctions. The angular resolution of 3D-SQUIDs, however, is orders of magnitude better than for 2D-SQUIDs. For external magnetic fields inducing a flux $`|f_{ext}|`$ of order one, the transfer factor $`|\partial V/\partial \theta |`$ lies around $`0.02i_cr/rad`$, which is for typical junctions in the range of some $`10\mu `$V$`/rad`$, such that, e.g., an angle of $`10^{-2}rad`$ should be easily resolvable. The resolving power with respect to the strength and with respect to the direction of an external field can be further increased by using networks which consist of several cubes. In this case the resolving powers increase proportionally to the number of cubes.
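The quoted angular resolution follows from simple arithmetic: with the transfer factor $`|\partial V/\partial \theta |\approx 0.02i_cr/rad`$ from the text, an assumed $`i_cr=1`$ mV and an assumed voltage resolution of $`0.2\mu `$V (both hypothetical values) give an angle of about $`10^{-2}`$ rad:

```python
i_c_r = 1e-3               # assumed junction i_c * r in volts (hypothetical)
dV_dtheta = 0.02 * i_c_r   # transfer factor ~0.02 i_c r per radian (from the text)
delta_V = 0.2e-6           # assumed resolvable voltage change in volts (hypothetical)

delta_theta = delta_V / dV_dtheta  # smallest resolvable angle in rad
print(delta_theta)
```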
Fig. 4(b) shows for comparison the angular resolution of the 3D-junction array and of a corresponding conventional 2D-SQUID. Both configurations are assumed to have identical parameters. If the applied magnetic field lies perpendicular to the 2D-SQUID plane, i.e., $`\theta =\pi `$, the device is very insensitive to variations of $`\theta `$. This insensitivity, however, has advantages if only the strength of the magnetic field is to be measured. In contrast to this, the 3D-array’s response is very sensitive to variations of the direction of the external magnetic field. Therefore, the 3D-network has the advantage that the strength and the direction of the magnetic field can be measured simultaneously and very precisely.
Three-axis SQUIDs, which consist of three independently operating 2D-SQUIDs, are not directly comparable with 3D-SQUIDs. For three-axis SQUIDs the three independent voltage response functions are computationally postprocessed, and the information about the strength and orientation of the magnetic field vector is extracted computationally. In contrast to this, the single voltage response function of 3D-SQUIDs provides all the information at once and no further postprocessing is needed.
For increasing $`\left|𝒇_{\text{ext}}\right|`$ the number of local maxima and minima of the voltage response function grows. Fig. 5 shows density plots of the response function for a solid angle $`\mathrm{\Delta }\mathrm{\Omega }=\mathrm{\Delta }\theta \mathrm{\Delta }\varphi `$ with $`\mathrm{\Delta }\theta =\mathrm{\Delta }\varphi =2\pi /3`$, and (a) $`\left|𝒇_{\text{ext}}\right|=50`$ and (b) $`\left|𝒇_{\text{ext}}\right|=200`$. The network parameters are the same as for Figs. 3 and 4. The minima (black dots) and maxima (white areas) form significant patterns which are quite similar to optical interference patterns. However, according to Eq. (1) these patterns are a result of nonlinear interactions, and a trace of their nonlinear origin is the complex local structure occurring in Fig. 5(b). By evaluating the voltage response patterns, the strength of the external magnetic field and, up to the symmetry redundancy, the orientation of this field can be determined very precisely. The symmetry redundancy can be removed by an appropriate choice of the input and output resistors. In this case, however, the resistors possess slightly different values and the voltage response of the network becomes much more complicated and more difficult to interpret .
If the parameters of the array junctions and the dimensions of the 3D-network are chosen appropriately, the systems are also able to detect time-dependent electromagnetic fields with wavelengths typically in the mm-range. For time-dependent external fields the induced flux becomes time dependent, $`𝒇_{\text{ext}}=𝒇_{\text{ext}}(t)`$, and the currents $`𝒋_{\text{rf}}(t)`$ are induced by the oscillating electric field vector of the external wave (cf. Eq. (1)). In contrast to 2D-configurations, for which the influence of the oscillating magnetic field is in general negligible, for the 3D-network both field vectors contribute to the response function of the array if the lattice spacing $`a`$ is chosen appropriately. Fig. 6 shows current-voltage characteristics of the 3D-network for different fixed directions of the external microwave and array parameters $`\beta =0.5`$ and $`\lambda =5`$. The microwave is assumed to be linearly polarized parallel to the bias current, and the amplitudes of the externally induced flux and of the induced current $`𝒋_{\text{rf}}`$ are $`\left|𝒇_{\text{ext}}\right|=\left|𝒋_{\text{rf}}\right|=1`$. The frequency of the external microwave is $`\nu =0.2i_cr`$ in Fig. 6(a) and $`\nu =0.8i_cr`$ in Fig. 6(b), and $`\overline{I}_B=I_B/4`$ is the normalized bias current. In each figure the I-V-curves for the magnetic field directions $`(\theta ,\varphi )=(0,\pi /2)`$ (solid curve) and $`(\theta ,\varphi )=(\pi /4,\pi /2)`$ (dashed curve) are plotted and, for comparison, the I-V-curve for negligible influence of the magnetic field (dotted curve).
It is clearly observable that the I-V-characteristics depend on the direction of the incoming wave and that the magnetic part is indeed relevant. In general, the distribution of the Shapiro steps and their widths differ significantly from those of a single junction. In addition, for some characteristics the step plateaus show an unusual fine structure; the first, fifth and seventh steps of the left characteristics in Fig. 6(a) show such structures. According to Eq. (1), these fine structures are implied by the different contributions of the electric and the magnetic field to the network dynamics. We observed that on the Shapiro steps the network junctions first lock to the electric part of the external microwave and then, for increasing bias current, eventually also to the magnetic part. This can change the current distributions within the network and implies a slight decrease in the averaged voltage drop and therefore in the network frequency.
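The qualitative role of the DC bias and the rf drive can be illustrated with a single RCSJ junction — a drastic simplification of the 12-variable network equation (1); the integration scheme and all parameter values below are illustrative, not taken from the paper:

```python
import math

def avg_voltage(i_dc, beta=0.5, a_rf=0.5, nu=0.8, dt=0.01, n_steps=120_000):
    """Time-averaged voltage <dphi/dt> of a single RCSJ junction driven as
    beta*phi'' + phi' + sin(phi) = i_dc + a_rf*sin(nu*t)   (reduced units).
    Semi-implicit Euler integration; all parameters are illustrative."""
    phi, v, acc = 0.0, 0.0, 0.0
    half = n_steps // 2
    for n in range(n_steps):
        t = n * dt
        v += dt * (i_dc + a_rf * math.sin(nu * t) - v - math.sin(phi)) / beta
        phi += dt * v
        if n >= half:  # discard the transient before averaging
            acc += v
    return acc / (n_steps - half)

# for subcritical drive the phase only librates and the averaged voltage stays
# near zero; well above the critical current a finite averaged voltage appears
print(avg_voltage(0.2), avg_voltage(2.0))
```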
The 3D-network dynamics governed by Eq. (1) includes all information about the field variables of the external microwave. By a detailed analysis it is therefore possible to extract from the macroscopic current and voltage response functions of the network the information about the phase relationships within the incoming wave.
Assuming the simple model of a circularly polarized incoming microwave, we can show that the voltage response function of the 3D-network is directly affected by the helicity of the external field. The inset of Fig. 6(b) shows the I-V-characteristics near the critical array current $`I_{c,A}`$ for $`(\theta ,\varphi )=(0,\pi /2)`$ for a microwave that is linearly polarized in the x-direction (solid curve) and a microwave that is circularly polarized in the z-x-plane (dashed curve). The splitting between the critical array currents $`I_{c,A}`$ of the two modes lies in the range of several $`0.01i_cr`$, and the shapes of the characteristics differ significantly. This result already indicates that the 3D-network can operate phase-sensitively and that the modulations in the voltage response function caused by differently polarized external microwaves should be easily observable in experiments.
## IV. Conclusions
By presenting various theoretical results we have shown that 3D-networks of Josephson junctions represent ultrasensitive 3D-SQUIDs which can be useful for a number of different applications such as magnetic field sensors, video detectors or mixers. In particular, the capability of the networks to operate direction-sensitively and to detect incoming microwaves phase-sensitively is a novel quality and may allow the design of a new generation of superconducting devices.
## Acknowledgment
Financial support by the Forschungsschwerpunktprogramm des Landes Baden-Württemberg is gratefully acknowledged.
# Untitled Document
Table 1: Brightest Young Clusters (Age $`<30`$ Myr)
| no. | $`\mathrm{\Delta }`$ R.A. <sup>a</sup> | $`\mathrm{\Delta }`$ Dec <sup>a</sup> | chip | $`M_V`$ <sup>b</sup> | $`UB`$ | $`BV`$ | $`VI`$ | $`\mathrm{\Delta }V_{16}`$ <sup>c</sup> | Paper I # <sup>d</sup> |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | -22.77 | 23.15 | 2 | -13.92 | -0.61 | 0.02 | 0.49 | 1.75 | 605 |
| 2 | -36.17 | -3.29 | 1 | -13.81 | -0.61 | 0.06 | 0.33 | 3.66 | 405 |
| 3 | 0.00 | 0.00 | 2 | -13.68 | -0.72 | 0.06 | 0.06 | 1.95 | 442 |
| 4 | 0.38 | 0.60 | 2 | -12.90 | -0.67 | 0.05 | 0.07 | 1.96 | 450 |
| 5 | 39.86 | -33.25 | 3 | -12.71 | -0.76 | 0.13 | 0.13 | 2.19 | 142 |
| 6 | -34.61 | -1.51 | 1 | -12.55 | -0.64 | 0.00 | 0.48 | 2.38 | 430 |
| 7 | 24.79 | -2.30 | 3 | -12.54 | -0.79 | -0.03 | 0.03 | 2.16 | 418 |
| 8 | -26.26 | -20.19 | 1 | -12.40 | -0.68 | 0.14 | 0.38 | 2.34 | 208 |
| 9 | 23.24 | -53.92 | 3 | -12.38 | -0.73 | 0.39 | 0.34 | 3.02 | 89/90 ? |
| 10 | 43.12 | -1.60 | 3 | -12.35 | -0.61 | 0.14 | 0.69 | 1.79 | 428 |
| 11 | 23.73 | -53.94 | 3 | -12.34 | -0.28 | 0.84 | 0.67 | 2.67 | 89/90 ? |
| 12 | -35.45 | 3.91 | 1 | -12.32 | -0.71 | 0.00 | 0.12 | 2.54 | 481 |
| 13 | 22.86 | 0.91 | 3 | -12.21 | -0.86 | -0.06 | -0.06 | 2.08 | 455 |
| 14 | 23.31 | -54.02 | 3 | -12.20 | -0.79 | 0.37 | 0.28 | 3.10 | 89/90 ? |
| 15 | 34.48 | -8.75 | 3 | -12.16 | -0.57 | 0.28 | 0.73 | 1.84 | 342 |
| 16 | 42.39 | -2.31 | 3 | -12.14 | -0.54 | 0.24 | 0.71 | 2.09 | 417 |
| 17 | 35.20 | -39.61 | 3 | -12.11 | -0.73 | 0.06 | 0.04 | 2.77 | 120 ? |
| 18 | 26.76 | -4.62 | 3 | -12.03 | -0.67 | 0.25 | 0.31 | 2.28 | 389 |
| 19 | 35.26 | -39.69 | 3 | -11.98 | -0.79 | -0.06 | 0.03 | 2.91 | 120 ? |
| 20 | 23.15 | -53.59 | 3 | -11.95 | -0.65 | 0.53 | 0.46 | 3.34 | 89/90 ? |
| 21 | 20.82 | 13.63 | 2 | -11.82 | -0.81 | -0.06 | -0.06 | 1.88 | 534 |
| 22 | -35.42 | -11.72 | 1 | -11.78 | -0.69 | 0.04 | 0.27 | 2.17 | 302 |
| 23 | -13.70 | -21.54 | 4 | -11.78 | -0.71 | 0.09 | 0.19 | 2.18 | 200 |
| 24 | 18.42 | 13.87 | 2 | -11.77 | -0.56 | 0.09 | 0.59 | 1.76 | 537 |
| 25 | -14.23 | -18.00 | 4 | -11.75 | -0.70 | 0.26 | 0.42 | 2.25 | 236 |
| 26 | 21.75 | -54.27 | 3 | -11.75 | -0.55 | 0.31 | 0.73 | 2.05 | 88 |
| 27 | 1.92 | -14.91 | 3 | -11.72 | -0.58 | 0.09 | 0.56 | 2.12 | 265 |
| 28 | -10.47 | -10.69 | 2 | -11.70 | -0.72 | 0.09 | 0.18 | 2.38 | 313 |
| 29 | -36.10 | -3.01 | 1 | -11.69 | -0.79 | 0.09 | 0.56 | 4.08 | 405 ? |
| 30 | 22.87 | -54.46 | 3 | -11.67 | -0.69 | 0.14 | 0.14 | 2.94 | 86/87 ? |
| 31 | -36.89 | -10.75 | 1 | -11.66 | -0.69 | 0.04 | 0.45 | 2.76 | 317 |
| 32 | -4.24 | 32.20 | 2 | -11.66 | -0.77 | 0.03 | 0.12 | 2.07 | 690 |
| 33 | 22.91 | -54.21 | 3 | -11.63 | -0.67 | 0.17 | 0.09 | 3.32 | 86/87 ? |
| 34 | 1.33 | 7.05 | 2 | -11.61 | -0.24 | 0.64 | 0.84 | 2.57 | 503 ? |
| 35 | 37.37 | -13.40 | 3 | -11.60 | -0.55 | 0.20 | 0.59 | 2.21 | 282 |
| 36 | 2.22 | -58.50 | 4 | -11.58 | -0.25 | 0.56 | 1.15 | 2.24 | 61 |
| 37 | 39.18 | -9.07 | 3 | -11.57 | -0.61 | 0.11 | 0.45 | 1.72 | 336 |
| 38 | 1.37 | 6.82 | 2 | -11.50 | -0.07 | 0.82 | 1.00 | 2.95 | 503 ? |
| 39 | 22.86 | -54.52 | 3 | -11.48 | -0.68 | 0.15 | 0.24 | 3.10 | 86/87 ? |
Table 1: continued
| no. | $`\mathrm{\Delta }`$ R.A. <sup>a</sup> | $`\mathrm{\Delta }`$ Dec <sup>a</sup> | chip | $`M_V`$ <sup>b</sup> | $`UB`$ | $`BV`$ | $`VI`$ | $`\mathrm{\Delta }V_{16}`$ <sup>c</sup> | Paper I # <sup>d</sup> |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 40 | -37.36 | -9.04 | 1 | -11.46 | -0.70 | 0.06 | 0.17 | 2.61 | 338/339/340 ? |
| 41 | 21.56 | 0.06 | 3 | -11.42 | -0.67 | 0.05 | 0.38 | 1.89 | 443 |
| 42 | 3.31 | 14.22 | 2 | -11.41 | -0.64 | 0.09 | 0.48 | 2.62 | 538 |
| 43 | -24.56 | 26.51 | 2 | -11.40 | -0.65 | 0.03 | 0.61 | 2.16 | 640 |
| 44 | -23.15 | 19.54 | 2 | -11.38 | -0.31 | 0.58 | 1.15 | 1.92 | 561 |
| 45 | 6.51 | -58.40 | 4 | -11.38 | -0.63 | 0.22 | 0.20 | 2.39 | 62 |
| 46 | -37.62 | -9.23 | 1 | -11.37 | -0.79 | -0.19 | -0.15 | 2.65 | 338/339/340 ? |
| 47 | -35.67 | 3.74 | 1 | -11.37 | -0.68 | 0.05 | 0.12 | 2.42 | 481 |
| 48 | 41.61 | -12.97 | 3 | -11.37 | -0.47 | 0.23 | 0.65 | 2.03 | 285 |
| 49 | -22.94 | -15.73 | 1 | -11.34 | -0.56 | 0.28 | 0.32 | 2.39 | 253 |
| 50 | -35.31 | 3.96 | 1 | -11.33 | -0.80 | -0.05 | 0.22 | 3.13 | 485 |
<sup>a</sup> Following the convention used in Paper I, the coordinates are the offsets in <sup>′′</sup> from Object # 442, which is near the nucleus of NGC 4038. The equatorial coordinates of Object # 442 are R.A. = 12<sup>h</sup>01<sup>m</sup>52.97<sup>s</sup> and DEC = –18°52<sup>′</sup>08.29<sup>′′</sup> (J2000 coordinates in the coordinate frame of the Guide Star Catalog).
<sup>b</sup> Absolute magnitude in the $`V`$ band using a distance modulus of m–M = 31.41. The values are not corrected for extinction.
<sup>c</sup> $`\mathrm{\Delta }V_{16}`$ on the PC (i.e., chip 1) cannot be directly compared with $`\mathrm{\Delta }V_{16}`$ on the WFC (i.e., chips 2,3,4) since the pixel scale and PSF are different.
<sup>d</sup> Question marks indicate cases where a one-to-one matching was problematic, generally because of differences in spatial resolution between the Cycle 2 and Cycle 5 observations.
# Quantum state reconstruction in the presence of dissipation
## Abstract
We propose a realistic scheme to determine the quantum state of a single mode cavity field even after it has started to decay due to the coupling with an environment. Although dissipation destroys quantum coherences, we show that at zero temperature enough information about the initial state remains, in an observable quantity, to allow the reconstruction of its Wigner function.
Methods to reconstruct quantum states of light are of great importance in quantum optics. There have been several proposals using different techniques to achieve such reconstructions , amongst them the direct sampling of the density matrix of a signal mode in optical homodyne tomography , the tomographic reconstruction by unbalanced homodyning , and the direct measurement (quantum endoscopy) of the Wigner function of the electromagnetic field in a cavity or the vibrational state of an ion in a trap . It is well known that dissipation has a destructive effect in most of these schemes, and issues such as compensation of losses in quantum-state measurements have already been discussed in the literature .
In this contribution we present a novel method to reconstruct a quantum state even after the action of dissipation. We consider a single-mode high-$`Q`$ cavity where a nonclassical field state $`\widehat{\rho }(0)`$ is prepared and subsequently driven by a coherent pulse. Both processes are assumed to occur on a time scale much shorter than the decay time of the cavity. Then the field is allowed to decay. We will show below that by displacing the initial state we make its quantum coherences robust enough to allow its experimental determination despite the existence of dissipation.
The master equation in the interaction picture for a damped cavity mode at zero temperature and under the Born-Markov approximation is given by
$`{\displaystyle \frac{\partial \widehat{\rho }}{\partial t}}={\displaystyle \frac{\gamma }{2}}\left(2\widehat{a}\widehat{\rho }\widehat{a}^{\dagger }-\widehat{a}^{\dagger }\widehat{a}\widehat{\rho }-\widehat{\rho }\widehat{a}^{\dagger }\widehat{a}\right),`$ (1)
where $`\widehat{a}`$ and $`\widehat{a}^{\dagger }`$ are the annihilation and creation operators and $`\gamma `$ the decay constant. We can define the superoperators $`\widehat{J}`$ and $`\widehat{L}`$ by their action on the density operator
$$\widehat{J}\widehat{\rho }=\gamma \widehat{a}\widehat{\rho }\widehat{a}^{\dagger },\widehat{L}\widehat{\rho }=-\frac{\gamma }{2}\left(\widehat{a}^{\dagger }\widehat{a}\widehat{\rho }+\widehat{\rho }\widehat{a}^{\dagger }\widehat{a}\right).$$
(2)
The formal solution of (1) can be written as
$$\widehat{\rho }(t)=\mathrm{exp}\left[(\widehat{J}+\widehat{L})t\right]\widehat{\rho }(0)=\mathrm{exp}(\widehat{L}t)\mathrm{exp}\left[\frac{\widehat{J}}{\gamma }(1-e^{-\gamma t})\right]\widehat{\rho }(0).$$
(3)
We assume that the initial field $`\widehat{\rho }(0)`$ is prepared on a time scale much shorter than the decay time of the cavity $`\gamma ^{-1}`$. As soon as the field is generated, a coherent field $`|\alpha \rangle `$ is injected inside the cavity (also on a short time scale), displacing the initial state: $`\widehat{\rho }_\alpha =\widehat{D}(\alpha )\widehat{\rho }(0)\widehat{D}^{\dagger }(\alpha )`$. This procedure will enable us to obtain information about all the elements of the initial density matrix from the diagonal elements of the time-evolved displaced density matrix only. As diagonal elements decay much more slowly than off-diagonal ones, information about the initial state stored this way becomes robust enough to withstand the decoherence process. We will now show how this robustness can be used to obtain the Wigner function of the initial state after it has started to decay.
The diagonal matrix elements of $`\widehat{\rho }_\alpha (t)=\mathrm{exp}\left[(\widehat{J}+\widehat{L})t\right]\widehat{\rho }_\alpha `$ in the number state basis are
$$\langle m|\widehat{\rho }_\alpha (t)|m\rangle =\frac{e^{-m\gamma t}}{q^m}\sum _{n=0}^{\infty }q^n\left(\begin{array}{c}n\\ m\end{array}\right)\langle n|\widehat{\rho }_\alpha |n\rangle ,$$
(4)
where $`q=1-e^{-\gamma t}`$.
We note that if we multiply those elements by the function
$$\chi (t)=1-2e^{\gamma t}$$
(5)
and sum over $`m`$ we obtain
$$F=\frac{2}{\pi }\sum _{m=0}^{\infty }\chi ^m(t)\langle m|\widehat{\rho }_\alpha (t)|m\rangle =\frac{2}{\pi }\sum _{n=0}^{\infty }(-1)^n\langle n|\widehat{D}(\alpha )\widehat{\rho }(0)\widehat{D}^{\dagger }(\alpha )|n\rangle .$$
(6)
The expression above is exactly the Wigner function corresponding to $`\widehat{\rho }`$ (the initial field state) at the point specified by the complex amplitude $`\alpha `$. Therefore if we measure the diagonal elements of the dissipated displaced cavity field $`P_m(\alpha ;t)=\langle m|\widehat{\rho }_\alpha (t)|m\rangle `$ for a range of $`\alpha `$’s, the transformation in Eq. (6) will give us the Wigner function $`F`$ over this range. This is the main result of our paper; the reconstruction is made possible even under the normally destructive action of dissipation. We would like to stress that the identity in Eq. (6) means that the time dependence is completely cancelled, bringing out the Wigner function of the initial state.
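As a quick numerical illustration of the identity in Eq. (6) (this sketch is ours; the test state, truncation size, displacement amplitude, and decay times are illustrative assumptions), one can build the displaced photon statistics of a single-photon Fock state, damp the diagonal elements according to Eq. (4), apply the filter of Eq. (5), and compare with the known Wigner function of the one-photon state, $`W(\alpha )=(2/\pi )(4|\alpha |^2-1)e^{-2|\alpha |^2}`$:

```python
import numpy as np
from math import comb
from scipy.linalg import expm

N = 60                      # Fock-space truncation (assumed large enough for |alpha| ~ 1)
alpha = 0.8 + 0.3j          # phase-space point at which the Wigner function is sampled
gamma = 1.0                 # cavity decay rate (arbitrary units)

# annihilation operator and displacement operator D(alpha) on the truncated space
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
D = expm(alpha * a.conj().T - np.conjugate(alpha) * a)

# initial state |1><1| displaced by alpha: P0[n] = <n| D rho(0) D^dag |n>
psi = np.zeros(N)
psi[1] = 1.0
P0 = np.abs(D @ psi) ** 2

def decayed_diagonals(P0, gt):
    """Eq. (4): diagonal elements after zero-temperature decay, q = 1 - exp(-gt)."""
    q = 1.0 - np.exp(-gt)
    return np.array([np.exp(-m * gt) *
                     sum(comb(n, m) * q ** (n - m) * P0[n] for n in range(m, len(P0)))
                     for m in range(len(P0))])

def reconstructed_F(P0, gt):
    """Eqs. (5)-(6): filter the damped diagonals with chi(t) = 1 - 2 exp(gt)."""
    chi = 1.0 - 2.0 * np.exp(gt)
    return (2.0 / np.pi) * np.sum(chi ** np.arange(len(P0)) * decayed_diagonals(P0, gt))

# analytic Wigner function of the Fock state |1> (depends only on |alpha|)
W_exact = (2.0 / np.pi) * (4.0 * abs(alpha) ** 2 - 1.0) * np.exp(-2.0 * abs(alpha) ** 2)
F_values = [reconstructed_F(P0, gamma * t) for t in (0.3, 0.7)]
```

Both values of $`F`$ agree with the analytic Wigner function to machine precision, independently of the decay time, which is the cancellation of the time dependence noted above.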
One way of determining $`P_m(\alpha ;t)`$ is by injecting atoms into the cavity and measuring their population inversion as they exit after an interaction time $`\tau `$ much shorter than the cavity decay time. We may use three-level atoms in a cascade configuration with the upper and the lower level having the same parity. In this case the population inversion is given by
$$W(\alpha ;t+\tau )=\sum _{n=0}^{\infty }P_n(\alpha ;t)\left[\frac{\mathrm{\Gamma }_n^2}{\delta _n^2}+\frac{\lambda ^2(n+1)(n+2)}{\delta _n^2}\mathrm{cos}\left(2\delta _n\tau \right)\right],$$
(7)
where $`\mathrm{\Gamma }_n=\left[\mathrm{\Delta }+\chi (n+1)\right]/2`$, $`\delta _n^2=\mathrm{\Gamma }_n^2+\lambda ^2(n+1)(n+2)`$, $`\mathrm{\Delta }`$ is the atom-field detuning, $`\chi `$ is the Stark shift coefficient, and $`\lambda `$ is the coupling constant. In the case of $`\mathrm{\Delta }=0`$ (two-photon resonance condition), $`\chi =0`$, and for strong enough fields, for which the approximation $`\left[(n+1)(n+2)\right]^{1/2}\approx n+3/2`$ is valid, the population inversion reduces to
$$W(\alpha ;t+\tau )=\sum _{n=0}^{\infty }P_n(\alpha ;t)\mathrm{cos}\left(\left[2n+3\right]\lambda \tau \right).$$
(8)
By inverting the Fourier series in Eq. (8) we obtain for $`P_n(\alpha ;t)`$
$$P_n(\alpha ;t)=\frac{2\lambda }{\pi }\int _0^{\pi /\lambda }d\tau \,W(\alpha ;t+\tau )\mathrm{cos}\left(\left[2n+3\right]\lambda \tau \right).$$
(9)
We need a maximum interaction time $`\tau _{max}=\pi /\lambda `$ much shorter than the cavity decay time. This condition implies that we must be in the strong-coupling regime, i.e. $`\lambda \gg \gamma `$.
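The inversion in Eq. (9) can be checked on synthetic data: generate an idealized inversion signal from Eq. (8) for a known photon-number distribution (randomly chosen here, purely for illustration) and project it back onto the cosines. The orthogonality of $`\mathrm{cos}([2n+3]\lambda \tau )`$ on $`[0,\pi /\lambda ]`$ makes the recovery exact up to quadrature error; the grid size and coupling constant below are arbitrary choices:

```python
import numpy as np

lam = 1.0                                  # coupling constant (arbitrary units)
n_max = 12                                 # highest Fock state in the toy distribution
rng = np.random.default_rng(1)
P_true = rng.random(n_max)
P_true /= P_true.sum()                     # toy photon-number distribution

# Eq. (8): idealized inversion signal sampled on a fine grid of interaction times
tau = np.linspace(0.0, np.pi / lam, 20001)
W = sum(P_true[n] * np.cos((2 * n + 3) * lam * tau) for n in range(n_max))

def trap(y, h):
    """Plain trapezoidal rule (kept explicit to avoid numpy version differences)."""
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

# Eq. (9): project W onto cos([2n+3] lambda tau) to recover the distribution
h = tau[1] - tau[0]
P_rec = np.array([(2.0 * lam / np.pi) * trap(W * np.cos((2 * n + 3) * lam * tau), h)
                  for n in range(n_max)])
```

Because all product cosines complete an integer number of half-periods on the interval, the trapezoidal quadrature here is exact to machine precision and `P_rec` reproduces `P_true`.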
Our scheme is easily generalized to other ($`s`$-parametrized ) quasi-probability distributions , given by
$$F(\alpha ;s)=\frac{2}{\pi (1-s)}\sum _{n=0}^{\infty }\left(\frac{s+1}{s-1}\right)^n\langle n|\widehat{\rho }_\alpha |n\rangle ,$$
(10)
by choosing
$$\chi (s;t)=1+\frac{2e^{\gamma t}}{s-1}.$$
(11)
In conclusion, we have presented a novel technique to reconstruct the Wigner function of an initial nonclassical state at times when the field would normally have lost its quantum coherence . Reconstruction approaches do not usually take into account the effect of losses. The crucial point of our method is the driving of the initial field immediately after preparation, which is not only used to cover a region in phase space but also to store quantum coherences in the diagonal elements of the time-evolved displaced density matrix, making them robust. In other words, we have shown that the initial displacement transfers to any initial state the robustness of a coherent state against dissipation.
The possibility of reconstructing quantum states at any time opens up potential applications in quantum computing. For instance, this method could be used in a scheme to refresh the state of a quantum computer in order to avoid dissipation-induced errors.
###### Acknowledgements.
One of us, H.M.-C., thanks W. Vogel for useful comments. This work was partially supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), Brazil, Consejo Nacional de Ciencia y Tecnología (CONACyT), México, Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Brazil, and International Centre for Theoretical Physics (ICTP), Italy.
# The CFH Optical PDCS Survey (COP): First results
## 1 The data
### 1.1 Description of the survey
We have made a spectroscopic follow-up of 10 PDCS lines of sight including 15 clusters. These PDCS clusters (Lubin et al. 1996) are optically selected cluster candidates. We have measured about 700 redshifts in 6 nights at CFH with the MOS spectrograph. The expected redshift of these clusters is around 0.4. Two areas of the sky are particularly well covered: 4 lines of sight around 9h and 3 lines of sight around 13h. The sampled redshift range is z = [0, 0.9].
### 1.2 Are PDCS clusters real?
The first results of this survey show that more than 60% of the candidate clusters are real gravitationally bound structures with velocity dispersions ranging from 600 km/s to 1500 km/s.
## 2 Results
### 2.1 Dynamics of the clusters
Using the 6 best-sampled clusters, we have applied the same techniques as were used for the nearby ENACS clusters (Adami et al. 1998). We plot the variation of the normalized galaxy velocity dispersion versus absolute magnitude and morphological type (fig 1; as a first approximation we assume that the emission-line galaxies are a mix of early and late spirals and that the galaxies showing only absorption lines are a mix of ellipticals and S0s). The open symbols are the results for nearby clusters (z$`\sim `$0.05: Adami et al. 1998) and the filled triangles are for the PDCS clusters (z$`\sim `$0.4). We see that the two distributions are essentially the same: the evolution of the internal dynamics of the clusters between z$`\sim `$0.05 and z$`\sim `$0.4 seems to be negligible. At z$`\sim `$0.4, as at z$`\sim `$0.05, the emission-line galaxies are an infalling population while the absorption-line galaxies seem to be virialized, and the galaxies follow the energy equipartition law (thick line in fig 1, left). The conclusion is that the formation epoch of the clusters probably lies at redshifts significantly greater than 0.4, with the clusters continuing to evolve afterwards and late-type galaxies still infalling at low redshifts.
### 2.2 Periodicity of the structures in the COP survey
The 2 well-sampled COP areas (9h and 13h) allow a study of the periodicity along the line of sight. The structures are defined exactly as in the ENACS (Katgert et al. 1996). At 9h we find a periodicity of 90 $`\pm `$ 2 Mpc, and at 13h the periodicity is 143 $`\pm `$ 10 Mpc. Comparing these results with Broadhurst et al. (1990) (128 Mpc in another direction), we tentatively conclude that periodicities in the structure distribution are in agreement with the ”Web” representation of the Universe, and that the value of the periodicity depends on the line of sight.
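As a toy illustration of how such a periodicity can be extracted (the numbers below are synthetic, not the COP data), one can place structures at a regular comoving spacing with some jitter and estimate the period from the nearest-neighbor separations; with gaps in the detections a pair-separation histogram or a power spectrum would be needed instead:

```python
import numpy as np

rng = np.random.default_rng(2)
period = 90.0                                   # Mpc, assumed value for the toy data
k = np.arange(12)                               # twelve consecutive structures on one line of sight
positions = period * k + rng.normal(0.0, 3.0, size=k.size)   # comoving distances, 3 Mpc jitter

# with every structure detected, the mean nearest-neighbor separation estimates the period
separations = np.diff(np.sort(positions))
period_est = separations.mean()
```

The estimate recovers the input period to within the jitter level, which illustrates why the quoted uncertainties (2 Mpc and 10 Mpc) scale with the scatter of the structure positions.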
# Experimental investigation of universal parametric correlators using a vibrating plate
## Abstract
The parametric variation of the eigenfrequencies of a chaotic plate is measured and compared to random matrix theory using recently calculated universal correlation functions. The sensitivity of the flexural modes of the plate to pressure is used to isolate this symmetry class of modes and simplify the data analysis. The size of the plate is used as the external parameter and the eigenvalues are observed to undergo one or two oscillations in the experimental window. The correlations of the eigenvalues are in good agreement with statistical measures such as the parametric number variance, the velocity autocorrelation, and the intralevel velocity autocorrelation derived for the Gaussian Orthogonal Ensemble of random matrix theory. Our results show that the theory can also be applied to wave systems other than quantum systems.
It has been widely recognized that the eigenvalues of a quantum system show universal features that depend only on the presence or absence of chaos in the corresponding classical or ray system . For example, it has been established that the eigenvalues of integrable systems display Poisson statistics, and chaotic systems with time-reversal symmetry show statistics which are similar to the Gaussian Orthogonal Ensemble (GOE) of Random Matrix Theory (RMT) . The universality has been confirmed using not only quantum systems, but also systems which obey an elastomechanical wave equation . The difference in the statistical properties has been recognized to be due to the presence of level repulsion, which manifests itself as avoided crossings when a system parameter is varied. However, it was postulated only recently that the resulting fluctuations of the energy levels also show universal properties which are independent of the nature of the parameter .
When a quantum system is subjected to a perturbation via an external parameter $`X`$, the eigenvalues change and oscillate as a function of $`X`$. Using supersymmetry techniques, Simons and Altshuler were able to calculate the correlations as a function of external parameter for energy levels with Wigner-Dyson distributions of RMT. The agreement of their analytical results with numerical simulations of disordered metallic rings and a chaotic billiard led them to the remarkable conjecture that correlations in the eigenvalues show universal features which are independent of the nature of the perturbation after appropriate normalization. Here the proper rescaling required to compare across different systems is given by expressing the energy $`E`$ in units of the local mean level spacing $`\mathrm{\Delta }`$, and the parameter in units of the square root of the local mean squared slope:
$$\epsilon =E/\mathrm{\Delta },\qquad x=\sqrt{\left\langle \left(\frac{d\epsilon }{dX}\right)^2\right\rangle }X,$$
(1)
where $`\epsilon `$ is the normalized energy, and $`x`$ is the rescaled external parameter.
The conjecture was tested further with numerical simulations of a Hydrogen atom in a magnetic field, where agreement was found over a certain parameter range, but systematic deviations were also found because the system is only partially chaotic . Although some of the correlations have been indirectly tested in the conductance fluctuations of electrons in ballistic cavities , and also in microwave cavities and quartz blocks where bouncing ball-like modes complicates the analysis, there has been no report of a direct experimental test of their universality.
In this paper, we report direct experimental evidence for the universality of the above mentioned parametric correlators. A freely vibrating plate with the shape of a Sinai-Stadium is used and the smooth motion of the eigenfrequencies is measured as a function of the size of the plate. Two classes of uncoupled modes exist in an isotropic plate: flexural, for which the displacement is perpendicular to the plane of the plate, and in-plane, for which the displacement is in the plane of the plate . We are able to experimentally isolate the flexural modes and therefore can simplify the analysis by not having to consider problems associated with mixed symmetries. The flexural modes obey a scalar equation for the displacement $`W`$ perpendicular to the plate:
$$(^2k^2)(^2+k^2)W=0,$$
(2)
where $`k`$ denotes the wavenumber. The dispersion relation is given by
$$f=\frac{k^2}{2\pi }\sqrt{\frac{E_Yh^3}{12\rho (1-\nu ^2)}},$$
(3)
where $`f`$ is the frequency, $`h`$ is the thickness of the plate, $`\rho `$ is the density, $`E_Y`$ is Young’s modulus, and $`\nu `$ is Poisson’s ratio. Any solution $`W`$ of Eq. (2) can be written as a superposition of two modes, $`W_1`$ and $`W_2`$, where
$$(\nabla ^2+k^2)W_1=0\quad \text{and}\quad (\nabla ^2-k^2)W_2=0.$$
(4)
$`W_1`$ is a solution to the Helmholtz equation with free boundary conditions. $`W_2`$ is an exponential mode or boundary mode. The boundary modes are responsible for only about one percent of the density of states and do not appear to alter the universality of the eigenvalues. Equation (2) is an approximation to the full elastomechanical wave equation in the limit where the wavelength is much larger than the thickness of the plate. The typical wavelength in our experiments is 8 mm, and the thickness of the plate is 2 mm. In this case, Eq. (2) is a good approximation.
The eigenvalue statistics is first confirmed to agree very well with GOE statistics using traditional measures such as the spacing statistics $`P(s)`$ and the spectral rigidity $`\mathrm{\Delta }_3(L)`$, demonstrating the quality of the data and the universality of the geometry. We then compare statistical observables of the eigenvalue motion as a function of the parameter to analytical calculations. In particular, we find that the data agrees with calculations of the parametric number variance $`v(x)`$ by Simons et al. and shows a linear behavior for small $`x`$ which is different from semiclassical calculations . To investigate correlations in energy-parameter space, comparisons are made with the exact calculations for the intralevel velocity autocorrelation $`\stackrel{~}{c}(\omega ,x)`$ which describes the correlations between the rates of change of eigenvalues separated in energy by $`\omega `$ and in parameter by $`x`$ . Good agreement is observed for selected values of $`\omega `$ and over all $`x`$. Another statistical measure is the velocity autocorrelation $`c(x)`$ which correlates the rate of change of eigenvalue as a function of parametric separation $`x`$. For this quantity we find that the data is in good agreement at small and intermediate values of the parameter. Deviations are observed for larger $`x`$ where statistical sampling is poor. Combined, these results provide the first experimental evidence for the universality of a broad class of the statistical observables of parametric level motion that have been studied theoretically.
In the experiments we use an Aluminium plate of thickness 2.0 mm, machined in the shape of a quarter Sinai-Stadium with radii 40 mm and 70 mm (see Fig. 1). The plate rests on three piezoelectric transducers, of which one is a transmitter and two are receivers. We measure acoustic transmission spectra of the plate using a HP 4395A network analyzer. A sample of the transmission signal at different values of the parameter is shown in Fig. 1. The amplitudes of the resonances depend on the location of the transducers but the eigenfrequencies are unchanged. The plate is kept in a temperature controlled oven held at 300 K to within 1 mK. A vacuum system ensures that the air pressure is below $`10^{-1}`$ Torr, which is low enough that air damping of the plate is insignificant compared to other damping mechanisms. Of the two classes of modes, flexural modes are more sensitive to the presence of air damping than in-plane modes because of the flexural out-of-plane oscillation. We find that going from vacuum to atmospheric pressure, the $`Q`$-factor of the flexural modes decreases by at least a factor of 3, whereas the $`Q`$-factor for the in-plane modes is unchanged.
We first measure the transmission spectrum of the plate, then decrease the size of the plate by sanding off material at the longest straight edge, as indicated in Fig. 1. The amount of material removed is determined by measuring the mass of the plate to within $`5\times 10^{-5}`$ grams. Approximately $`5\times 10^{-2}`$ grams is removed each time and in all 6% of the material is removed in 63 steps. The spectrum is measured in the interval between 100 kHz and 300 kHz. Periodically, the spectrum is also measured at 1 atmosphere to identify the flexural modes. After this separation, we find approximately 300 resonances, of which 25 drift out of the frequency window due to the overall increase in frequency when the size of the plate is decreased. A resonance frequency can be determined to within 0.5 Hz by fitting the resonance peak to a Breit-Wigner function. We are confident that all eigenfrequencies in the frequency window are detected, because it is impossible for the amplitude of a resonance peak to lie below our detection level for all 63 values of the parameter. The absence of interaction of the flexural modes with the in-plane modes is checked to within experimental accuracy by noting a lack of interaction at flexural-in-plane encounters (see Fig. 1).
In the data analysis, the implementation of the normalization or unfolding given by Eq. (1) is of great importance. Since the cumulative level density or staircase function for a freely vibrating plate was recently calculated , both the mean level spacing and the mean squared velocity are known analytically. This knowledge can be directly applied to our data, which makes the data analysis very clean from a theoretical viewpoint.
We start by presenting our result for the distribution of nearest neighbor spacings $`P(s)`$ and the spectral rigidity $`\mathrm{\Delta }_3(L)`$ which are shown in Fig. 2. We find complete agreement with RMT for both observables. Fully chaotic systems are very rare and most chaotic geometries have regions in phase space which are integrable. The Sinai-Stadium geometry is no exception and is known to have small regions of integrability. However if these regions are very small, they can support an integrable level only at very high frequencies, and therefore complete agreement with GOE is expected and observed.
We now present the main results which is the correlations in the parametric variation of the eigenfrequencies. The parametric number variance $`v(x)`$ is defined as:
$$v(x)=\left\langle \left(n(\epsilon ,x^{\prime })-n(\epsilon ,x^{\prime }+x)\right)^2\right\rangle ,$$
(5)
where the average is over the parameter $`x^{\prime }`$ and energy $`\epsilon `$. Here, $`n(\epsilon ,x)`$ is the staircase function which counts the number of energy levels at fixed $`x`$ with energy lower than $`\epsilon `$. The variance measures the difference in the number of eigenvalues which are below a fixed value of normalized energy $`\epsilon `$. Therefore this quantity measures the collective motion of levels under parametric change . The comparison of the data with the theory is shown in Fig. 3. The $`v(x)`$ calculated from the data grows linearly from zero and has a slope of $`0.8\pm 0.01`$, which is in excellent agreement with the calculated value of $`\sqrt{2/\pi }\approx 0.797`$ by Simons et al. . A saturation is expected at large values of $`x`$ and therefore the $`v(x)`$ becomes sub-linear at higher $`x`$.
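The estimator behind Eq. (5) is simple bookkeeping on the level staircase. The sketch below evaluates it on toy unfolded trajectories (sinusoidal level motion with random phases; an illustration of the counting procedure, not a GOE calculation, and all parameters are our own choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n_lev = 200
xs = np.linspace(0.0, 2.0, 81)                 # rescaled parameter grid
phases = rng.uniform(0.0, 2.0 * np.pi, size=n_lev)

# toy unfolded levels: unit mean spacing, smooth order-one oscillation over x ~ 1
eps = np.arange(n_lev)[None, :] + 0.3 * np.sin(2.0 * np.pi * xs[:, None] + phases[None, :])

energies = np.linspace(50.0, 150.0, 101)       # sample energies away from the spectrum edges

def staircase(row, e):
    """n(e, x'): number of levels below energy e at one parameter value."""
    return np.searchsorted(np.sort(row), e)

def v_of(k):
    """Estimator of Eq. (5) at a separation of k parameter-grid steps."""
    sq = [(staircase(eps[i], e) - staircase(eps[i + k], e)) ** 2
          for i in range(len(xs) - k) for e in energies]
    return np.mean(sq)

v = np.array([v_of(k) for k in range(20)])
```

For this toy model $`v(0)=0`$ and $`v(x)`$ grows from zero with the parameter separation; applied to the unfolded eigenfrequencies of the plate, the same estimator produces the curve compared with theory in Fig. 3.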
However, $`v(x)`$ does not give an indication of the correlations in the oscillations of the eigenvalues with the parameter $`x`$. To investigate such correlations, a new set of measures are required that study the rate of change of eigenvalue as a function of parameter . One example is the intralevel velocity autocorrelation $`\stackrel{~}{c}(\omega ,x)`$, which correlates velocities which are separated by a distance $`x`$ in parameter space and by a distance $`\omega `$ in energy:
$$\stackrel{~}{c}(\omega ,x)=\frac{\left\langle \sum _{n,m}\delta (\epsilon _n(x^{\prime })-\epsilon _m(x^{\prime }+x)-\omega )\frac{\partial \epsilon _n(x^{\prime })}{\partial x^{\prime }}\frac{\partial \epsilon _m(x^{\prime }+x)}{\partial x^{\prime }}\right\rangle }{\left\langle \sum _{n,m}\delta (\epsilon _n(x^{\prime })-\epsilon _m(x^{\prime }+x)-\omega )\right\rangle }$$
(6)
The average is over the parameter $`x^{\prime }`$. Using the supersymmetric nonlinear $`\sigma `$ model developed by Efetov , Simons and Altshuler derived an integral representation for the intralevel velocity autocorrelation. Another correlation is the velocity autocorrelation $`c(x)`$ which correlates velocities which belong to the same energy level:
$$c(x)=\left\langle \frac{\partial \epsilon (x^{\prime })}{\partial x^{\prime }}\frac{\partial \epsilon (x^{\prime }+x)}{\partial x^{\prime }}\right\rangle $$
(7)
The brackets denote an average over the parameter $`x^{\prime }`$ and the energy $`\epsilon `$. For this correlator no analytical results exist for intermediate values of $`x`$. Therefore we compare our result for $`c(x)`$ to a curve calculated by Mucciolo using large GOE matrices which agrees with the analytical results in the limits of large and small $`x`$.
We first present the result for the velocity autocorrelation $`c(x)`$ (see Fig. 4). For values of $`x`$ smaller than 1, we find good agreement with the numerical RMT curve . At larger values of $`x`$, however, we see a deviation which is outside the experimental error bars. The shape of the correlation function indicates that the slope $`\partial \epsilon (x)/\partial x`$ changes smoothly and has opposite signs near $`x=0.5`$, because the parameter $`x`$ has been normalized so that $`x=1`$ corresponds to approximately one oscillation. This behavior of the correlation function indicates that, locally, there is a particular length scale over which eigenfrequencies oscillate. The distribution of velocities $`\partial \epsilon (x)/\partial x`$ of the eigenvalues should be a Gaussian with a mean value of zero. The data is shown in the inset to Fig. 4. The data is close to a Gaussian, but is slightly asymmetric, with more of the small-magnitude velocities being negative than positive. We emphasize that the mean slope is zero, indicating that this discrepancy does not originate in the normalization of the eigenfrequencies. We believe that the deviation is due to the finite data set. It appears that the correlations are very robust and give good agreement even if the velocity distribution is not exactly Gaussian.
To make a more stringent test of the correlations, we compare our data with the intralevel velocity autocorrelation $`\stackrel{~}{c}(\omega ,x)`$ for $`\omega =0.25`$, $`\omega =0.50`$, and $`\omega =1.0`$ as shown in Fig. 5. We compare our data to a numerical evaluation of the integral representation of this correlator . In calculating these quantities we have averaged over a small energy window of $`\delta \omega =0.03`$ which is also done in the theoretical calculations. The occurrence of the peaks in the correlation functions and the systematic increase of the value of $`x`$ where the peak occurs can be understood from the fact that near an avoided crossing, one has to go across by nearly as much along the normalized energy axis as along the parameter axis to encounter a similar slope (see Fig. 1). The comparison of the data in Fig. 5 shows very good agreement for all three values of $`\omega `$, validating the theory.
In conclusion, we have investigated experimentally the parametric level motion of the flexural modes of a freely vibrating plate as a function of the size of the plate. We have used our data to calculate statistical quantities which probe the parametric motion of the levels, and found agreement with the universal predictions of RMT. The agreement with RMT suggests that the universal predictions for parametric level motion extends beyond quantum chaotic systems to a wider range of wave systems, including acoustical waves.
We thank B. Simons and E. Mucciolo for providing the theoretical data, and M. Oxborrow and J. Norton for technical assistance. We thank O. Brodier and H. Gould for useful discussions. This work was supported by the Danish National Research Council (K.S.), Research Corporation and by an Alfred P. Sloan Fellowship (A.K.). We thank Hewlett-Packard for a partial equipment grant.
# Schwarzschild black hole as a grand canonical ensemble
## Acknowledgments
I would like to thank Prof. Jacob Bekenstein for his guidance and support during the course of this work. It is also a pleasure to thank A. E. Mayo for a lot of help and S. Hod and L. Sriramkumar for helpful discussions. This research was supported by a grant from the Israel Science Foundation, established by the Israel National Academy of Sciences.
# Detection of Circular Polarization in the Galactic Center Black Hole Candidate Sagittarius A*
## 1 Introduction
The compact radio source in the Galactic Center, Sagittarius A\*, is the best and closest candidate for a supermassive black hole in the center of a galaxy (Maoz 1998). The source Sgr A\* is positionally coincident with a $`2.6\times 10^6M_{\odot }`$ dark mass (Genzel et al. 1997, Ghez et al. 1998). Very long baseline interferometry (VLBI) has shown that this source has a scale less than 1 AU and a brightness temperature in excess of $`10^9\mathrm{K}`$ (Rogers et al. 1994, Bower & Backer 1998, Lo et al. 1998, Krichbaum et al. 1998). Long-term studies of Sgr A\* indicate that the source shows no motion with respect to the center of the Galaxy (Backer & Sramek 1999, Reid, Readhead, Vermeulen, & Treuhaft 1999). For these reasons, it is inferred that Sgr A\* is a supermassive black hole with a synchrotron emission region fed through accretion (Melia 1994, Narayan et al. 1998, Falcke, Mannheim & Biermann 1993, Mahadevan 1998). In this view, Sgr A\* is a weak active galactic nucleus (AGN). However, strong interstellar scattering of the radiation along the line of sight has been shown to broaden the image of Sgr A\* at radio through millimeter wavelengths (e.g., Lo et al. 1998, Frail et al. 1994). As a consequence, VLBI observations have not convincingly demonstrated the existence of source structure that would be an important diagnostic of physical processes.
Polarization has proved to be an important tool in the study of AGN. Studies of linear polarization, which is typically on the order of a few percent or less of the total intensity, have confirmed that the emission process is synchrotron radiation and demonstrated that shocks align magnetic fields in a collimated jet, leading to correlated variability in the total and polarized intensity (Hughes, Aller & Aller 1985, Marscher & Gear 1985). Circular polarization, on the other hand, is less well understood in AGN. Typically, the degree of circular polarization is $`m_c<0.1\%`$ with only a few cases where $`m_c`$ approaches $`0.5\%`$ (Weiler & de Pater 1983). The degree of circular polarization usually peaks near 1.4 GHz and decreases strongly with increasing frequency.
Recently, VLBI imaging of 3C 279 has found $`m_\mathrm{c}\sim 1\%`$ in an individual radio component with a fractional linear polarization of 10% (Wardle et al. 1998). The integrated circular polarization, however, is less than 0.5%. The circular polarization is probably produced through the conversion of linear to circular polarization by low-energy electrons in the synchrotron source. This process is also known as repolarization (Pacholczyk 1977).
In recent work we have shown that the linear polarization of Sgr A\* from centimeter to millimeter wavelengths is extremely low. Linear polarization was not detected in a spectro-polarimetric experiment with an upper limit of 0.2% for rotation measures as large as $`10^7\mathrm{rad}\mathrm{m}^{2}`$ at 8.4 GHz (Bower et al. 1999a). More recently, we have found that linear polarization is less than 0.2% at 22 GHz and less than $`1\%`$ at 86 GHz (Bower et al. 1999b). Interstellar depolarization is very unlikely within the parameter space covered by these observations. Given these stringent limits on linear polarization, the presence of circular polarization is not expected. Nevertheless, we have detected circular polarization at a surprisingly high level.
## 2 Observations and Results
We observed Sgr A\* with the Very Large Array (VLA) of the National Radio Astronomy Observatory<sup>1</sup><sup>1</sup>1The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. in its A configuration on 10, 18 and 24 April 1998 at 4.8 GHz and 8.4 GHz with a bandwidth of 50 MHz in two intermediate frequency (IF) bands in both right circular polarization (RCP) and left circular polarization (LCP). At 4.8 GHz, we cycled rapidly between 2.5 minute scans on Sgr A\* and the nearby calibrators B1737-294, B1742-283 and B1745-291. Every hour the calibrators B1741-038 and B1748-253 were observed. These observations spanned four hours. A similar approach was used at 8.4 GHz over only a single hour. Absolute fluxes were calibrated with the source 3C 286. Time-dependent amplitude calibration of the array was performed through self-calibration of the compact and bright quasar B1741-038. Each source was phase self-calibrated. All results presented were obtained using only baselines longer than 100$`k\lambda `$ in order to resolve out large scale structure in the Galactic Center (e.g., Yusef-Zadeh, Roberts & Biretta 1998).
Detection of circular polarization with an array with circularly-polarized feeds requires careful calibration and is subject to a variety of errors. The requirements are more complex for Sgr A\*, which is located in a significantly confused region. Below, we summarize the errors that arise for standard circular polarization measurements and show that they agree with the measured values for the calibrators. Following that, we demonstrate that the background radiation for Sgr A\* is not responsible for the measured signal.
The Stokes parameter $`V`$ is formed from the difference of the left- and right-handed parallel polarization correlated visibilities, $`LL`$ and $`RR`$. This difference is sensitive to amplitude calibration errors. Gain variations with time were measured to be less than $`0.3\%`$ for all antennas. Averaged over independent gain measurements at all antennas this implies a calibration error of $`0.02\%`$ and $`0.06\%`$ at 4.8 and 8.4 GHz, respectively.
Beam squint may also introduce false circular polarization for objects off-axis (Chu & Turrin 1973). Beam squint is due to the slightly displaced RCP and LCP beams in an offset reflector antenna. In the case of the VLA antenna geometry, the offset is 4.6% of the primary beam FWHM (Condon et al. 1998). At 4.8 GHz, this corresponds to $`0.50^{\prime }`$ within a primary beam of $`10.82^{\prime }`$. All of our sources were observed on the primary axis with a precision of less than $`1^{\prime \prime }`$. However, pointing errors for individual antennas can be as large as $`10^{\prime \prime }`$. This will lead to a false circular polarization of $`0.5\%`$ in a single observation on a single antenna. Averaging over multiple observations with the entire array produces a false circular polarization of $`0.03\%`$. At 8.4 GHz, the expected false circular polarization is $`0.10\%`$.
Second-order polarization leakage effects may produce false circular polarization, as well. For an array such as the VLA with polarization leakage terms on the order of 1%, weakly linearly polarized sources will produce false circular polarization on the order of a few times 0.01%. We performed our analysis with and without polarization leakage correction and found no difference in the results.
Finally, false circular polarization may appear through interference. We analyzed each of the 12 2.5-minute scans for Sgr A\* at 4.8 GHz in the 10 April 1998 data independently and found no dependence on time in Stokes $`V`$. The typical rms image noise in each scan was 375 $`\mu \mathrm{Jy}\mathrm{beam}^1`$, or 0.07%. Assuming that the circular polarization did not change, we find $`\chi ^2=13.0`$ for 11 degrees of freedom. Interference may also have a frequency dependence. We measured Stokes $`V`$ for the separate IF bands for Sgr A\* at 4.8 GHz in the 10 April data to be -0.43% and -0.29%, which is consistent with no frequency dependence for the error determined below.
Adding in quadrature errors from amplitude calibration, beam squint and polarization leakage gives total errors for a measurement corresponding to a single day of $`0.05\%`$ and $`0.12\%`$ at 4.8 and 8.4 GHz, respectively. We estimate the error in the average measurements to be 0.03% and 0.07%. These values are very similar to the values observed for the calibrator sources. We estimate the error in the mean for Sgr A\* from the variance of the three measurements to be 0.05% and 0.06% at the two frequencies, respectively. These estimates are very close to the error determined above, which suggests that we are accounting for most sources of error. However, we now detail additional sources of error and how we eliminate them independently.
Sgr A\* may have additional errors because of the presence of significant extended and compact structure in the Galactic Center region. We eliminate the effect of this structure on the correlated visibilities by comparing the given results with those obtained using all baselines, only baselines greater than $`300k\lambda `$, and only baselines less than $`300k\lambda `$. We find at 4.8 GHz for 10 April 1998 $`m_c=-0.36\%`$, -0.34%, and -0.37%, respectively.
A source in the beam sidelobes could also introduce a false signal if it is circularly-polarized or if the sidelobe response is circularly-polarized. The signature of this effect would include time-dependence and possibly frequency-dependence as the sidelobe swept over the source. But we have demonstrated that these effects do not exist at our sensitivity levels.
Finally, non-linear response of the detectors to the extended structure will introduce a systematic offset in circular polarization between the amplitude calibrator and Sgr A\*. The detector response is linear to better than $`1\%`$. The shift in system temperature between the calibrator and Sgr A\* is from 25 to 35 K at 4.8 GHz, implying a systematic error of $`0.3\%`$ per antenna per polarization. Averaging over both polarizations and all antennas, one expects a systematic offset of unknown sign and magnitude $`0.04\%`$ in Stokes $`V`$ due to the background radiation. The magnitude of this effect at 8.4 GHz is also $`0.04\%`$.
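The quoted numbers in this error budget can be checked with a line of arithmetic. The sketch below assumes 27 antennas and 2 polarizations (the antenna count is our assumption, standard for the VLA, and is not stated in the text) and treats the per-antenna errors as independent:

```python
import math

# Upper bound on detector non-linearity quoted in the text.
nonlinearity = 0.01                     # 1%

# System-temperature shift between the calibrator and Sgr A* at 4.8 GHz.
t_cal, t_sgra = 25.0, 35.0              # K

# Per-antenna, per-polarization systematic error from the gain change:
# the fractional change in system temperature times the non-linearity.
per_antenna = nonlinearity * (t_sgra - t_cal) / t_sgra    # ~0.3%

# Average over 27 antennas x 2 polarizations, assuming independent errors.
n_ant, n_pol = 27, 2
offset = per_antenna / math.sqrt(n_ant * n_pol)           # ~0.04%
```

Both the ~0.3% per-antenna figure and the ~0.04% averaged offset quoted above come out of this simple estimate.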
We show in Figure 1 the Stokes $`V`$ image of Sgr A\* at 4.8 GHz from 10 April 1998. The peak at -1.8 mJy is more than 20 times the noise level of 68 $`\mu \mathrm{Jy}\mathrm{beam}^1`$. For the same epoch, the calibrator B1748-253 has a peak flux of 262 $`\mu \mathrm{Jy}`$ in a map with a noise level of 58 $`\mu \mathrm{Jy}\mathrm{beam}^1`$. The total intensities of Sgr A\* and B1748-253 are 0.525 and 0.488 Jy at the time of detection, respectively.
We summarize in Table 1 the total and circularly polarized intensities for all sources at 4.8 and 8.4 GHz in our experiment. At 4.8 GHz, the calibrators all have mean circular polarizations less than 0.02% with the exception of B1737-294 which is dominated by the thermal noise of $`60\mu \mathrm{Jy}`$. The detection of circular polarization in Sgr A\* is very certain at 4.8 GHz. At 8.4 GHz, the greater thermal noise and the less accurate calibration make the detection of circular polarization for Sgr A\* in each observation less certain. However, the average result firmly demonstrates detection. The average circular polarization flux is 16 times that of B1748-253.
We find $`m_c=-0.36\pm 0.05\%`$ and $`m_c=-0.26\pm 0.06\%`$ at 4.8 and 8.4 GHz, respectively. The errors are estimated from the variance of the three separate measurements for each frequency. These errors set an upper limit to the variability, as well. The average spectral index of the fractional circular polarization is $`\alpha =-0.6\pm 0.3`$ for $`m_c\propto \nu ^\alpha `$. The error in $`\alpha `$ is less than that expected from the errors in $`m_c`$. This is due to the fact that variations in $`m_c`$ between epochs appear to be due to systematic errors that are common to both frequencies.
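The quoted index follows directly from the two fractional polarizations; a quick check (the fitted index comes out negative because the magnitude of the circular polarization falls with frequency):

```python
import math

m_48, m_84 = 0.36, 0.26     # |m_c| in percent at 4.8 and 8.4 GHz
nu_48, nu_84 = 4.8, 8.4     # GHz

# m_c ~ nu^alpha  =>  alpha = ln(m2/m1) / ln(nu2/nu1)
alpha = math.log(m_84 / m_48) / math.log(nu_84 / nu_48)   # ~ -0.6
```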
## 3 Mechanisms for the Production of Circular Polarization
While the detection of circular polarization for Sgr A\* is in itself an unexpected result, two additional properties set this source apart from other radio cores and make the result difficult to understand: the fact that circular polarization exceeds linear by more than a factor of two and the flatness of the circularly polarized spectrum.
We have established previously that the interstellar scattering does not depolarize any intrinsic linearly polarized emission from Sgr A\* (Bower et al. 1999a, 1999b). However, the sub-parsec accretion region of Sgr A\* may have very large rotation measures (Bower et al. 1999a). This region may depolarize an intrinsic linearly polarized signal without interfering with the circularly polarized radiation. Detailed modeling of the accretion region may be able to address these issues (e.g., Melia & Coker 1999).
We consider now whether the circular polarization is produced not in the source, but in the intervening scattering screen. A birefringent scattering medium may produce scintillating circular polarization from an unpolarized background source. This effect has been studied in detail (Macquart & Melrose 1999). This requires a scattering region with a fluctuating rotation measure gradient. Such a mechanism is appealing due to the strong scattering medium and the strong observed gradients in RM in the GC region (Yusef-Zadeh, Wardle & Parastaran 1997). The fact that the scattered image of Sgr A\* itself is anisotropic might also indicate an anisotropic scattering medium. However, the diffractive effect has $`\alpha =-4`$ or steeper, which is not consistent with the measured spectral index. Further calculations of this relatively unexplored issue should show whether such a scattering model could nevertheless be made consistent with the observations.
Alternatively, we can ask whether the conversion mechanism or intrinsic synchrotron circular polarization could be at work in Sgr A\* (Pacholczyk 1977, Jones & O’Dell 1977). Here, the main problem is the low level of linear polarization. Magnetic field reversals would reduce linear polarization, but most likely would affect circular polarization in the same way (Wilson & Weiler 1997). However, one important factor in the relative level of linear to circular polarization is the electron energy distribution, since low-energy electrons (with Lorentz factors less than 100) can lead to Faraday-depolarization of linear and/or conversion of linear to circular polarization.
As model calculations show (Jones & O’Dell 1977, Fig. 1), circular polarization of a radio component peaks near the self-absorption frequency $`\nu _{\mathrm{ssa}}`$, where linear polarization drops to a minimum. In typical radio core components the strongest contribution to linear polarization therefore comes from the optically-thin power law part of the spectrum. However, in Sgr A\* such a power law is most likely absent or already ends at a frequency $`\nu _{\mathrm{max}}\lesssim \nu _{\mathrm{ssa}}`$, as indicated by the steep high-frequency cut-off in its spectrum towards the infrared (Serabyn et al. 1997, Falcke et al. 1998).
Such a situation has not yet been considered in synchrotron propagation calculations involving conversion. However, it may result in a high $`m_\mathrm{c}`$-to-$`m_\mathrm{l}`$ ratio if $`\nu _{\mathrm{max}}\lesssim \nu _{\mathrm{ssa}}`$ holds for the electrons that produce the low frequency spectrum. For example, Jones & O’Dell show that for a power law of electron energies with characteristic frequencies $`\nu _{\mathrm{min}}`$ extending at least a factor of thirty below $`\nu _{\mathrm{ssa}}`$, circular polarization around $`\nu _{\mathrm{ssa}}`$ can exceed $`m_\mathrm{l}`$. With the absence of any higher frequency emission, linear polarization could be quenched. On the other hand, a narrow distribution with $`\nu _{\mathrm{min}}\sim \nu _{\mathrm{ssa}}\sim \nu _{\mathrm{max}}`$ would again lead to significant linear polarization even at the self-absorption frequency (Jones & Hardee 1979).
We consider here a simple synchrotron model of Sgr A\*, where the flux density at 5 GHz is produced in a spherical component by a flat electron distribution with a power law index of $`p=1`$ ranging over electron energies that correspond to the characteristic frequencies $`\nu _{\mathrm{max}}=\nu _{\mathrm{ssa}}=5`$ GHz and $`\nu _{\mathrm{min}}=\nu _{\mathrm{max}}/30`$. This model corresponds to a single zone of a complete inhomogeneous model (e.g., Blandford & Königl 1979). The low and high energy electrons are fully mixed. Assuming equipartition, we find a magnetic field of 0.4 Gauss and a maximum electron Lorentz factor of 60. The electron distribution chosen here has an equal number of low- and high-energy electrons per logarithmic interval. According to Pacholczyk (1977, Eq. 3.152), such a power law will contain enough low-energy electrons near $`\gamma _{\mathrm{min}}`$ to produce the observed circular polarization through repolarization. Intrinsic circular polarization could be as important as conversion in this model. The intrinsic synchrotron circular polarization is given by $`m_\mathrm{c}=3\%(B/\mathrm{Gauss})^{1/2}(\nu /\mathrm{GHz})^{-1/2}=0.9\%`$, assuming an angle of $`60^{\circ }`$ between the magnetic field and the line of sight (Legg & Westfold 1968). However, field reversals and optical depth effects will decrease that number.
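A quick check of the 0.9% figure, using the Legg & Westfold scaling $`m_\mathrm{c}=3\%(B/\mathrm{Gauss})^{1/2}(\nu /\mathrm{GHz})^{-1/2}`$ with the equipartition field found above (the negative frequency exponent is what reproduces the quoted value):

```python
import math

B = 0.4     # equipartition magnetic field in Gauss, from the model above
nu = 5.0    # observing frequency in GHz

# Intrinsic synchrotron circular polarization (percent), for a ~60 degree
# angle between the field and the line of sight:
m_c = 3.0 * math.sqrt(B) / math.sqrt(nu)   # ~0.85%, i.e. ~0.9%
```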
The polarization properties of models (e.g., ADAF and Bondi-Hoyle) that produce gyrosynchrotron emission with low temperature electrons are largely unexplored (Narayan et al. 1998, Melia 1994). However, Ramaty (1969) did show that circular polarization may dominate linear polarization in some simple gyrosynchrotron sources.
Obviously, a more self-consistent treatment of these problems is required. The circular and linear polarization spectrum will depend on the electron distribution and the temperature and magnetic field stratification in the source. Nevertheless, it seems as if a highly self-absorbed source with a low high-energy cut-off and a modest amount of low-energy electrons might explain the observed properties of Sgr A\*.
Observationally, the most crucial steps are the measurement of the circularly polarized spectrum over a broader frequency range and its variability characteristics.
This discovery opens a significant new parameter space for the study of the nearest supermassive black hole candidate and its environment. This is especially important at centimeter wavelengths where the morphological structure of Sgr A\* will remain concealed forever due to strong scattering. In concert with other radio and millimeter wavelength techniques, we may be closer to decoding the complex picture of Sgr A\*.
We thank A.G. Pacholczyk, M. Rees, J. Wardle, F. Melia, R. Perley & R. Beck for many useful discussions.
## 1 Introduction
According to the AdS/CFT correspondence , the $`𝒩=4`$ $`SU(N)`$ supersymmetric gauge theory considered in the ’t Hooft limit with $`\lambda \equiv g_{YM}^2N`$ fixed is dual to the IIB string theory compactified on $`AdS_5\times S^5`$ . The parameters of the two theories are identified as $`g_{YM}^2=g_s`$, $`\lambda =(R/l_s)^4`$ and hence $`1/N=g_s(l_s/R)^4`$. In general, a gauge theory diagram of genus $`g`$ comes with a power of $`N^{2-2g}`$ and with some powers of $`\lambda `$. In the string theory, correlation functions at a given genus $`g`$ come with a power of $`g_s^{2-2g}`$ and have a coefficient given by a power series expansion in $`\alpha ^{\prime }/R^2=\lambda ^{-1/2}`$. At a given order of $`N`$ or $`g_s`$, the functions of $`\lambda `$ have different expansions (powers of $`\lambda `$ versus powers of $`\lambda ^{-1/2}`$) in the two theories, although according to the AdS/CFT proposal they are supposed to represent the same function. It is clear that a comparison between the two theories is possible only for quantities whose $`\lambda `$ dependence can be computed exactly or which are independent of $`\lambda `$. Since the anomaly is independent of $`\lambda `$, it is a perfect candidate for this purpose.
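The identifications above can be checked in two lines: eliminating $`\lambda `$ between its two definitions gives the stated relation for $`1/N`$, and the expansion parameter on the string side follows.

```latex
% Parameter map: g_{YM}^2 = g_s and \lambda = g_{YM}^2 N = (R/l_s)^4.
% Eliminating \lambda:
\begin{aligned}
  N = \frac{\lambda}{g_{YM}^2} = \frac{1}{g_s}\left(\frac{R}{l_s}\right)^{4}
  \quad&\Longrightarrow\quad
  \frac{1}{N} = g_s\left(\frac{l_s}{R}\right)^{4},\\
  \frac{\alpha'}{R^2} = \left(\frac{l_s}{R}\right)^{2}
  &= \lambda^{-1/2}.
\end{aligned}
```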
In the $`𝒩=4`$ SYM, we have two kinds of anomaly: the trace anomaly and the $`SU(4)_R`$ chiral anomaly. Both have been checked and found to match with SUGRA calculations to leading order at large $`N`$. Subleading orders in $`N`$ for other gauge systems have also been discussed . However, the $`1/N^2`$ corrections for the original $`𝒩=4`$ case have never been tested. In this paper, we will be interested in the $`1/N^2`$ corrections to the chiral anomaly. To leading order in $`N`$, the chiral anomaly has been elegantly accounted for by the Chern-Simons action in the $`AdS_5`$ SUGRA. The coefficient $`k`$ (see (9) below) determines the magnitude of the chiral anomaly in field theory and is given by $`k=N^2`$ to leading order in $`N`$. For finite $`N`$, the chiral anomaly of the $`SU(N)`$ gauge theory is proportional to $`N^2-1`$, so there is a mismatch of -1. Our goal is to determine the $`1/N^2`$ correction to this coefficient $`k`$ and reproduce this “-1” correction.
According to the proposal , quantities of order $`1/N^2`$ correspond to a 1-loop string calculation. Although a classical string action on $`AdS_5\times S^5`$ with RR field backgrounds is available , its quantization is notoriously difficult and we still do not have reliable means to compute its quantum corrections. This is partially reflected by the fact that we still do not know the complete spectrum of states of the string theory on $`AdS_5\times S^5`$ . The only explicitly known states are the full towers of KK states coming from the compactification of the 10 dimensional IIB SUGRA multiplet. It is thus natural to first examine the quantum corrections to the Chern-Simons coefficient coming from all of these states. In fact, we find precisely the expected correction of -1. This correction of -1 is entirely due to the quantum treatment of the doubleton multiplet that is gauged away. This is very satisfactory since the doubleton is known to be dual to the $`U(1)`$ factor that decouples because the gauge group of the SYM theory is $`SU(N)`$ rather than $`U(N)`$. Given that this already provides the desired shift $`N^2\rightarrow N^2-1`$, we expect that there is no contribution from the other string states at all. We also note that a consistent truncation to the states of the 5 dimensional supergravity multiplet alone is not sufficient.
We will first review in sec. 2 the arguments for matching the chiral anomaly with the Chern-Simons term in the five dimensional SUGRA action to leading order in $`N`$. Then in sec. 3 we determine the one loop corrections to the coefficient of the Chern-Simons action from the Kaluza-Klein towers. We finally discuss why it is plausible that other string states don’t modify the Chern-Simons coefficient.
## 2 Chiral Anomalies in the $`𝒩=4`$ SYM
The $`𝒩=4`$ SYM in 4 dimensions has the R-symmetry group $`SU(4)_R`$. The matter fields are in nontrivial representations of it, with the scalars $`X^i`$ transforming in the $`\mathrm{𝟔}`$ and the four complex Weyl fermions $`\lambda `$ in the fundamental representation of $`SU(4)_R`$, with the chirality part (0,1/2) in the $`\mathrm{𝟒}`$ and (1/2,0) in the $`\mathrm{𝟒}^{*}`$ (see for example ; our convention here, together with the convention we adopt in (17), is equivalent to those used by these authors). It is convenient to use the compact notation of differential forms . The correctly normalized anomaly is derived from the descent equations
$$\frac{i^{n+2}}{(2\pi )^n(n+1)!}TrF^{n+1}=d\omega _{2n+1}(A),\delta _v\omega _{2n+1}=d\omega _{2n}^1(v,A),$$
(1)
where
$$F=dA+A^2,\delta _vA=dv+[A,v]$$
(2)
and $`v=v^aT^a`$, $`A=A^aT^a`$. $`v^a`$ are commuting gauge parameters and $`T^a`$’s are anti-Hermitian and are in the fundamental representation. Explicitly for $`n=2`$,
$`\omega _4^1(v,A)={\displaystyle \frac{1}{24\pi ^2}}Tr[vd(AdA+{\displaystyle \frac{1}{2}}A^3)],`$ (3)
$`\omega _5(A)={\displaystyle \frac{1}{24\pi ^2}}Tr[A(dA)^2+{\displaystyle \frac{3}{2}}A^3dA+{\displaystyle \frac{3}{5}}A^5].`$ (4)
In this notation, the R-symmetry anomaly is
$$v(D_iJ^i)=(N^2-1)\int _{S^4}\omega _4^1(v,A),$$
(5)
where we used the convention for Einstein summation:
$$vF=\int d^4xv^a(x)F^a(x)$$
(6)
for any $`v^a(x)`$ and $`F^a(x)`$. The $`N^2-1`$ factor is due to the fact that $`\lambda `$ is in the adjoint of $`SU(N)`$.
For $`T^a`$ in a general representation $`𝐑`$ of the group, the corresponding quantities with the trace taken in $`𝐑`$ are
$$\omega _{2n}^{1,𝐑}=A(𝐑)\omega _{2n}^1,\omega _{2n+1}^𝐑=A(𝐑)\omega _{2n+1},$$
(7)
where $`A(𝐑)`$ is the anomaly coefficient, defined as the ratio of the $`d`$-symbol taken in the representation $`𝐑`$ to that taken in the fundamental representation. In general $`2n`$ or $`2n+1`$ dimensions, since the $`d`$-symbol is given by a symmetrized trace of $`n+1`$ Lie algebra generators, it is easy to show that the complex conjugate representation $`𝐑^{*}`$ has an anomaly coefficient
$$A(𝐑^{*})=(-1)^{n+1}A(𝐑).$$
(8)
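For the five-dimensional case of interest here ($`n=2`$), the sign rule can be made explicit from the $`d`$-symbol, which involves a symmetrized trace of three generators:

```latex
% n = 2 (five dimensions): the anomaly is governed by the d-symbol
%   d^{abc}_{R} = \mathrm{STr}\,(T^a_R T^b_R T^c_R).
% With anti-Hermitian generators, the conjugate representation has
% T^a_{R^*} = -(T^a_R)^T, so each of the three generators contributes
% a sign (the transpose drops out of the symmetrized trace):
d^{abc}_{R^*} = (-1)^{3}\, d^{abc}_{R}
\qquad\Longrightarrow\qquad
A(R^*) = (-1)^{n+1} A(R) = -A(R) \quad (n=2).
```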
According to the AdS/CFT proposal, one should be able to see this anomaly from the dual point of view of string theory. To leading order in $`N`$, one looks at the IIB SUGRA compactified on $`AdS_5\times S^5`$ . The tree level SUGRA action contains the term
$$S_{cl}[A]=\frac{1}{4g_{SG}^2}\int d^5x\sqrt{g}F_{\mu \nu }^aF^{\mu \nu a}+k\int _{AdS_5}\omega _5.$$
(9)
We note that in terms of components, $`k\int \omega _5=\frac{ik}{96\pi ^2}\int d^5xd^{abc}ϵ^{\mu \nu \lambda \rho \sigma }A_\mu ^a\partial _\nu A_\lambda ^b\partial _\rho A_\sigma ^c+\mathrm{\dots }`$, if one uses Hermitian generators and the definition of the $`d`$-symbol in . From the dual gauge theory point of view, since the current $`J^a`$ is coupled to the $`SU(4)`$ gauge fields of the bulk, using the AdS/CFT proposal one can determine the coefficients $`g_{SG}^2`$ and $`k`$ from the 2-point and 3-point correlators of $`J^a`$ in the gauge theory. The normalization to leading order in $`N`$
$$g_{SG}^2=\frac{16\pi ^2}{N^2},k=N^2,$$
(10)
has been determined in . The ratio of the coefficients $`g_{SG}^2`$ and $`k`$ also agrees with what one gets from supersymmetry . One may also determine the values of $`g_{SG}^2`$ and $`k`$ from a dimensional reduction of the 10 dimensional IIB SUGRA. This requires the knowledge of the $`N`$ dependence of $`R`$. According to the proposal , the radius $`R`$ of $`S^5`$ is determined by $`R^4/\alpha ^{\prime 2}=g_sN`$. Using this, it is easy to determine the normalization of the gauge kinetic energy term and one indeed finds $`g_{SG}^2=16\pi ^2/N^2`$ and hence $`k=N^2`$ using SUSY.
In usual consideration of SUGRA on $`AdS`$, one considers gauge configurations $`A^a`$ which vanish at the boundary and so the Chern-Simons term is gauge invariant. For the consideration of AdS/CFT correspondence, the boundary value of the $`A^a`$ is nonvanishing and coupled to the R-currents $`J^a`$. Under a gauge variation $`\delta _vA`$, the variation of the Chern-Simons term is a boundary term
$$\delta _vS_{cl}=k\int _{S^4}\omega _4^1.$$
(11)
Now by the conjecture ,
$$S_{cl}[A_\mu ^a(x,x^5)]=\mathrm{\Gamma }[A_i^a(x)],$$
(12)
where $`\mathrm{\Gamma }`$ is the generating functional for current correlators in the boundary theory, and hence
$$\delta _vS_{cl}=\delta _v\mathrm{\Gamma }=v(D_iJ^i).$$
(13)
From this and (11), one can read off the anomaly. It is
$$v(D_iJ^i)=N^2\int _{S^4}\omega _4^1(v,A),$$
(14)
which agrees with the gauge theory computation (5) to leading order in $`N`$.
## 3 Induced Chern-Simons
As we have seen in the previous section, the IIB tree level SUGRA contains a Chern-Simons term which can account for the chiral anomaly of the gauge theory to leading order in $`N`$. But there is also a mismatch of “-1” which is of order $`1/N^2`$. In this section, we will examine the $`1/N`$ corrections to the Chern-Simons action on the string theory side. From the point of view of the IIB string theory, an expansion in $`1/N`$ is a quantum expansion beyond tree level. In particular the $`1/N^2`$ correction to the chiral anomaly corresponds to a 1-loop computation in IIB string theory. We will first examine the corrections coming from the Kaluza-Klein states. Based on the origin of the Chern-Simons action in $`AdS_5`$ supergravity, we will argue in the discussion section that the other string states are not likely to modify the Chern-Simons coefficient.
Fermionic contributions
It is well known that chiral fermions in even dimensions can give rise to an anomaly. Although there is no chirality in odd dimensions, there is a similar phenomenon for fermions in odd dimensions. Consider a Dirac fermion $`\psi `$ in odd dimensions (flat) minimally coupled to (external or gauge) vector bosons $`A_\mu `$ of a group $`G`$. At the quantum level, a regularization needs to be introduced to make sense of the theory and one cannot preserve both the gauge symmetry (small and large) and the parity at the same time . If one chooses to preserve the gauge symmetry by doing a Pauli-Villars regularization, then there will be an induced Chern-Simons term generated at one loop. The result is independent of the fermion mass . In our notation, the induced Chern-Simons term is
$$\mathrm{\Delta }\mathrm{\Gamma }=\pm \frac{1}{2}\int \omega _{2n+1}^𝐑=\pm \frac{1}{2}A(𝐑)\int \omega _{2n+1},$$
(15)
where $`𝐑`$ is the representation of the Dirac fermion. The $`\pm `$ sign depends on the regularization and can often be fixed within a specific context.
This result was originally obtained for fermions coupled to gauge fields in a flat spacetime and has been extended to full generality for arbitrary curved backgrounds and any odd dimensions . The induced parity violating terms are given (up to a normalization factor) by the secondary characteristic class $`Q(A,\omega )`$ satisfying
$$dQ(A,\omega )=\widehat{A}(R)ch(F)|_{2n+2},$$
(16)
where $`\omega `$ is the gravitational connection. Going one step down the descent relation, one gets the chiral anomaly of a Dirac operator defined on a curved manifold. Since $`\widehat{A}(R)=1+o(R^2)`$ and $`TrF=0`$ for $`SU`$, in five dimensions there is only the gauge Chern-Simons and the gravitational Chern-Simons terms from (16) and there is no mixing term. It is clear that the pure gauge Chern-Simons piece takes the same expression (15) as in the flat case. This can indeed be expected from the beginning as the gauge Chern-Simons form is independent of the metric. It is also clear that there is no induced gravitational Chern-Simons term as it would be related to a gravitational anomaly in four dimensions, and gravitational anomalies exist only in $`4k+2`$ dimensions.
Now we need the particle spectrum of the type IIB string theory on $`AdS_5\times S^5`$ . The only explicitly known states are the KK states coming from the compactification of the 10 dimensional IIB SUGRA multiplet . So we will examine them first. Particles in $`AdS_5`$ are classified by their unitary irreducible representation of $`SO(2,4)`$. Since $`SO(2,4)`$ has the maximal compact subgroup $`SO(2)\times SU(2)\times SU(2)`$, irreducible representations are labelled by the quantum numbers $`(E_0,J_1,J_2)`$. The complete KK spectrum of the IIB SUGRA on $`AdS_5\times S^5`$ was obtained in together with information on the masses (in units of $`1/R`$) <sup>1</sup><sup>1</sup>1There seems to be a misprint for the masses of the spin 3/2 in the table of . and the representation content under $`SU(4)_R`$. We reproduce these results in the following table where we also give the $`SU(2)\times SU(2)`$ content for the fermionic towers.
$$\begin{array}{cccccc}& SU(2)\times SU(2)& \text{masses}\hfill & & SU(4)_R\hfill & \\ \psi _\mu \hfill & (1,1/2)& k+3/2\hfill & k0& \mathrm{𝟒},\mathrm{𝟐𝟎},\mathrm{}\hfill & \hfill \\ & (1,1/2)& (k+7/2)\hfill & k0& \mathrm{𝟒}^{},\mathrm{𝟐𝟎}^{},\mathrm{}\hfill & \\ \lambda \hfill & (1/2,0)& (k1/2)\hfill & k1& \mathbf{\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\hspace{0.17em}20}^{},\mathrm{}\hfill & \hfill \\ & (1/2,0)& k+11/2\hfill & k0& \mathrm{𝟒},\mathrm{𝟐𝟎},\mathrm{}\hfill & \\ \lambda ^{}\hfill & (1/2,0)& (k+3/2)\hfill & k0& \mathrm{𝟒}^{},\mathrm{𝟐𝟎}^{},\mathrm{}\hfill & \hfill \\ & (1/2,0)& k+7/2\hfill & k0& \mathrm{𝟒},\mathrm{𝟐𝟎},\mathrm{}\hfill & \\ \lambda ^{\prime \prime }\hfill & (1/2,0)& (k+9/2)\hfill & k0& \mathrm{𝟑𝟔},\mathrm{𝟏𝟒𝟎},\mathrm{}\hfill & \\ & (1/2,0)& k+5/2\hfill & k0& \mathrm{𝟑𝟔}^{},\mathrm{𝟏𝟒𝟎}^{},\mathrm{}\hfill & \end{array}$$
(17)
The fermions in this table are symplectic Majorana spinors. For simplicity, we have only listed half of the field content. The other half (“mirror”) consists of fields with conjugate $`SU(4)_R`$ content and with the $`SU(2)\times SU(2)`$ quantum numbers exchanged . The first member of the rows marked with an $``$ together form the fermionic sector of the $`𝒩=8`$ supergravity multiplet: 8 gravitini and 48 spin 1/2. We have chosen to list the “left handed” spinors here. The “chirality” refers to the anti-de Sitter group $`SU(2,2)`$. Since the Chern-Simons term in odd dimensions is related to the chiral anomaly in one lower dimension , the “chirality” thus allows us to fix the $`\pm `$ sign of the induced Chern-Simons term. In particular, the “right-handed” (“left-handed”) spinor generates a Chern-Simons term with + (-) sign in (15). We should also remember (8) that the conjugate representation $`𝐑^{}`$ has an opposite anomaly coefficient in five dimensions. To determine the net induced Chern-Simons terms from all these states, we should also sum over the “mirror”. Notice that the “right-handed” sector contributes exactly the same as the “left-handed” sector because relative to the “left” sector, they get one minus sign from the “chirality” and another one from $`A(𝐑^{})`$. Notice also that the induced Chern-Simons term from a symplectic Majorana spinor is half that of a Dirac spinor because it contains half as many degrees of freedom. Therefore effectively we can just sum over the spectrum in the “left handed” sector above using the formula
$$\mathrm{\Delta }\mathrm{\Gamma }=\frac{1}{2}A(𝐑)_{AdS_5}\omega _5$$
(18)
as if they were Dirac spinors.
It is clear that the towers of $`\psi _\mu ,\lambda ^{},\lambda ^{\prime \prime }`$ do not generate a net induced Chern-Simons term as the states in these towers all come in pairs with their $`SU(4)`$ content conjugate of each other. However, as pointed out in , there is a missing $`k=0`$ state ($`\mathrm{𝟒}^{}`$) in the first tower of $`\lambda `$. There are similar “missing states” in the bosonic towers. Together they are identified with the doubleton multiplet of $`SU(2,2|4)`$ which consists of a gauge potential, six scalars and four complex spinors. These are nonpropagating modes in the bulk of $`AdS_5`$ and can be gauged away completely , which is the reason why they don’t appear in the physical spectrum. These modes are exactly dual to the $`U(1)`$ factor of the $`U(N)`$ SYM living on the boundary . Since from the SYM point of view, the -1 correction to the chiral anomaly is due to the decoupling of this $`U(1)`$ to give $`SU(N)`$, we expect that from the dual point of view of the $`AdS_5`$ string theory, the -1 correction would also find an explanation entirely in terms of these gauge modes. Indeed, since there is an additional $`k=0`$ state (“left-handed” and in $`\mathrm{𝟒}`$) in the tower of $`\lambda `$ which is not balanced out, there will be a Chern-Simons term of -1/2 (in units of $`\omega _5`$) coming from this state. This is only half of the desired result. However this is not the complete story.
Let us recall that the doubleton multiplet is absent because it has been gauged away by imposing the gravitino gauge fixing condition (see also for the gauging in the $`AdS_7\times S^4`$ case). Hence to properly quantize the system, one has to introduce the corresponding Faddeev-Popov ghosts. These ghosts will give another contribution of $`-1/2`$ to the Chern-Simons coefficient. Indeed the ghost multiplet contains bosonic spinors in exactly the same $`SU(4)_R`$ representation as the spinors of the doubleton multiplet, i.e. in $`\mathrm{𝟒}^{}`$. Naively one would expect a contribution opposite to the one of the unbalanced $`k=0`$ state (in $`\mathrm{𝟒}`$) of the $`\lambda `$ tower. However, since the ghosts have opposite statistics, they actually give the same contribution, i.e. another -1/2. So altogether we get a total induced Chern-Simons term of -1,
$$\mathrm{\Delta }\mathrm{\Gamma }=_{AdS_5}\omega _5,$$
(19)
which is exactly the desired result. Notice that the induced Chern-Simons term (coming with a constant integer coefficient) is independent of the radius $`R`$; this is consistent with the AdS/CFT proposal since the anomaly and its corrections are independent of $`\lambda `$.
Bosonic contributions
There is another interesting effect related to the Chern-Simons action. It is known that in three dimensions, the gluons at one loop can modify the coefficient of the Chern-Simons action by an integer shift. However, the precise modification depends on the regularization scheme adopted <sup>2</sup><sup>2</sup>2 We thank R. Stora for a useful discussion about the issues of regularization. . In five dimensions, it is easy to convince oneself that there is no induced Chern-Simons Lagrangian coming from the gluon loops. The reason is simple. Suppose one adopts some regularization scheme to compute the gluon loops that may generate an induced Chern-Simons term. Since the gluons are in the adjoint representation of the gauge group, the diagrams are proportional to the $`d`$-symbol of the adjoint representation, or $`A(\mathrm{𝐚𝐝𝐣})`$. For three dimensions, this is proportional to the quadratic Casimir, but it is zero in five dimensions. In general, (8) says that $`A(𝐑^{})=A(𝐑)`$ for $`4k+3`$ dimensions and $`A(𝐑^{})=-A(𝐑)`$ for $`4k+1`$ dimensions. Hence $`A(\mathrm{𝐚𝐝𝐣})=0`$ for the present case. That the gauge bosons do not modify the Chern-Simons coefficient was already noted by Witten . Moreover, since the other members ($`\mathrm{𝟔𝟒},\mathrm{𝟏𝟕𝟓},\mathrm{}`$) of the spin 1 towers are all in real representations, they also don’t modify the Chern-Simons action.
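The vanishing of the adjoint $`d`$-symbol can be checked numerically. The sketch below is our illustration, not part of the original analysis; for brevity it uses $`SU(3)`$ rather than the $`SU(4)`$ of the text, but the reality argument is identical: the totally symmetric trace $`d_{abc}=\mathrm{Tr}(T_a\{T_b,T_c\})`$ is nonzero in the fundamental but vanishes for the (real) adjoint representation.

```python
import itertools
import numpy as np

# Gell-Mann matrices; fundamental SU(3) generators are T_a = lambda_a / 2.
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1;   lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7] = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3)
T_f = lam / 2

# Structure constants from [T_a, T_b] = i f_abc T_c with Tr(T_a T_b) = delta/2,
# so f_abc = -2i Tr([T_a, T_b] T_c).
f = np.zeros((8, 8, 8))
for a, b, c in itertools.product(range(8), repeat=3):
    comm = T_f[a] @ T_f[b] - T_f[b] @ T_f[a]
    f[a, b, c] = (-2j * np.trace(comm @ T_f[c])).real

# Adjoint generators: (T_a)_{bc} = -i f_{abc}.
T_adj = -1j * f

def d_symbol(T):
    """Anomaly d-symbol d_abc = Tr(T_a {T_b, T_c})."""
    d = np.zeros((8, 8, 8))
    for a, b, c in itertools.product(range(8), repeat=3):
        d[a, b, c] = np.trace(T[a] @ (T[b] @ T[c] + T[c] @ T[b])).real
    return d

d_fund = d_symbol(T_f)
d_adj = d_symbol(T_adj)
print("max |d| fundamental:", np.abs(d_fund).max())  # nonzero
print("max |d| adjoint:    ", np.abs(d_adj).max())   # ~0 (real rep)
```

The same computation with the $`SU(4)`$ generators gives the statement used in the text, $`A(\mathrm{𝐚𝐝𝐣})=0`$.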
We thus see that while gauge boson loops do not modify the magnitude of the Chern-Simons action, the spinor loops do. Due to the intimate relation with the chiral anomaly , one expects that there is a nonrenormalization theorem (counterpart of the Bardeen nonrenormalization theorem for the chiral anomaly) that protects the Chern-Simons action from further corrections beyond one loop.
Combining the above, we find that at finite $`N`$, the coefficient $`k`$ is shifted by
$$kk1\text{or}N^2N^21$$
(20)
due to the quantum effects of the full set of Kaluza-Klein states.
## 4 Discussion
We have seen that the finite $`N`$ dependence of the $`SU(4)_R`$ chiral anomaly is exactly accounted for by the quantum effects of the full set of KK states. A consistent truncation to the 5 dimensional supergravity states does not do the job, and there is no reason why it should. In fact, the total induced Chern-Simons term has its contribution coming from the $`k=0`$ state of the $`\lambda `$ tower and from the ghost spinors to the doubleton multiplet, both of which have nothing to do with the five dimensional supergravity multiplet. Thus the -1 shift at finite $`N`$ of the chiral anomaly finds a direct explanation entirely in terms of the quantum effects of the doubleton multiplet in $`AdS_5`$, which is known to be dual to the decoupled $`U(1)`$ factor of the gauge theory. The whole picture is consistent.
For the AdS/CFT proposal to work at finite $`N`$, the other string states shouldn’t contribute any further corrections to the Chern-Simons coefficient. While we don’t have a proof of this statement, we would like to argue that this is plausible.
In , the Kaluza-Klein origin of the Chern-Simons action in gauged supergravity theory was explored. In particular, it was shown that for the case of the $`AdS_7\times S^4`$ compactification of the 11 dimensional SUGRA, the Chern-Simons term comes from the 11-dimensional Chern-Simons coupling $`CGG`$ upon compactification. The origin of the Chern-Simons term in $`AdS_5\times S^5`$ SUGRA was also discussed and it is expected to arise similarly from the compactification of the 10 dimensional IIB SUGRA. However, there are some technical subtleties, the major one being the lack of a covariant action for the IIB SUGRA. It would be very interesting to work this out explicitly; presumably the formalism of will be useful.
Given that the Chern-Simons term has its origin in the 10 dimensional SUGRA multiplet, it is natural to expect that the quantum corrections to the Chern-Simons action are also confined to the states related to the Kaluza-Klein compactification and we conjecture that this is indeed the case.
Since the trace anomaly and the R-symmetry anomaly are in the same SUSY multiplet, a similar finite $`N`$ correction of -1 should also appear in the trace anomaly. We expect that again the correction will have an explanation entirely in terms of the quantum effects related to the doubleton multiplet. It would be interesting to verify this explicitly. This will provide a cross check of our conjecture here.
Loop effects in $`AdS_5`$ supergravity are definitely highly nontrivial to compute. In fact, tree level calculations already call for the invention of many ingenious techniques and methods. Were it not for the topological character of the Chern-Simons action, one would not be able to determine exactly the parity violating term induced at one loop. For other processes involving loops, in addition to the difficulty of evaluating complicated momentum integrals, one also has many more states propagating, since a truncation of states to the 5 dimensional supergravity multiplet is not enough. It would be interesting to find other quantities that can be calculated at one or more loops in order to provide other tests of the AdS/CFT proposal at finite $`N`$.
Since Chern-Simons actions exist generally in gauged supergravity on $`AdS_{p+1}`$ spaces, they all have an interpretation as chiral anomalies in the dual CFT. For example, in the $`AdS_7\times S^4`$ case discussed in , the Chern-Simons action has a coefficient of $`N^3`$ and the $`1/N^2`$ correction would be of order $`N`$. We suspect that this correction would again find an explanation in terms of the doubleton multiplet, which now consists of a tensor gauge field, four spinors and five scalars.
Acknowledgments
We would like to thank L. Alvarez-Gaume, L. Bonora, J.-P. Derendinger, D. Matalliotakis, R. Russo, R. Stora and B. Zumino for discussions. We are particularly grateful to R. Russo for initial collaboration. Support by the Swiss National Science Foundation is gratefully acknowledged.
# W Crv: The shortest-period Algol with non-degenerate components? Based on observations obtained at the David Dunlap Observatory, University of Toronto.
## 1 Introduction
Contact binary stars are common: According to the only currently available unbiased statistics – a by-product of the OGLE microlensing project – as discussed in Rucinski \[1997a\] and Rucinski \[1998b\], the spatial frequency of contact binaries among the main-sequence, galactic-disk stars of spectral types F to K (intrinsic colors $`0.4<VI_C<1.4`$) is about 1/100 to 1/80 (counting contact binaries as single objects, not as two stars). Most of them have orbital periods within $`0.25<P<0.7`$ days, and they are very rare for $`P>1.31.5`$ days \[1998a\]. These properties, as well as the spatial distribution extending all the way to the galactic bulge, with moderately large $`z`$ distances from the galactic plane, and the kinematic properties suggest an Old Disk population of Turn-Off-Point binaries, i.e. a population characterized by conditions conducive to rapid synchronization and formation of contact systems from close, but detached, binaries. The contact binaries are less common in open clusters which are younger than the galactic disk \[1998b\], a property indicating that they form over time of a few Gyrs. It is obviously of great interest to identify binaries which are related to, or precede the contact system stage, as the relative numbers would give us information on durations of the pre- and in-contact stages.
Lucy and Lucy & Wilson were the first to point out the observational importance of contact systems with unequally deep eclipses as possible exemplification of binaries which are to become contact systems or are in the “broken-contact” phase of the theoretically predicted Thermal Relaxation Oscillation (TRO) evolution of contact binary stars, as discussed by Lucy , Flannery and Robertson & Eggleton . Lucy & Wilson called such contact systems the B-type – as contrasted to the previously recognized W-type and A-type contact systems – because of the light curves resembling those of the $`\beta `$ Lyrae-type binaries. While the A-type are the closest to the theoretical model of contact binaries with perfect energy exchange and temperature equalization, the W-type show relatively small (but still unexplained) deviations in the sense that less-massive components have slightly higher surface brightnesses (or temperatures). Systems of the B-type introduced by Lucy & Wilson show large deviations from the contact model in that more massive components are hotter than predicted by the contact model. Thus, the energy transfer is inhibited or absent and the components of the B-type systems behave more like independent (or thermally de-coupled) ones. While light-curve-synthesis solutions suggest good geometrical contact, it has been suggested that these may be semi-detached binaries with hotter, presumably more-massive components filling their Roche lobes (we will call these SH following Eggleton ).
The same OGLE statistics that gave indications of the very high spatial frequency of contact binaries suggests that short-period binaries which simultaneously are in contact and show unequally-deep eclipses are relatively rare in space: Among 98 contact systems in the volume limited sample, only 2 have unequally deep minima indicating components of different effective temperatures \[1997b\]. Both of these systems (called there “poor-thermal-contact” or “PTC” systems, but which could be as well called B-type contact systems) have periods longer than 0.37 day and both show the first maximum (after the deeper eclipse) as the relatively higher of the two maxima. This type of asymmetry is dominant in the spatially much larger (magnitude limited) sample of systems available in the OGLE survey. As already pointed out by Lucy & Wilson , this sense of asymmetry can be explained most easily as a manifestation of mass-transfer from the more-massive to the less-massive component. We add here that this can happen also in a non-contact SH system, with the continuum light emission from the interaction volume between stars contributing to the strong curvature of the light-curve maxima and mimicking the photometric effects of the tidally-elongated (contact) structure. Exactly this type of asymmetry is observed in a system which is absolutely crucial in the present context, V361 Lyr; it has been studied by Kałużny and Kałużny , and later convincingly shown by Hilditch et al. to be a semi-detached binary with matter flowing from the more massive to the less-massive component. The light curve asymmetry in the case of V361 Lyr is particularly large and stable. A similar asymmetry and somewhat similar mass-transfer effects (albeit involving much more massive components) are observed in the early-type system SV Cen where we have direct evidence of a tremendous mass-transfer in a very large period change.
The subject of this paper, the close binary W Crv (GSC 05525–00352, BD$`12`$ 3565) is a relatively bright ($`V=11.1`$, $`BV=0.66`$) system with the orbital period of 0.388 day. For a long time, this was the short-period record holder among systems which appear to be in good geometrical contact, yet which show strongly unequally-deep eclipses indicating poor thermal contact. It was one of the systems exemplifying the definition of contact systems of the B-type by Lucy & Wilson , although most often its type of variability has been characterized as EB or $`\beta `$ Lyrae-type. A system photometrically similar to W Crv with the period of 0.37 days, #3.012, has been identified in the OGLE sample \[1997b\], but it is too faint for spectroscopic studies.
Our radial velocity data which we describe in this paper are the first spectroscopic results for W Crv. Thus, it would be natural to combine them with the previous photometric studies. However, we will claim below that W Crv is more complex than the current light-curve synthesis codes can handle. The previous analyses of the system, without any spectroscopic constraints on the mass-ratio ($`q`$), encountered severe difficulties. A recent extensive study of several light curves of W Crv by Odell , solely based on photometric data, found that the mass-ratio was practically indeterminable ($`0.5<q<2`$), admitting solutions ranging between the Algol systems (SC, for semi-detached with the cool, lower-mass component filling its Roche lobe) on one hand and all possible configurations which are conventionally used to explain the B-type light curves (SH, i.e. the broken-contact or pre-contact semi-detached systems as well as poor-thermal-contact systems) on the other hand. A value of $`q=0.9`$ and the more massive component being eclipsed at primary minimum were assumed by Odell mostly on the basis of plausibility arguments.
For a comprehensive summary of the theoretical issues related to pre- and in-contact evolution, the reader is referred to the review of Eggleton ; observational data for B-type systems similar to W Crv were collected and discussed in a five-part series by Kałużny, concluded with Kałużny , and in studies by Hilditch & King , Hilditch et al. and Hilditch .
## 2 Radial velocity observations
The radial velocity observations of W Crv were obtained in February – April 1997 at the David Dunlap Observatory, University of Toronto, using the 1.88 metre telescope and a Cassegrain spectrograph. The spectral region of 210 Å centered on 5185 Å was observed at the spectral scale of 0.2 Å/pixel or 12 km s<sup>-1</sup>/pixel. The entrance slit of the spectrograph of 1.8 arcsec on the sky was projected into about 3.5 pixels or 42 km s<sup>-1</sup>. The exposure times were typically 10 to 15 minutes. The radial velocity data are listed in Table 1 and are shown graphically in Figure 1. The component velocities have been determined by fitting Gaussian curves to peaks in the broadening function obtained through a de-convolution process, as described in Lu & Rucinski . The mean standard deviations from the sine-curve variations are 7.7 km s<sup>-1</sup> for the primary (more-massive, subscript 1) component and 17.2 km s<sup>-1</sup> for the secondary (less-massive, subscript 2) component. These deviations give the upper limits to the measurement uncertainties because they contain the deviations of the component velocities from the simplified model of circular orbits without any proximity effects (i.e. without allowance for non-coinciding photometric and dynamic centres of the components).
The individual observations as well as the observed minus calculated $`(OC)`$ deviations from the sine-curve fits to radial velocities of individual components are given in Table 2. When finding the parameters of the fits, we assumed only the value of the period, following Odell , and determined the mean velocity $`V_0`$, the two amplitudes $`K_1`$ and $`K_2`$ as well as the moment of the primary minimum $`T_0`$. The remaining quantities in that table have been derived from the amplitudes $`K_i`$. The errors of the parameters have been determined by a bootstrap experiment based on 10,000 solutions with randomly selected observations with repetitions.
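The sine-curve fitting and the bootstrap error estimation described above can be sketched as follows. This is our illustration only: the velocities and amplitudes are synthetic stand-ins, not the Table 1 data, and for simplicity the phase zero-point $`T_0`$ is held fixed rather than fitted together with $`V_0`$ and $`K`$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic single-component data (NOT the Table 1 measurements):
# circular-orbit model V(phi) = V0 + K sin(2 pi phi) plus Gaussian noise.
V0_true, K_true = -10.0, 140.0                     # km/s, illustrative only
phi = rng.uniform(0.0, 1.0, 40)
v = V0_true + K_true * np.sin(2 * np.pi * phi) + rng.normal(0.0, 8.0, phi.size)

def fit_circular(phi, v):
    """Linear least squares for V0 and K at a fixed phase zero-point."""
    A = np.column_stack([np.ones_like(phi), np.sin(2 * np.pi * phi)])
    (v0, k), *_ = np.linalg.lstsq(A, v, rcond=None)
    return v0, k

# Bootstrap with repetitions, as in the text (10,000 resamples there).
boot = []
for _ in range(2000):
    idx = rng.integers(0, phi.size, phi.size)
    boot.append(fit_circular(phi[idx], v[idx]))
boot = np.array(boot)
k_err = boot[:, 1].std()
print(f"K = {boot[:, 1].mean():.1f} +/- {k_err:.1f} km/s")
```

In the actual analysis each component is fitted separately, and the scatter of the bootstrap solutions plays the role of the quoted parameter errors.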
Among the spectroscopic elements in Table 2, the mass-ratio, $`q=0.682\pm 0.016`$, is the most important datum for proper interpretation of the light curves. Without external information on the mass-ratio, strong inter-parametric correlations in the light-curve analyses are known to frequently produce entirely wrong solutions (except for cases of total eclipses).
Before attempting a combined solution, we note that the spectroscopic data, as given in Table 2, describe the following system: The more-massive component is eclipsed in the deeper eclipse and hence is the hotter of the two. Judging by the relative depths of the eclipses, and noting the small light contribution of the secondary component (even if it fills its Roche lobe), we estimate – on the basis of the systemic colour at light maxima $`(BV)=0.66`$ – that the effective temperatures of the components are approximately 5700K and 4900K. The mass of the primary component is $`M_1\mathrm{sin}^3i=1.00M_{}`$, so that the primary is apparently a solar-type star, and the orbital inclination cannot be far from $`i=90^{}`$, although not exactly so as total eclipses are not observed. Obviously, the spectroscopic data cannot provide any constraint on the degree of contact in the system, i.e. whether it is a contact system with poor thermal contact or a semi-detached configuration with one of the components filling the Roche lobe or perhaps even a detached binary. There are no spectroscopic indications of any mass-transfer either, although – with the mutual proximity of components – one would not expect such obvious signatures of this process as a stream or an accretion disk; besides, the spectral region around 5185 Å would not normally show them in any case. We must seek for constraints on the system geometry in the light curve and its variations.
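For readers wishing to check the scale of these numbers, the standard spectroscopic relations $`q=K_1/K_2`$ and $`M_1\mathrm{sin}^3i=P(K_1+K_2)^2K_2/(2\pi G)`$ can be evaluated directly. The semi-amplitudes below are illustrative values chosen to reproduce $`q0.68`$ and a solar-mass primary, not the measured Table 2 entries.

```python
import math

G = 6.674e-11                     # m^3 kg^-1 s^-2
M_SUN = 1.989e30                  # kg
P = 0.388080834 * 86400.0         # orbital period in seconds

# Illustrative semi-amplitudes (NOT the Table 2 values), chosen so that
# K1/K2 is close to the measured mass ratio q = 0.682.
K1, K2 = 140.0e3, 205.0e3         # m/s

q = K1 / K2                                        # mass ratio M2/M1
M1_sin3i = P * (K1 + K2) ** 2 * K2 / (2 * math.pi * G)
M2_sin3i = q * M1_sin3i
print(f"q = {q:.3f}, M1 sin^3 i = {M1_sin3i / M_SUN:.2f} Msun")
```

With amplitudes of this order one indeed recovers $`M_1\mathrm{sin}^3i1M_{}`$, i.e. a solar-type primary, as stated in the text.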
## 3 Attempts at a combined light curve and radial velocity solution
Four light curves discussed by Odell are currently available: the first from 1966 was obtained by Dycus , the remaining three in 1981, 1988 and 1993 by Odell. The light curves were obtained with the same comparison star, permitting direct comparison of the light curves. The large seasonal variations of the light curves were interpreted by Odell by star spots. We do not support the spot hypothesis, and point out a curious property: A comparison of the seasonal light curves (Figure 2) indicates that all changes take place at light maxima and during the secondary eclipse when the cooler component is behind the hotter one, but that the primary eclipse is surprisingly similar in all four curves. This constancy of the primary-eclipse shape remains irrespective of whether one considers the intensity or magnitude (relative intensity) units. We feel that we have here a strong indication that mass-exchange and accretion processes are operating between the stars. These processes would produce large areas of hot plasma, most probably on the inner face of the less-massive secondary component which is invisible during the primary minima. One can of course contrive a scenario involving dark spots appearing in certain areas, but never appearing on the outer side of the less-massive component, but the dark-spot hypothesis seems to be the most artificial of all possibilities. We note that an argument of the diminished brightness being accompanied by a redder colour is a weak one, as such a correlation is expected when plasma temperature effects are involved, irrespective of whether the spots are cool or hot.
With strong mass-transfer effects modifying its light curve, W Crv is not a typical contact system. In this situation, a blind application of light-curve synthesis codes could have led us to entirely wrong sets of parameters. For that reason, we did not attempt to obtain a light-curve solution of the system and used the popular light-curve synthesis program BinMak2 (as described by Bradstreet and Wilson ) to explore reasonable ranges of parameters in different geometrical configurations.
Attempts at conventional light-curve synthesis solutions of W Crv encounter several problems. First of all, the large amplitudes at both minima totally exclude a detached configuration. At least one of the components, or possibly both, contributes to the strong ellipticity of the light curve, which would not be surprising in view of the short orbital period and the little space available for expansion of components in the system. The system must be a contact one or must be described by one of the two possible semi-detached configurations. Arguably, durations of sub-contact phases of evolution are very short and the system should quickly reach a semi-detached stage. Let us call the three possibilities “C” for contact, “SH” for the one with the more massive component filling the Roche lobe and “SC” for an Algol configuration with the less massive component filling its lobe. The shapes of the orbital cross-sections of the components for these three possibilities are shown in Figure 3. We will discuss them in turn, in reference to Figures 4 and 5 which show the most symmetric 1981 light curve and then the four seasonal light curves. The 1981 curve was selected for its relatively symmetric shape, good phase coverage and absence of what was initially thought to be signatures of dark spots.
The parameters of the best-fitting synthesis models for the V-filter 1981 light curve are given in Table 3. The values of equipotentials $`\mathrm{\Omega }_i`$ are defined as in the Wilson-Devinney program and $`r_i`$ are the volume radii in units of the orbital centre separation. The following assumptions on the properties of the components of W Crv were made while generating the synthetic light curves: The limb darkening coefficients $`u_1=0.65`$ and $`u_2=0.75`$, the gravity exponents $`g_1=g_2=0.32`$ and the bolometric albedo $`A_1=A_2=0.5`$. The inner and outer equipotentials for $`q=0.682`$ were $`\mathrm{\Omega }_{in}=3.215`$ and $`\mathrm{\Omega }_{out}=2.821`$. The radii given in Table 3 are the volume radii.
Contact configuration (C): Conventional contact solutions make it abundantly clear that the strong curvature of light maxima and large amplitude of light variations require two properties: a large orbital inclination and a moderately strong contact, at least $`f0.150.25`$. However, the inclination cannot be exactly 90 degrees as then we would see a total eclipse in the secondary minimum. The contact-model fit is far from perfect because of the large seasonal changes, but also indicates a need of a “super-reflection” effect, with increased albedo not only above the currently most popular value of 0.5 for convective envelopes , but even above its physically allowed upper limit of unity. This is clearly visible in Figures 4 and 5 in the branches of the secondary minimum. Cases of the abnormal reflection were already discussed by Lucy & Wilson – including the case of W Crv – and by Kałużny , as indicating some abnormal brightness distribution between the stars (most probably, on the inner side of the secondary component) which could be linked to a mass-exchange phenomenon. Obvious presence of such effects would make the standard, light-curve synthesis model – which hides all energy and mass transfers deep inside the common contact envelope – entirely invalid.
Semi-detached configuration (SH): This is the preferred configuration for B-type systems, either in terms of a system before forming contact or in the broken-contact phase of the TRO oscillations. Photometrically, the model does not provide enough of the light-curve amplitude and curvature at maxima, even with $`i=90^{}`$. The dotted line in Figures 4 and 5 shows this deficiency. However, in this configuration, it would be natural to expect departures from the simple geometric model due to the mass exchange phenomena. The increased reflection effect could be then explained through an area on the secondary component which is directly struck by the in-falling matter from the primary component, while the strong curvature of maxima could be explained by a light contribution from the accretion region which is visible only at the quadratures, as is most likely the case for SV Cen . Although such a configuration cannot be modeled with the existing light-curve synthesis codes, it offers a prediction of the shortening of the orbital period; in Section 4 we present indications that the period is in fact getting longer. It is also consistent with the light curve variations almost entirely limited to the light maxima, with very small seasonal differences between portions at light minima. If the mass-transfer phenomena between the stars increase the light-curve amplitude, then the inclination could take basically any value. For $`i<90`$ degrees, the inner side of the secondary component would be partly visible at secondary minima explaining large light-curve variations at these phases.
Semi-detached Algol configuration (SC): Of the three geometrical models considered here, this one best fits the 1981 light curve in all parts except in the upper branches of the primary minimum which are wider than predicted. The large amplitudes of the light variations find a better explanation in this model than in the SH case. Also, most of the reflection effect can be explained with the conventional value of the albedo by the relatively larger area of the illuminated secondary component. The mass-transfer in this model should lead to a period lengthening, as in other Algols. This is what we apparently see in the times of minima of W Crv (see Section 4). If the light-curve maxima contain a light contribution of mass-transfer and/or accretion effects, then the second maximum (after the secondary minimum) would be expected – on the average – to be more perturbed by the Coriolis-force deflected stream, and this seems to be the case for W Crv (see Figure 2). Within the SC hypothesis, only one of the two components, the secondary, would be abnormal (oversized relative to the main-sequence relation, see Tables 2 and 3), whereas the C and SH models predict mass-radius inconsistencies for both components. Thus, we feel that all the current data suggest that the short-period Algol configuration is the correct explanation for W Crv. The major problem, however, is with the theoretical explanation for such a configuration: There is simply no place for Algols with periods as short as 0.388 days within the present theories. We return to this problem in Section 5.
## 4 Period changes
Although known for almost 65 years, W Crv has not been extensively observed for moments of minima. Practically all extant data have been presented by Odell . Dr. Odell kindly sent us new, unpublished data and corrections to a few data points listed in Table 1 of his paper. These are given in Table 4. We have added to these the moment of minimum inferred from our new spectroscopic determination of $`T_0`$ (see Table 2). In what follows, we will use the ephemeris of Odell: $`JD(\mathrm{min})=2427861.3635+0.388080834\times E`$. The observed minus calculated $`(OC)`$ deviations from Odell’s ephemeris are shown in Figure 6. The moments of secondary minima, which are based on shallower eclipses with stronger light-curve perturbations, are marked in the figure by open circles. Our spectroscopic result gives a significant, positive deviation of $`(OC)=+0.0093\pm 0.0015`$ days, in agreement with the newest data of Odell.
The available times-of-minima contain information about orbital period changes that have taken place over the 65 years. Disregarding presumably random and much smaller shifts in the eclipse centres caused by stellar-surface perturbations (whether we call them spots or mass-transfer affected areas), the observed deviations from the linear elements of Odell in Figure 6 can be interpreted as consisting of at least two straight segments or as forming a parabola. We do not consider a possibility that the discoverer of W Crv, Tsesevich , committed a gross error in the timing of the minima because he was one of the most experienced observers of variable stars ever. In W UMa-type systems, the abrupt changes of the type leading to the straight-segmented $`(OC)`$ diagrams take place in intervals of typically years; these changes may have some relation to the magnetic-activity cycles . They are very difficult to handle as they require very dense eclipse-timing coverage; such a coverage is not available for W Crv. It is easier to analyze the $`(OC)`$ deviations for a global quadratic trend using the expression: $`(OC)=a_0+a_1\times E+a_2\times E^2`$. Because of the poor distribution of data points over time, linear least-squares would give unreliable error estimates for the coefficients $`a_i`$. In view of this difficulty, the uncertainties have been evaluated using the bootstrap-sampling technique and are listed in Table 5 in terms of the median values at the 68 percent (for Gaussian distributions, $`\pm 1`$-sigma) and 95 percent ($`\pm 2`$-sigma) confidence levels. The bootstrap technique reveals a strongly non-Gaussian distribution of the uncertainties, as shown for the coefficient $`a_2`$ in the insert to Figure 6.
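The quadratic fit with bootstrap-sampled uncertainties can be sketched as below. The epochs and $`(OC)`$ values are synthetic stand-ins (not the Table 4 timings), with an assumed quadratic coefficient of the right order of magnitude for W Crv.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic (O-C) data (NOT the Table 4 timings): ~65 yr at P ~ 0.388 d
# means tens of thousands of cycles; add a small parabola plus scatter.
E = np.sort(rng.uniform(0.0, 6.0e4, 30))           # epoch numbers
a2_true = 7.0e-12                                  # days/cycle^2, illustrative
oc = a2_true * E**2 + rng.normal(0.0, 0.003, E.size)   # (O-C) in days

def quad_fit(E, oc):
    """Return (a0, a1, a2) of the fit a0 + a1*E + a2*E^2."""
    return np.polyfit(E, oc, 2)[::-1]

# Bootstrap-sampling of the coefficients, as done in the text.
samples = np.array(
    [quad_fit(E[i], oc[i])
     for i in rng.integers(0, E.size, (5000, E.size))]
)
a2_samples = samples[:, 2]
lo, med, hi = np.percentile(a2_samples, [16, 50, 84])
print(f"a2 = {med:.2e} (+{hi - med:.1e}/-{med - lo:.1e}) d/cycle^2")
```

The percentiles of the bootstrap distribution play the role of the confidence intervals quoted in Table 5, without assuming Gaussian errors.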
The quadratic coefficient $`a_2`$ is proportional to the second derivative of the times of minima and hence to the period change through $`dP/dt=2a_2/P`$. For comparison with the theory of stellar evolution, it is convenient to consider the time-scale of the period change given by $`\tau =P/(dP/dt)=P^2/2a_2`$. The values of $`\tau `$ are given in the last column of Table 5. The data given in Table 5 indicate that the orbital period is becoming longer with a characteristic time-scale of $`(1.5-5.3)\times 10^7`$ years, the range being based on the conservative 95 percent confidence level. The sense of the period change is somewhat unexpected as it indicates – for the relative masses that we determined – that the mass transfer is from the less-massive component to the more-massive component, i.e. as in Algols (the configuration designated as SC). One would normally expect the other semi-detached configuration (SH) for the pre-contact or broken-contact phases of the TRO cycles. The period-lengthening argument for the Algol (SC) configuration is a stronger one than any based on the light curve analysis, which seems to be hopelessly difficult for W Crv. The time-scale is exactly in the range expected for the Kelvin-Helmholtz or thermal time-scale evolution of solar-mass stars, $`\tau _{KH}=3.1\times 10^7(M/M_{\odot })^2(R/R_{\odot })^{-1}(L/L_{\odot })^{-1}`$ years, which is characteristic for systems in the rapid stage of mass exchange such as $`\beta `$ Lyrae or SV Cen.
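The conversion from a fitted $`a_2`$ to the time-scale $`\tau `$ is a one-liner; the $`a_2`$ value below is hypothetical (the measured values are in Table 5) and is chosen only to land inside the quoted range:

```python
# Period-change timescale from a quadratic O-C coefficient:
# dP/dt = 2*a2/P (dimensionless), tau = P/(dP/dt) = P^2/(2*a2).
P = 0.388080834                 # orbital period, days (Odell's ephemeris)
a2 = 7.0e-12                    # days/cycle^2 -- hypothetical illustrative value

dPdt = 2.0 * a2 / P             # dimensionless period derivative
tau_years = P**2 / (2.0 * a2) / 365.25   # about 2.9e7 yr, inside (1.5-5.3)e7

# Kelvin-Helmholtz time-scale for a solar-type star (M = R = L = 1 solar units)
M = R = L = 1.0
tau_KH = 3.1e7 * M**2 / (R * L)          # years
```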
## 5 Discussion and conclusions
The present paper contains results of spectroscopic observations confirming the assumption of Odell that the more massive, hotter star is eclipsed in the primary minimum. However, this information and the value of the mass-ratio are not sufficient to understand the exact nature of the system, mostly because of the strong light curve variability which may be interpreted as an indication of mass-exchange and accretion phenomena producing strong deviations from the standard binary-star model. We suggest – on the basis of the absence of light-curve perturbations within the primary minima – that the system is not a contact binary with components which mysteriously have different temperatures, but rather a semi-detached system. Furthermore, we suggest that W Crv, similarly to systems like V361 Lyr or SV Cen, has a light-producing volume between the stars or – more likely – on the inner face of the secondary component. In the case of V361 Lyr, there is apparently enough space for the stream of matter to be deflected by the Coriolis force and strike the less-massive component on the side; in SV Cen, the photometric effects of a strong contact are probably entirely due to the additional light visible only in the orbital quadratures. In contrast to V361 Lyr and SV Cen, the mass-transfer phenomena in W Crv are visible at all orbital phases except at primary minima, that is when the inner side of the cooler component is directed away from the observer.
The general considerations of the light-curve fits in the presence of large brightness perturbations make both semi-detached configurations almost equally likely, but the semi-detached configuration of the Algol type for W Crv, i.e. the one with the less-massive, cooler component filling the Roche lobe (SC), is preferable for two reasons: (1) it is simpler, as it leads to only one component deviating from the main-sequence relation (since the inclination must be close to 90 degrees, the secondary would have $`0.92R_{\odot }`$ and $`0.68M_{\odot }`$, whereas the primary would be a solar-type star with $`1.01R_{\odot }`$ and $`1.00M_{\odot }`$), and (2) it can explain the observed lengthening of the orbital period on the thermal time-scale. In this way, W Crv joins a group of well-known stars – such as SV Cen, V361 Lyr or the famous $`\beta `$ Lyrae – where large, systematic period changes are actually the final proof of our hypothesis of the Algol configuration. W Crv would then be the shortest-period (0.388 days) known Algol consisting of normal (non-degenerate) components. With such a short period, the system presents a difficulty to the current theories describing formation of low-mass Algols, as reviewed by Yungelson et al. , and of binaries related to contact systems, as reviewed by Eggleton . One can only note that Sarna & Fedorova , who considered formation of solar-type contact binaries through the Case A mass-exchange mechanism, pointed out the importance of the initial mass-ratio: for a mass-ratio sufficiently close to unity, the rapid (hydrodynamical) mass exchange can be avoided and the system may evolve on the thermal time-scale. Although the mass-reversal has not been modeled, it is likely that W Crv is the product of such a process.
## Acknowledgments
We thank Dr. Andy Odell for providing the light curve and time-of-minima data and for extensive correspondence and numerous suggestions, and Drs. Bohdan Paczyński and Janusz Kałużny for a critical reading of the original version of the paper and several suggestions that improved the presentation of the paper.
# Coupled-channels analysis of the 16O+208Pb fusion barrier distribution
## I Introduction
Precise fusion cross-sections have been measured for many reactions, involving nuclei which exhibit different collective degrees of freedom. Their excitations, through coupling to the relative motion of the colliding nuclei, cause a splitting in energy of the single fusion barrier resulting in a distribution of barriers, which drastically alters the fusion probability from its value calculated assuming quantal tunnelling through a single barrier. It was shown by Rowley et al. that, under certain approximations, the distribution in energy of a discrete spectrum of barriers could be obtained from precise fusion cross-sections $`\sigma `$ by taking the second derivative with respect to the center-of-mass energy $`E_{\text{c.m.}}`$ of the quantity $`(E_{\text{c.m.}}\sigma )`$. When the effects of quantal tunnelling are considered, $`d^2(E_{\text{c.m.}}\sigma )/dE_{\text{c.m.}}^2`$ becomes continuous, and each barrier is smoothed in energy with a full width at half maximum (FWHM) of $`0.56\hbar \omega `$, where $`\hbar \omega `$ is the barrier curvature. The difference between a more realistic calculation of $`d^2(E_{\text{c.m.}}\sigma )/dE_{\text{c.m.}}^2`$ (where the angular momentum dependence of the curvature and barrier radius is taken into account) and the smoothed barrier distribution is small , and so it is convenient to refer to $`d^2(E_{\text{c.m.}}\sigma )/dE_{\text{c.m.}}^2`$ as the fusion barrier distribution.
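These properties of the construction can be verified numerically on a schematic excitation function. The sketch below (illustrative, not the measured data) assumes Wong's single-barrier approximation for $`\sigma (E)`$, with the single-barrier parameters quoted later in the text; the three-point second difference of $`(E\sigma )`$ then peaks at the barrier energy with a FWHM close to $`0.56\hbar \omega `$:

```python
import numpy as np

# Schematic single-barrier excitation function (Wong's approximation); the
# parameters are the single-barrier fit values quoted in Sec. III A 1.
B0, hw, RB = 74.5, 3.07, 11.3                 # MeV, MeV, fm

def sigma_wong(E):
    """Fusion cross-section in fm^2 (1 fm^2 = 10 mb)."""
    return (hw * RB**2 / (2.0 * E)) * np.log1p(np.exp(2.0 * np.pi * (E - B0) / hw))

dE = 0.05                                     # fine grid: smoothing is purely quantal
E = np.arange(65.0, 85.0, dE)
Esig = E * sigma_wong(E)

# Three-point second difference, the same construction applied to the data
d2 = (Esig[2:] - 2.0 * Esig[1:-1] + Esig[:-2]) / dE**2
Ec = E[1:-1]

peak = Ec[np.argmax(d2)]                      # peak position, should equal B0
above = Ec[d2 > 0.5 * d2.max()]
fwhm = above[-1] - above[0]                   # should be close to 0.56 * hw
```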
The fusion barrier distribution can be a very sensitive ‘gauge’ of the dominant collective modes excited during the collision . Its shape is related to the nuclear structure of the reactants. Barrier distributions have been measured for nuclei with static deformations , for nuclei where vibrational degrees of freedom dominate , in systems where the effects of transfer channels and multi-phonon excitations are important, and where the influence of the projectile excitation is prominent .
The precise fusion data have stimulated advances in the quantitative application of the coupled-channels (CC) description of fusion, and many experimental barrier distributions have been well reproduced with various degrees of refinement of this model. The CC description is expected to be simpler for systems involving the fusion of closed-shell nuclei due to the presence of relatively few low-lying collective states. An example is the <sup>16</sup>O+<sup>144</sup>Sm system, where a good description of the experimental barrier distribution was obtained with a simplified CC model . This description was somewhat fortuitous in view of the approximations used in this model. An improvement in the description of the barrier distribution was achieved with more exact CC calculations which correctly treated the excitation energies and the phonon character of the coupled states.
Given the current level of knowledge of the theoretical description of heavy-ion fusion, and the success of calculations in reproducing the shape of the measured barrier distribution for <sup>16</sup>O on <sup>144</sup>Sm, it might be expected that present models should be able to describe the fusion of <sup>16</sup>O with the doubly-magic nucleus <sup>208</sup>Pb. The <sup>16</sup>O + <sup>208</sup>Pb system is also one of the few cases where there is existing knowledge of important particle transfer channels. The fusion barrier distribution for the <sup>16</sup>O + <sup>208</sup>Pb reaction has been measured previously ; however, it was not possible to obtain an adequate theoretical description of its shape. This could have been due to shortcomings in the experiment or in the simplified CC analysis used in calculating the theoretical barrier distribution. Improvements in the available techniques for precise fission cross-section measurements, including the use of fragment-fragment coincidences, provided the motivation to re-measure the fusion excitation function for the <sup>16</sup>O + <sup>208</sup>Pb reaction.
The purpose of the current work was to find the cause of the previous disagreement between theory and data by comparing the newly measured barrier distribution with more exact CC calculations, and to identify the dominant couplings in the fusion of <sup>16</sup>O+<sup>208</sup>Pb. The coupled-channels analysis of the new fusion data has proved to be more difficult than expected, and a complete description of the data has not yet been obtained.
## II Experimental Results
The re-measurement of the fission excitation function for <sup>16</sup>O + <sup>208</sup>Pb was performed at the Australian National University using <sup>16</sup>O beams from the 14UD Pelletron accelerator. The beams were pulsed with bursts of $`1`$ ns FWHM, separated by $`106.6`$ ns. Beam energies used were in the range $`75`$–$`118`$ MeV, in increments of $`0.6`$ MeV up to $`88`$ MeV. The absolute beam energy was defined to better than $`0.05`$ MeV and the relative beam energy to better than a few keV . The target was $`40`$–$`45`$ $`\mu `$g cm<sup>-2</sup> of <sup>208</sup>PbO deposited on a backing of $`10\mu `$g cm<sup>-2</sup> of C. The isotopic purity of the <sup>208</sup>Pb was $`99.0\pm 0.1\%`$. Fission fragments were detected in two of the large-area multiwire proportional counters (MWPCs) of the CUBE detector system. One was positioned in the backward hemisphere covering the scattering angles $`171^{\circ }\ge \theta _{\text{lab}}\ge 94^{\circ }`$, and the other in the forward hemisphere with $`4^{\circ }\le \theta _{\text{lab}}\le 81^{\circ }`$. The fission fragments were identified in an individual detector by their energy loss signal, and the time-of-flight measured relative to the pulsed beam.
In the measurement described in Ref. , only a single MWPC located in the backward hemisphere was used. However, in the present measurement, the front MWPC was operated in coincidence with the back fission detector, and the fission fragments were identified with the time-of-flight in one detector versus the time-of-flight in the other. This allowed good separation between the fission events from the <sup>16</sup>O+<sup>208</sup>Pb reaction and other reactions with the target, which were a problem for the low cross-sections in the earlier measurement. The fission cross-section was measured down to energies where the evaporation residue cross-sections were previously determined . Two silicon surface–barrier detectors, located at $`\pm 22.5^{\circ }`$ to the beam axis, were used to monitor the Rutherford scattering for normalisation of the fission fragment yield. The fission fragment yields in the MWPCs were converted into fission cross-sections as described in Refs. .
The new fission excitation function is shown in Fig. 1(a), together with the results from the previous measurement , as indicated by the open circles in Fig. 1(a). The fusion cross-sections $`\sigma `$ for <sup>16</sup>O + <sup>208</sup>Pb were obtained by summing $`\sigma _{\text{fis}}`$ and the evaporation residue cross-sections published in Ref. , interpolating where necessary. The present data (solid circles) and previously published fusion cross-sections (open circles) are shown in Fig. 1(b). The fusion cross-sections from the new measurement are presented in Table I.
The fusion barrier distribution was obtained by evaluating the point difference formula of Ref. using an energy step of $`\mathrm{\Delta }E_{\text{c.m.}}=1.67`$ MeV. The resulting barrier distribution is shown in Fig. 2 by the solid circles. For comparison, the barrier distribution (open points) in Ref. is reproduced, where each symbol represents one of the three separate passes through the fusion excitation function. In Ref. , the barrier distribution was calculated with $`\mathrm{\Delta }E_{\text{c.m.}}=1.86`$ MeV. The difference in the two step lengths does not have any significant effect on the calculated barrier distributions since they are already smoothed by $`2`$ MeV due to quantum tunnelling effects .
The new data are generally in good agreement with the previous measurement, but give a better defined barrier distribution. This is mainly due to the improved statistics, the clean identification of fission events made possible by operating two detectors in coincidence, and better definition and consistency of the angle between the beam axis and the fission detectors. The slight disagreement between the two barrier distributions can be largely attributed to three errant points in the original excitation function at $`E_{\text{c.m.}}=73.8`$, $`74.3`$ and $`75.2`$ MeV, which differ from the current data by up to $`5\%`$. Since $`d^2(E_{\text{c.m.}}^i\sigma )/dE_{\text{c.m.}}^{i2}`$ at an energy $`E_{\text{c.m.}}^i`$ is evaluated with a three–point difference formula, each wayward cross-section affects a total of three points, that point at $`E_{\text{c.m.}}^i`$, and its two neighbouring points at $`(E_{\text{c.m.}}^i\pm 1.67)`$ MeV. For example, the old cross-section at $`E_{\text{c.m.}}=75.2`$ MeV was high with respect to the new measurement. This means that $`d^2(E_{\text{c.m.}}\sigma )/dE_{\text{c.m.}}^2`$ at $`E_{\text{c.m.}}=75.2`$ MeV is lower than the new barrier distribution, and $`d^2(E_{\text{c.m.}}\sigma )/dE_{\text{c.m.}}^2`$ at both $`E_{\text{c.m.}}=73.3`$ MeV and $`77.1`$ MeV are high (see the encircled points in Fig. 2).
Nevertheless, the general features, such as the height of the main peak, and shape of the two barrier distributions are in good agreement.
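The propagation of a single errant cross-section into three points of the extracted distribution, described above, can be made explicit with a toy excitation function (illustrative numbers only, not the measured data):

```python
import numpy as np

# Toy illustration: one cross-section point raised by 5% perturbs exactly three
# points of the three-point second-difference estimate of d^2(E sigma)/dE^2.
dE = 1.67                                      # MeV, the step used in the text
E = np.arange(70.0, 85.0, dE)
Esig = E * 1000.0 * (1.0 - np.exp(-(E - 68.0) / 4.0))  # smooth stand-in for E*sigma

def second_diff(y):
    return (y[2:] - 2.0 * y[1:-1] + y[:-2]) / dE**2

clean = second_diff(Esig)
bad = Esig.copy()
k = 4
bad[k] *= 1.05                                 # a single 5% errant point
changed = np.nonzero(~np.isclose(second_diff(bad), clean))[0]
# changed -> indices k-2, k-1, k of the derivative array, i.e. the energies
# E[k-1], E[k], E[k+1]: the errant point and its two neighbours at +-1.67 MeV
```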
## III Coupled-channels analysis of the measured fusion barrier distribution
Several ingredients are required for a coupled-channels description of the fusion barrier distribution. Inputs to the model calculations include the nucleus-nucleus potential parameters, the coupling strengths of the vibrational states and their excitation energies. In addition, there are choices to be made regarding various assumptions and approximations used in the solution of the coupled equations.
### A The coupled-channels calculations
#### 1 Nuclear potential parameters
The nuclear potential parameters were determined with consideration of two constraints: (i) fitting the high-energy fusion cross-sections, (ii) choosing a sufficiently deep nuclear potential, which is consistent with the ingoing-wave boundary condition used in the CC calculations. The measured fusion cross-sections at energies above the average barrier were fitted using a single–barrier penetration model, with an energy-independent nuclear potential, Woods-Saxon in form, with
$$V(r)=-V_0/(1+\mathrm{exp}[(r-r_0A_P^{1/3}-r_0A_T^{1/3})/a]),$$
(1)
where $`V_0`$ is the depth, $`r_0`$ is the radius parameter, and $`a`$ is the diffuseness of the nuclear potential. With $`V_0`$ chosen to be $`50`$ MeV, $`r_0`$ and $`a`$ were varied to obtain the best fit to $`\sigma `$. This resulted in the parameters $`V_0=50`$ MeV, $`r_0=1.159`$ fm and $`a=1.005`$ fm, giving an average barrier $`B_0=74.5`$ MeV at a barrier radius of $`R_B=11.3`$ fm with curvature for the average barrier of $`\hbar \omega _0=3.07`$ MeV. The excitation function and fusion barrier distribution associated with these single–barrier (SB) parameters are shown by the dot-dot-dashed lines in Fig. 3(a) and (b), respectively.
The above values for $`V_0`$ and $`r_0`$ could not be used in the CC codes because the potential depth was too shallow, causing high-$`\ell `$ partial waves that should have been absorbed (contributing to the fusion cross-section) to be reflected at the barrier. To ensure that all the ingoing flux was absorbed inside the fusion barrier, a new set of potential parameters was obtained with the diffuseness parameter fixed at $`a=1.005`$ fm, and $`V_0`$ was increased to $`200`$ MeV, compensated by a reduction in $`r_0`$ to $`0.978`$ fm to obtain the same fusion barrier $`B_0=74.5`$ MeV, which occurs at $`R_B=11.5`$ fm with a curvature $`\hbar \omega _0=3.87`$ MeV. By making this adjustment in $`V_0`$, the quality of the fit to the high-energy fusion cross-sections is reduced. However, this is not of concern for the following reasons.
The main aim of this analysis is the reproduction of the shape of the measured barrier distribution, a quantity which is insensitive to small changes in the potential parameters. In comparison, the high-energy fusion cross-sections are very sensitive to the height of the average barrier, and can always be fitted by adjusting the potential parameters. However, since there exists some sensitivity of the calculated high-energy fusion cross-sections to the couplings , the nuclear potential parameters would need to be adjusted for each different coupling scheme if the fit to the high-energy data were to be retained. Rather than re-fitting the high-energy data after each new coupling scheme, the CC calculations were performed without any further adjustment to the bare nuclear potential. This meant that the calculated fusion cross-sections overestimated the data in the high-energy region, see for example the CC calculations in Fig. 3(a). The data in the high-energy region could be re-fitted with a slightly higher average fusion barrier, corresponding to a different set of potential parameters, but this would cause only a shift up in energy of the whole barrier distribution, without any appreciable change in its shape.
The diffuseness parameter obtained from the above procedure is significantly larger than that deduced from elastic scattering measurements , a result common to other fusion analyses . The inconsistency between the diffuseness parameters obtained from fusion and elastic scattering data implies that the potential parameters obtained are specific to the data being fitted. It is also possible that the potential parameters obtained from a fit to the data in the high energy region are not applicable at energies in the barrier region, or below the lowest barrier. In this sense the potential parameters obtained are effective ones, and the true interaction potential remains an uncertainty in these calculations.
The effect of using a smaller diffuseness is shown in Fig. 3(a) and (b), where two calculations are compared, one with $`a=0.65`$ fm and the other with $`a=1.005`$ fm, both with the same average barrier $`B_0=74.5`$ MeV. Couplings to the single phonon states in <sup>208</sup>Pb are included in these CC calculations (see Sec. III B 1). For $`E_{\text{c.m.}}<B_0`$, the cross-section for the calculation with $`a=0.65`$ fm falls less rapidly than the $`a=1.005`$ fm case, since the smaller diffuseness gives a narrower barrier (larger $`\hbar \omega _0`$) and hence a larger barrier penetrability. In the barrier region, a smaller diffuseness reduces the height of the main peak in $`d^2(E_{\text{c.m.}}\sigma )/dE_{\text{c.m.}}^2`$, due to the increase in the width of the tunnelling factor which smooths the barrier distribution (see Sec. III C). These calculations demonstrate the effect on the calculated barrier distribution of the uncertainty in the appropriate choice of the diffuseness parameter. Further experiments are required to address this problem.
#### 2 Approximations used in solving the coupled equations
In the coupled-channels calculations that follow, except for the FRESCO calculations, the no-Coriolis or isocentrifugal approximation was used. This approximation has been shown to be good for heavy-ion fusion reactions. The calculations included coupling to all orders in the deformation parameter for the nuclear coupling matrix. In the past, when making quantitative comparisons with the fusion data, the linear coupling approximation was often used. Here the nuclear coupling potential was expanded with respect to the deformation parameter keeping only the linear term. It was shown that the agreement between the measured and calculated fusion cross-sections was improved with the inclusion of second-order terms. Later, Hagino et al. demonstrated that, for heavy symmetric systems at least, the effect of inclusion of terms higher than second-order in the nuclear coupling potential was as significant as including the second-order term itself. Even though this effect was largest for heavy near-symmetric systems, it was also found to be significant for reactions involving lighter nuclei such as <sup>16</sup>O+<sup>144</sup>Sm.
The linear coupling approximation was retained for the Coulomb coupling potential since inclusion of terms of higher order has been shown to have only a very minor effect on the barrier distribution .
The excitation energies of the vibrational states were treated exactly in these calculations. Consequently, there were no approximations associated with the eigenchannel approach used in simplified CC analyses, such as those present in the code CCFUS .
### B Channel couplings
#### 1 Coupling to single-phonon states in <sup>208</sup>Pb
Both <sup>144</sup>Sm and <sup>208</sup>Pb are spherical, vibrational nuclei with similar low-lying collective states, so it might be expected that the coupling scheme which was successful in the description of the barrier distribution for <sup>16</sup>O + <sup>144</sup>Sm would also provide a good description of the <sup>16</sup>O + <sup>208</sup>Pb reaction. The measured barrier distribution for the <sup>16</sup>O + <sup>144</sup>Sm reaction was well described by coupling to the single-phonon states in <sup>144</sup>Sm , where the dominant channel is the single-octupole phonon state. The analogous calculation for <sup>16</sup>O + <sup>208</sup>Pb is shown by the solid lines in Fig. 4(a) and (b). The calculation was performed with the CC code CCFULL , where fusion is simulated using the ingoing-wave boundary condition. Coupling to the $`3_1^{-}`$ and $`5_1^{-}`$ single-phonon states in <sup>208</sup>Pb was included, with the relevant parameters summarized in Table II. This calculation fails to reproduce the shape of the measured barrier distribution \[see Fig. 4(b)\]. Although the calculation produces a two-peaked structure, mainly due to coupling to the $`3_1^{-}`$ state in <sup>208</sup>Pb, there is still too much strength in the main peak of the theoretical barrier distribution, which implies that more coupling is required.
Additional coupling to other single-phonon states in <sup>208</sup>Pb produced no improvement in the agreement with the measured barrier distribution, due to the relative weakness of these couplings. In relation to the disagreement between theory and the data in Fig. 4(b), an initial impression is that the area of the calculated barrier distribution is larger than that of the measurement. This difference could be caused by a lower fusion yield resulting from a loss of flux due to incomplete fusion. Such an effect was recently observed in the fusion of <sup>9</sup>Be on <sup>208</sup>Pb. However, evaluation of the area under $`d^2(E_{\text{c.m.}}\sigma )/dE_{\text{c.m.}}^2`$, a quantity which should be approximately proportional to the geometric area $`\pi R_B^2`$, indicates that this is not the case. The area under the theoretical barrier distribution represented by the solid line in Fig. 4(b) is $`4227`$ mb, implying a value of $`R_B=11.6`$ fm for the average barrier radius, obtained by simply equating the area with $`\pi R_B^2`$. This compares with the area under the experimental barrier distribution of $`3981`$ mb, implying a radius $`R_B=11.3`$ fm. The difference between the theoretical and experimental areas is only $`6\%`$, $`3.6\%`$ of which is due to use of the larger potential depth, $`V_0=200`$ MeV, which has a radius $`R_B=11.5`$ fm instead of the best fit value of $`R_B=11.3`$ fm for $`V_0=50`$ MeV. Thus, the mismatch between experiment and theory, at the level of $`2`$–$`3\%`$, is not due to incomplete fusion.
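The arithmetic of the area argument above is simple to verify, using $`1`$ mb $`=0.1`$ fm<sup>2</sup>:

```python
import math

# Convert areas under d^2(E sigma)/dE^2 into effective barrier radii via
# area = pi * R_B^2, with 1 mb = 0.1 fm^2.
def radius_from_area(area_mb):
    return math.sqrt(area_mb * 0.1 / math.pi)   # fm

r_theory = radius_from_area(4227.0)   # theoretical area -> about 11.6 fm
r_exp = radius_from_area(3981.0)      # experimental area -> about 11.3 fm
mismatch = (4227.0 - 3981.0) / 3981.0 # about 6%, as quoted
```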
To obtain a successful theoretical description of the <sup>16</sup>O+<sup>208</sup>Pb reaction, a coupling scheme that produces a barrier distribution with a shape corresponding to the measured one is required. Since the areas under the experimental and theoretical barrier distributions are in good agreement, the height of the main barrier in the distribution will be used as an indicator of the ability of theory to reproduce the overall shape of the experimental barrier distribution.
#### 2 The effects of coupling to particle transfers
Attempts have been made previously to ‘explain’ qualitatively the deviations between theory and experiment as being due to the neglect of transfer couplings. Such an approach has been taken because of the difficulty of treating the transfer process in a realistic way, and the lack of knowledge of transfer coupling strengths. However, in <sup>16</sup>O + <sup>208</sup>Pb, some of the important transfer coupling strengths have been measured. To ascertain the significance of the effects of transfer couplings on fusion, both the transfer and inelastic channels (with coupling to all orders) should be considered simultaneously in the CC calculation. The effects of particle transfers on the fusion cross-sections and spin distributions for <sup>16</sup>O + <sup>208</sup>Pb have been calculated by Thompson et al. at 8 energies between $`E_{\text{lab}}=78`$ and $`102`$ MeV, using the coupled-channels code FRESCO . Here, those calculations have been repeated, with a minor modification to the nuclear potential in the entrance-channel mass partition, with coupling to all orders in the nuclear potential, and with smaller energy steps in order to obtain the barrier distribution. This was necessary since it was not possible to treat transfer correctly using the code CCFULL. The details of this calculation are discussed below.
Before proceeding with the transfer calculations, the results of the two coupled-channels codes used in this work were compared. The comparison was made with a FRESCO calculation using parameters identical to the single-phonon calculation described in Sec. III B 1. The FRESCO calculation was performed with version FRXX, which includes a new option allowing coupling to all orders in the nuclear coupling potential, as in the calculation described in Sec. III B 1. The barrier distribution from FRESCO is shown by the dashed line in Fig. 4(b). There is very good agreement between it and the barrier distribution calculated using CCFULL \[solid line in Fig. 4(b)\]. The small difference between the solid and dashed lines in Fig. 4(b) may be due to the isocentrifugal approximation which was used in the CCFULL calculation.
Having established the agreement between the above two calculations for inelastic couplings, the effects of coupling to transfer channels were examined with FRESCO. In addition to the inelastic couplings, the following three transfer couplings were included, which are those considered in the previous analysis : the single-neutron pickup reaction (<sup>16</sup>O,<sup>17</sup>O) with $`Q=-3.2`$ MeV, the single-proton stripping reaction (<sup>16</sup>O,<sup>15</sup>N) with $`Q=-8.3`$ MeV and the $`\alpha `$-stripping reaction (<sup>16</sup>O,<sup>12</sup>C), where $`Q=-20`$ MeV. The spectroscopic factors for the single-nucleon transfers were taken from Ref. , and in the case of the $`\alpha `$-stripping couplings, were set to reproduce the measured transfer yield. Coupling to excited states in <sup>17</sup>O, <sup>15</sup>N, <sup>207</sup>Pb and <sup>209</sup>Bi was included as described in Ref. . The real and imaginary potential parameters for all three transfer partitions were $`V_0=78.28`$ MeV, $`r_0=1.215`$ fm, $`a=0.65`$ fm and $`V_i=10`$ MeV, $`r_{0i}=1.00`$ fm, $`a_i=0.40`$ fm, respectively.
The barrier distribution from the FRESCO calculation including transfer is shown by the dot-dot-dashed line in Fig. 4(b). Compared to the case with no transfer, the main peak of the barrier distribution is shifted down in energy and its height is reduced, whilst the second peak in the distribution is smoothed in energy. Of the three transfer couplings considered in this calculation, the neutron–pickup transfer has the largest effect on the barrier distribution, since it is the most strongly populated transfer. Using a set of potential parameters for the <sup>17</sup>O+<sup>207</sup>Pb mass partition different to those quoted above, with a real diffuseness of $`a=1.005`$ fm, had only a small effect on the shape of the barrier distribution. The $`0.5`$ MeV shift downwards in energy of the barrier distribution is not problematic, since there is freedom to renormalise the bare potential to a value which will shift the theoretical barrier distribution back to its original position. Of importance here is the ability to reproduce the shape of the barrier distribution, and although the coupling to the transfer channels reduces the height of the main peak in the barrier distribution, it is not sufficient, implying that further couplings are required.
Additional transfer channels, which have been neglected in the present calculation, are unlikely to significantly improve the agreement, since the above three transfer couplings represent the most strongly populated transfers. The effects of additional transfers on the fusion cross-section were investigated in Ref. , where it was found that the $`\alpha `$\- and triton-pickup transfers had no effect on $`\sigma `$. The 2-neutron pickup, with $`Q=-1.9`$ MeV, did affect the fusion cross-section, although the increase in $`\sigma `$ was at most a factor of $`1.11`$ above the calculation without this transfer, at $`E_{\text{lab}}=78`$ MeV. This compares with an enhancement in $`\sigma `$ at the same energy by a factor of $`2.5`$ for the calculation including the neutron-pickup, proton-stripping and $`\alpha `$-stripping couplings over the calculation without these transfer couplings.
#### 3 The effects of coupling to the $`3_1^{-}`$ state in <sup>16</sup>O
The treatment of projectile excitations in CC analyses deserves some comment. The measured barrier distributions for the reactions of <sup>16</sup>O with various isotopes of samarium showed no specific features associated with excitation of the octupole state in <sup>16</sup>O. It was shown in Ref. that coupling to the $`3_1^{-}`$ state in <sup>16</sup>O at $`6.13`$ MeV using the simplified CC code CCMOD , which uses the linear coupling approximation, resulted in a deterioration in the agreement with the measured barrier distribution. This effect is related to the neglect of the higher-order terms in the CC calculations . Since the transition strength of the $`3_1^{-}`$ state in <sup>16</sup>O is large, higher-order terms should be included in the expression for the nuclear coupling potential. When the $`3_1^{-}`$ state in <sup>16</sup>O was included with coupling to all orders in the nuclear potential, the theoretical barrier distribution was essentially restored to its shape before the inclusion of the projectile coupling . However, the whole barrier distribution was shifted down in energy by a few MeV. This shift has been explained in terms of the adiabaticity of the projectile excitation. When the excitation energy of a state is large, the timescale of the intrinsic motion is short compared to the tunnelling time, allowing the projectile to respond to the nuclear force in such a way as to always be in the lowest energy configuration. This means that coupling to states like the $`3_1^{-}`$ state in <sup>16</sup>O only leads to a shift in the average fusion barrier, and so is equivalent to a renormalisation of the effective potential.
In order to confirm the above result for the <sup>16</sup>O+<sup>208</sup>Pb reaction, calculations were performed with coupling to the $`3_1^{}`$ state in <sup>16</sup>O at $`6.13`$ MeV using the code CCFULL. This coupling produced no better agreement with the shape of the measured barrier distribution, causing only a shift in energy of the whole barrier distribution without an appreciable change in its overall shape. An example of this effect is shown in Fig. 7(b).
In summary, the calculations described above, with a single-phonon plus transfer coupling scheme, were unable to describe the measured barrier distribution. In the next Section, the effects of a larger coupling space are explored. The following calculations result mostly from the code CCFULL. Due to the long computational time involved, FRESCO was used only to estimate the additional effects of coupling to transfer channels.
#### 4 Coupling to the 2-phonon states in <sup>208</sup>Pb
In the doubly magic nucleus <sup>208</sup>Pb, the energy of the first $`3^{}`$ state is at $`2.614`$ MeV and is interpreted as a collective octupole state because of its large $`B(E3)`$ value. In the harmonic vibrational model, the 2-phonon state would be expected at an energy twice that of the single-phonon excitation. Hence in <sup>208</sup>Pb, the 2-phonon state $`[3_1^{}3_1^{}]`$, consisting of the $`0^+`$, $`2^+`$, $`4^+`$, and $`6^+`$ quadruplet of states, is expected at the unperturbed energy of $`5.228`$ MeV. There have been a number of searches for members of the 2-phonon quadruplet, including a recent $`(n,n^{}\gamma )`$ measurement which found evidence for the existence of the $`0^+`$ state at $`5.241`$ MeV. A more recent measurement using Coulomb excitation did not identify any new state around $`5.2`$ MeV, but was able to extract the $`B(E3;3_1^{}\rightarrow 6_1^+)`$ value for the lowest known $`6^+`$ state at $`4.424`$ MeV, whose strength suggested a strong fragmentation of the 2-phonon state in <sup>208</sup>Pb.
Because of the expected strong collective nature of the low-lying octupole state in <sup>208</sup>Pb, it is likely that 2-phonon excitations play some role in the fusion of <sup>16</sup>O on <sup>208</sup>Pb. The effects of the inclusion of 2-phonon excitations on the fusion barrier distribution have been investigated theoretically by Kruppa et al. as well as Hagino et al. . Recent experimental evidence has come from a measurement of the barrier distribution for the <sup>58</sup>Ni+<sup>60</sup>Ni reaction , where it was demonstrated that fusion is sensitive to such complex multi-phonon excitations.
The barrier distribution shown by the solid line in Fig. 5 is a CCFULL calculation which includes, in addition to the $`3_1^{}`$ and $`5_1^{}`$ single-phonon states in <sup>208</sup>Pb, coupling to the double-octupole phonon in the target. This calculation was performed in the harmonic limit, where the energy of the $`[3_1^{}3_1^{}]`$ state was taken to be $`5.23`$ MeV, with the strength of coupling between the single- and 2-phonon states given by $`\sqrt{2}\beta _3`$, as expected in the harmonic limit. The 2-phonon result produces a shoulder in the barrier distribution at $`E_{\text{c.m.}}\simeq 76`$ MeV whilst reducing the height of the main barrier, leading to a minor improvement over the single-phonon coupling scheme. The inclusion of multiple excitations in the target, for example the $`[5_1^{}[3_1^{}3_1^{}]]`$ state, did not result in any significant difference to the barrier distribution given by the solid line in Fig. 5, largely due to the fact that $`\beta _5`$ is very small. The additional inclusion of the $`3_1^{}`$ state in the projectile, and mutual excitations of the projectile and target, was also found to have little effect on the shape of the calculated barrier distribution.
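For orientation, the role of the harmonic $`\sqrt{2}`$ enhancement of the one- to two-phonon coupling can be illustrated in the sudden (eigenchannel) limit, in which excitation energies are neglected and the barrier shifts and their weights follow from diagonalising the coupling matrix. The sketch below is schematic only: the coupling strength $`F`$ is a nominal placeholder, not the value entering the CCFULL calculations, and the finite $`2.614`$ MeV phonon energy is ignored.

```python
import numpy as np

# Sudden-limit eigenchannel sketch for a harmonic vibrator truncated at
# two phonons.  The off-diagonal couplings carry the harmonic sqrt(n+1)
# enhancement: <0|F|1> = F and <1|F|2> = sqrt(2)*F.
F = 2.0  # nominal coupling strength in MeV (placeholder, not fitted)

M = np.array([[0.0, F,                0.0],
              [F,   0.0,              np.sqrt(2.0) * F],
              [0.0, np.sqrt(2.0) * F, 0.0]])

shifts, vecs = np.linalg.eigh(M)   # shifts of the barrier in each eigenchannel
weights = vecs[0, :] ** 2          # overlap of each eigenchannel with the ground state

for s, w in zip(shifts, weights):
    print(f"barrier shift = {s:+.3f} MeV, weight = {w:.3f}")
print("weighted mean shift:", np.dot(shifts, weights))
```

In this truncated harmonic limit the single barrier splits into three, at $`B_0\pm \sqrt{3}F`$ and $`B_0`$, with weights $`1/6`$, $`2/3`$ and $`1/6`$; the mean barrier is unchanged while the distribution broadens, which is the qualitative origin of the low-energy shoulder discussed above.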
The next obvious choice to consider is coupling to the 2-phonon states in <sup>208</sup>Pb plus the transfer channels. Such a CC calculation was performed with FRESCO, and the results are shown by the dashed line in Fig. 5. This causes a small shift in the barrier distribution to lower energies and an enhancement in the height of the shoulder at $`E_{\text{c.m.}}\simeq 76`$ MeV over the single-phonon plus transfer calculation. Although the effect of these couplings is helpful, the resultant barrier distribution still falls well short of a complete description of the data. One effect still not accounted for is multi-step transfer couplings. With the present CC codes, it was not possible to include transfer from the excited states in <sup>208</sup>Pb, and the effect of neglecting these channels on the barrier distribution is not known. However, it was possible to check whether the anharmonicity of the 2-phonon states was responsible for the remaining disagreement. Below, the size of these effects is estimated.
#### 5 The anharmonicity of the 2-phonon quadruplet in <sup>208</sup>Pb
When 2-phonon states were included in the coupling scheme for <sup>16</sup>O+<sup>144</sup>Sm, using the harmonic vibrational model, the good agreement between the measured and calculated barrier distribution was lost . At first, this result was puzzling in that there is both theoretical and experimental evidence for the presence of double-octupole phonon states in <sup>144</sup>Sm . However, deviations from the pure harmonic vibration model are expected to occur and the assumption of vibrational harmonicity for the coupling in <sup>144</sup>Sm is not correct. Subsequently it was demonstrated within the framework of the interacting boson model, that when the anharmonicities of the double-phonon states were accounted for, the theoretical barrier distribution was restored to a shape matching the experiment. In fact, anharmonic coupling to the additional 2-phonon states marginally improved the agreement relative to the single-phonon description of the data.
It has been known for a long time that the $`3_1^{}`$ state in <sup>208</sup>Pb has a large quadrupole moment, which is indicative of anharmonic effects in octupole vibrations . The anharmonic effects give rise to a splitting in energy of the $`0^+`$, $`2^+`$, $`4^+`$, and $`6^+`$ members of the 2-phonon quadruplet in <sup>208</sup>Pb. In the Coulomb excitation search for 2-phonon states in <sup>208</sup>Pb by Vetter et al. , the authors found that the lowest lying $`6^+`$ state populated had a transition strength only $`20\%`$ of the harmonic $`B(E3)`$ value, indicating a possible fragmentation of the octupole vibrational strength of the 2-phonon state. Such a result has been supported by recent theoretical work , where calculations showed a strong fragmentation of the $`6^+`$ member of the quadruplet.
The effect of the anharmonicities of the 2-phonon states in <sup>208</sup>Pb on the barrier distribution was estimated with a CCFULL calculation which included a reorientation term (see Eqs. (4) and (5) in Ref. ), with the spectroscopic quadrupole moment for the $`3_1^{}`$ state of $`Q_{3_1^{}}=-0.34`$ eb . The results are shown in Fig. 6(a) for the case where the strength for the 2-phonon transition was $`\sqrt{2}\beta _3`$ (solid line) and when this strength was reduced by a factor $`0.85`$ \[dot-dot-dashed line in Fig. 6(a)\]. The reduction factor applied to the pure harmonic octupole coupling strength was obtained from the results of Ref. . The barrier distribution from the anharmonic calculation is a slight improvement over the harmonic result \[dashed line in Fig. 6(a)\] in the region of $`76`$ MeV. Any further increase in the degree of anharmonicity of the 2-phonon states (by reducing the energy of the 2-phonon state, for example) leads to a barrier distribution closer in shape to the single-phonon result. This effect is shown in Fig. 6(b), where an anharmonic calculation (solid line), with the energy of the 2-phonon state at $`4.424`$ MeV and the corresponding reduction in the coupling strength of $`0.28`$ times that of the harmonic strength, is compared with the harmonic calculation (dashed line) and the single-phonon calculation (dot-dot-dashed line). The reduction factor of $`0.28`$ was obtained in Ref. from experimentally observed intensity limits, which were then used to set limits relative to the expected harmonic E3 strength as a function of the energy of various $`6^+`$ states in <sup>208</sup>Pb.
### C The effects of a smaller diffuseness parameter
As discussed earlier, the effects on the <sup>16</sup>O+<sup>208</sup>Pb barrier distribution of using a smaller diffuseness for the nuclear potential lead to a reduction in the height of the main barrier (and an increase in its FWHM). Such an effect can be explained with reference to Eq. (8) in Ref. , since $`d^2(E_{\text{c.m.}}\sigma )/dE_{\text{c.m.}}^2`$ is proportional to $`\pi R_B^2/\mathrm{}\omega _0`$ (the FWHM of the main barrier is proportional to $`\mathrm{}\omega _0`$). In the <sup>16</sup>O+<sup>208</sup>Pb reaction, a decrease in the diffuseness from $`a=1.005`$ fm to $`a=0.65`$ fm (resulting in an increase of $`\mathrm{}\omega _0`$ from $`3.85`$ MeV to $`4.93`$ MeV) led to a reduction in the height of the main peak in the barrier distribution, as shown in Fig. 3(b). Even with this reduction to $`a=0.65`$ fm, close to the value of $`a`$ obtained from fits to elastic scattering data , the height of the main peak in the experimental barrier distribution could not be successfully reproduced.
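The sensitivity of $`\mathrm{}\omega _0`$ to the diffuseness can be checked with a simple one-dimensional sketch. Assuming a Woods-Saxon nuclear potential plus a point-charge Coulomb term, with the radius convention $`R=r_0(A_1^{1/3}+A_2^{1/3})`$ (an assumption made here; CC codes differ in their conventions), the parabolic-barrier curvature $`\mathrm{}\omega _0=\mathrm{}c\sqrt{|V^{\prime \prime }(R_B)|/\mu c^2}`$ can be evaluated numerically for the $`a=0.40`$ fm parameter set quoted in this section.

```python
import numpy as np

# Sketch: height and curvature (hbar*omega_0) of the s-wave fusion barrier
# for 16O + 208Pb, using a Woods-Saxon nuclear potential plus a point
# Coulomb term.  The radius convention R = r0*(A1^(1/3)+A2^(1/3)) is an
# assumption; coupled-channels codes may differ.
Z1, A1, Z2, A2 = 8, 16.0, 82, 208.0
V0, r0, a = 283.6, 1.172, 0.40            # MeV, fm, fm (values from the text)
e2, hbarc, amu = 1.44, 197.327, 931.494   # MeV fm, MeV fm, MeV

R = r0 * (A1 ** (1 / 3) + A2 ** (1 / 3))
mu = amu * A1 * A2 / (A1 + A2)            # reduced mass, MeV/c^2

def V(r):
    return Z1 * Z2 * e2 / r - V0 / (1.0 + np.exp((r - R) / a))

r = np.linspace(9.0, 15.0, 60001)
i = int(np.argmax(V(r)))
RB, B = float(r[i]), float(V(r)[i])

h = 1e-3                                   # finite-difference step (fm)
Vpp = (V(RB + h) - 2.0 * B + V(RB - h)) / h ** 2
hw = hbarc * np.sqrt(-Vpp / mu)            # parabolic-barrier hbar*omega_0

print(f"R_B = {RB:.2f} fm, B = {B:.2f} MeV, hbar*omega_0 = {hw:.2f} MeV")
```

With these parameters the barrier height comes out close to the quoted $`B_0=77.6`$ MeV, while $`\mathrm{}\omega _0`$ rises to roughly $`6.5`$ MeV, continuing the trend from $`3.85`$ MeV at $`a=1.005`$ fm to $`4.93`$ MeV at $`a=0.65`$ fm, and hence producing the additional smoothing of the calculated barrier distributions.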
To obtain a reasonable reproduction of the measured barrier distribution, the diffuseness parameter had to be reduced to a value of $`a0.40`$ fm. However, this was done at the expense of the fit to the high-energy fusion cross-sections (see the discussion below). A CCFULL calculation with the potential parameters $`V_0=283.6`$ MeV, $`r_0=1.172`$ fm and $`a=0.40`$ fm, chosen to give an average barrier of $`B_0=77.6`$ MeV, is shown in Fig. 7(b) by the dotted line. Here coupling to the 2-phonon states was included with the anharmonic values of $`4.424`$ MeV for the energy of the 2-phonon states, and a reduction factor of $`0.28`$ for the 2-phonon coupling strength, as discussed earlier. No transfer couplings were included in these calculations. After inclusion of the adiabatic $`3_1^{}`$ state in <sup>16</sup>O, the barrier distribution shown by the solid line in Fig. 7(b) was obtained. The inclusion of the $`3_1^{}`$ state in <sup>16</sup>O shifts the barrier distribution down in energy to provide a reasonable representation of the data. The third barrier distribution shown in Fig. 7(b) (dashed line) is a CC calculation with the $`a=0.40`$ fm potential parameters, which give an average barrier of $`B_0=77.6`$ MeV, but without coupling to the 2-phonon excitations in <sup>208</sup>Pb. The difference between the 2-phonon (solid line) and single-phonon (dashed line) calculations for $`a=0.40`$ fm is not as significant as the difference between the equivalent calculations with $`a=1.005`$ fm, due to the additional smoothing of the barrier distributions that results from the smaller diffuseness (larger $`\mathrm{}\omega _0`$).
Such a small value for the nuclear diffuseness is problematic in that the experimental fusion cross-sections could not be reproduced either at energies above or below the average barrier. A diffuseness of $`a=0.40`$ fm causes $`\sigma `$ to fall less rapidly than the data in the low-energy region, as shown by the solid line in Fig. 7(a), while in the high-energy region the calculation with $`a=0.40`$ fm significantly overestimates the data, as shown in the inset of Fig. 7(a). With any of the above coupling schemes, no single set of potential parameters was found that could simultaneously reproduce the shape of the experimental barrier distribution and the fusion cross-sections in the low and high energy regions.
The results from the detailed CC analysis presented in this work are puzzling in view of the success obtained from other recent analyses of fusion barrier distributions . In these results, the shapes of the theoretical barrier distributions matched well with the experimental ones after including the significant couplings expected to affect fusion. In contrast to this success, even after consideration of transfer and 2-phonon couplings in the <sup>16</sup>O + <sup>208</sup>Pb reaction, the theory was unable to reproduce the shape of the measured barrier distribution.
## IV Summary and Conclusion
In this work, fission cross-sections for the <sup>16</sup>O + <sup>208</sup>Pb reaction were re-measured with improved accuracy. The new data were found to be generally in good agreement with the earlier data, although some erroneous points in the original fission excitation function were identified. The barrier distribution resulting from the new data was found to be a smoothly falling function for energies above the average barrier.
In order to describe the shape of the measured barrier distribution, detailed CC calculations were performed, avoiding where possible less accurate approximations often used in simplified CC analyses, and exploiting existing knowledge of the particle transfers in the <sup>16</sup>O + <sup>208</sup>Pb system. It was found that coupling to the single-neutron pickup, single-proton and $`\alpha `$-stripping transfers had a significant effect on the barrier distribution, although coupling to these transfers, in addition to the $`3_1^{}`$ and $`5_1^{}`$ single-phonon states in <sup>208</sup>Pb, was not sufficient to explain the data. Transfers from excited states in <sup>208</sup>Pb were not included in the present calculations, and their effect on the shape of the barrier distribution is not known.
The effects of additional coupling to 2-phonon states in <sup>208</sup>Pb were explored, both in the harmonic limit and in cases that considered the anharmonicity of the 2-phonon states. Inclusion of the 2-phonon states in <sup>208</sup>Pb resulted in some improvement, but still fell short of a complete description of the experimental barrier distribution.
A better reproduction of the experimental barrier distribution was obtained with a very large reduction in the nuclear diffuseness parameter, from a value of $`a=1.005`$ to $`a=0.40`$ fm. This approach to fitting the data was found to be unsatisfactory, since it destroyed the fits to the fusion cross-sections in the high and low energy regions. Also, a value of $`0.40`$ fm for the nuclear diffuseness is significantly smaller than results obtained from analyses of elastic scattering data for the <sup>16</sup>O + <sup>208</sup>Pb system .
The results from fits to the high-energy fusion cross-sections for the <sup>16</sup>O + <sup>208</sup>Pb reaction, and other systems recently measured , also required a nuclear diffuseness larger than the value obtained from elastic scattering analyses. This result indicates that the procedure for determining the potential parameters used in this and the work of Ref. may not be appropriate in the analysis of fusion. In elastic scattering, the more peripheral nature of the interaction means the system probes mainly the exponential tail of the nuclear potential. In contrast, fusion probes the potential at distances much closer to the fusion barrier radius. In this region, the Woods-Saxon parameterisation may not be an adequate representation of the true nuclear potential. Further work is required to determine the diffuseness of the nuclear potential appropriate to the analysis of precise fusion data.
Using the best available model for the description of heavy-ion fusion, it has been shown that the measured barrier distribution for <sup>16</sup>O + <sup>208</sup>Pb could not be reproduced with couplings to the lowest lying single- and 2-phonon states in <sup>208</sup>Pb and the major particle transfers. In view of the precision of the data, and the quality of the coupled-channels model used in its description, the disagreement between experiment and theory is very significant. Further work on the appropriate choice of the nuclear diffuseness, and a global analysis of all available reaction data, are required in order to improve the coupled-channels description of fusion for the <sup>16</sup>O + <sup>208</sup>Pb system.
## Acknowledgements
K.H. and I.J.T. would like to thank the Australian National University for their warm hospitality and partial support where this work was carried out. M.D. acknowledges the support of a QEII Fellowship. K.H. acknowledges the support from the Japan Society for the Promotion of Science for Young Scientists.
# Can a changing 𝛼 explain the Supernovae results?
## 1 Introduction
Two puzzling observations are challenging cosmologists. The Supernovae Cosmology Project and the High-z Supernova Search (Perlmutter et al (97); Garnavich et al (98); Schmidt (98); Riess et al (98)) have extended the reach of the Hubble diagram to high redshift and provided new evidence that the expansion of the universe is accelerating. This may imply that there exists a significant positive cosmological constant, $`\mathrm{\Lambda }`$. In separate work, the spacings between quasar (QSO) absorption lines were examined in Keck I data at medium redshifts, $`z\sim 1`$, (Webb et al (99)) and compared with those in the laboratory (see also Drinkwater et al (98); Damour and Dyson (96); Shylakhter (76); Barrow (87)). These observations are sensitive to time variations in the value of the fine structure constant $`\alpha =e^2/(\mathrm{}c)`$, (where $`e`$ is the electron charge, $`\mathrm{}`$ Planck’s constant, and $`c`$ the speed of light), at a rate one million times slower than the expansion rate of the universe. Evidence was found for a small variation in the value of $`\alpha `$ at redshifts $`z\sim 1`$. This could be produced by intrinsic time variation or by some unidentified line-blending effect. In this Letter we assume that the variation is intrinsic and show that there may be a link between the observations of cosmological acceleration and varying $`\alpha `$.
If $`\mathrm{\Lambda }>0`$, then cosmology faces a very serious fine tuning problem, and this has motivated extensive theoretical work. There is no theoretical motivation for a value of $`\mathrm{\Lambda }`$ of currently observable magnitude; a value $`10^{120}`$ times smaller than the ’natural’ Planck scale of density is needed if $`\mathrm{\Lambda }`$ becomes important near the present time. Such a small non-zero value of $`\mathrm{\Lambda }`$ is ’unnatural’ in the sense that making it zero reduces the symmetry of spacetime. A tentative solution is quintessence (Zlatev et al (99)): the idea that $`\mathrm{\Lambda }`$ might be a rolling scalar field exhibiting very long transients. Here we introduce another explanation.
There are a variety of possible physical expressions of a changing $`\alpha `$. Bekenstein proposed a varying $`e`$ theory (Bekenstein (82)). An alternative is the varying speed of light (VSL) theory (Moffat (93); Albrecht & Magueijo (99); Barrow (99)) in which varying $`\alpha `$ is expressed as a variation of the speed of light. The choice between these two types of theory transcends experiment, and merely reflects theoretical convenience in the choice of units (Barrow & Magueijo (98)). The simplest cosmology following from VSL is known to contain an attractor in which $`\mathrm{\Lambda }`$ and matter remain at fixed density ratios throughout the life of the universe (Barrow & Magueijo (99)). Such attractor solves the fine tuning problem forced upon us by the supernovae results. Hence there is scope for the observed changing $`\alpha `$ to be related to the observed acceleration of the universe. In this Letter we propose a model which leads to good quantitative agreement, given experimental errors, between the observations of acceleration and varying $`\alpha `$. In Section 2 we examine the construction of the Hubble diagram in VSL theories, and the interpretation of varying-$`\alpha `$ experiments. Then in Section 3 we present an example of a VSL model which can jointly explain the supernovae results and the Webb et al varying-$`\alpha `$ results. We conclude with a discussion of some further aspects of the model proposed, to be investigated elsewhere.
## 2 The VSL Hubble diagram
The Hubble diagram is a plot of luminosity distance against redshift. The purpose is to map the expansion factor $`a(t)`$, where $`t`$ is the comoving proper time. Redshifts provide a measurement of $`a`$ at the time of emission. If the objects under observation are “standard candles” (as Type Ia supernovae are assumed to be), their apparent brightness gives their (luminosity) distance, which, if we know $`c`$, tells us their age. By looking at progressively more distant objects we can therefore map the curve $`a(t)`$.
We now examine how this construction is affected by a changing $`c`$. In Albrecht & Magueijo (99) we showed that $`E\propto c^2`$ for photons in free flight. We also showed that quantum mechanics remains unaffected by a changing $`c`$ if $`\mathrm{}\propto c`$ (in the sense that quantum numbers are adiabatic invariants). Then all relativistic energies scale like $`c^2`$. If for non-relativistic systems $`\mathrm{}\propto 1/c`$, the Rydberg energy $`E_R=m_ee^4/(2\mathrm{}^2)`$ also scales like $`c^2`$. Hence all absorption lines, ignoring the fine structure, scale like $`c^2`$. When we compare lines from near and far systems we should therefore see no effects due to a varying $`c`$; the redshift $`z`$ is still given by $`1+z_e=a_o/a_e`$, where $`o`$ and $`e`$ label epochs of observation and emission.
In order to examine luminosity distances, we need to reassess the concept of standard candles. For simplicity let us first treat them as black bodies. Then their temperature scales as $`T\propto c^2`$ (Albrecht & Magueijo (99)), their energy density scales as $`\rho \propto T^4/(\mathrm{}c)^3\propto c^2`$, and their emission power as $`P=\rho /c\propto c`$, implying that standard candles are brighter in the early universe if $`\dot{c}<0`$. However, the power emitted by these candles, in free flight, scales like $`c`$; each photon’s energy scales like $`c^2`$, its speed like $`c`$, and therefore its energy flux like $`c`$. The received flux, as a function of $`c`$, therefore scales like:
$$P_r=\frac{P_ec^2}{4\pi r^2c}\propto c$$
(1)
where $`r`$ is the conformal distance to the emitting object, and the subscripts $`r`$ and $`e`$ label received and emitted. In an expanding universe we therefore still have
$$P_r=\frac{P_{e0}}{4\pi r^2a_0^2}\left(\frac{a}{a_o}\right)^2,$$
(2)
where $`P_{e0}`$ is the emitting power of standard candles today. Notice that the above argument is still valid if the candles are not black bodies; it depends only on the scaling properties of emitted and received power.
We can now set up the Hubble diagram. Consider the Taylor expansion
$$a(t)=a_0[1+H_0(t-t_0)-\frac{1}{2}q_0H_0^2(t-t_0)^2+\mathrm{}]$$
(3)
where $`H_0=\dot{a}_0/a_0`$ is the Hubble constant, and $`q_0=-\ddot{a}_0a_0/\dot{a}_0^2`$ is the deceleration parameter. Hence up to second order $`z=H_0(t_0-t)+(1+q_0/2)H_0^2(t-t_0)^2`$, or
$$t_0-t=\frac{1}{H_0}[z-(1+q_0/2)z^2+\mathrm{}].$$
(4)
From (2) we find that the luminosity distance $`d_L`$ is
$$d_L=\left(\frac{P_{e0}}{4\pi P_r}\right)^{1/2}=a_0^2\frac{r}{a}=a_0r(1+z_e).$$
(5)
The conformal distance to the emitting object is given by $`r=\int _t^{t_0}c(t^{})𝑑t^{}/a(t^{})`$. From (3) we have that
$$r=c_0[(t_0-t)+\frac{1-n}{2}H_0(t_0-t)^2+\mathrm{}]$$
(6)
where we have assumed that locally $`c=c_0a^n`$ (that is, $`c=c_0[1+nH_0(t-t_0)+\mathrm{}]`$). Substituting (4) we finally have <sup>1</sup><sup>1</sup>1Had we assumed that $`\mathrm{}\propto c`$ for all systems we would have got instead $`d_L=(c_0/\stackrel{~}{H}_0)[z+\frac{1}{2}(1-(q_0(1+4n)+n))z^2]`$, with $`\stackrel{~}{H}_0=(1-4n)H_0`$. This does not affect any of the conclusions. :
$$d_L=\frac{c_0}{H_0}[z+\frac{1}{2}(1-(q_0+n))z^2+\mathrm{}]$$
(7)
We see that besides the direct effects of VSL upon the expansion rate of the universe, it also induces an effective acceleration in the Hubble diagram as an “optical illusion” (we are assuming that $`c`$ decreases in time: $`n<0`$). This is easy to understand. We have seen that VSL introduces no intrinsic effects in the redshifting spectral line or in the dimming of standard candles with distance and expansion. The only effect VSL induces on the construction of the Hubble diagram is that for the same redshift (that is, the same distance into the past) objects are farther away from us because light travelled faster in the past. But an excess luminosity distance, for the same redshift, is precisely the hallmark of cosmological acceleration. However, we need to consider the other experimental input to our work: the Webb et al (99) results. By measuring the fine structure in absorption systems at redshifts $`z\sim O(1)`$ we can also map the curve $`c(t)`$. Since $`c=c_0[1+nH_0(t-t_0)+\mathrm{}]`$ we have $`c=c_0[1-nz+\mathrm{}]`$, and so to first order $`\alpha =\alpha _0[1+2nz+\mathrm{}]`$. However, the results presented in Webb et al (99) show that $`n`$ is at most of order $`10^{-5}`$. This means that the direct effects of varying $`c`$ permitted by the QSO absorption system observations are far too small to explain the observed acceleration. We need to look at a fully self-consistent generalisation of general relativity containing the scope for varying $`c`$.
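The second-order bookkeeping behind Eq. (7) can be verified symbolically. The sketch below (sympy; $`a_0=1`$, $`T=t-t_0`$ negative in the past, and $`c=c_0a^n`$ locally) expands the conformal distance and the redshift and checks that $`d_L=a_0r(1+z)`$ reproduces $`z+\frac{1}{2}(1-(q_0+n))z^2`$ through second order.

```python
import sympy as sp

# Symbolic check of Eq. (7):  d_L = (c0/H0)[z + (1/2)(1 - (q0 + n)) z^2 + ...]
# Conventions: a0 = 1, T = t - t0 (negative in the past), c = c0 * a^n locally.
T, Tp, H, q, n, c0 = sp.symbols('T Tp H q n c0')

def a(t):
    return 1 + H * t - sp.Rational(1, 2) * q * H ** 2 * t ** 2

# Conformal distance r = int_t^{t0} c/a dt', with the integrand expanded
# to the order needed for a second-order result in z.
integrand = sp.series(c0 * a(Tp) ** (n - 1), Tp, 0, 2).removeO()
r = sp.integrate(integrand, (Tp, T, 0))

z = sp.series(1 / a(T) - 1, T, 0, 3).removeO()
dL = sp.expand(r * (1 + z))                      # d_L = a0 * r * (1 + z)

formula = (c0 / H) * (z + sp.Rational(1, 2) * (1 - (q + n)) * z ** 2)
residual = sp.series(sp.expand(dL - formula), T, 0, 3).removeO()
print("residual through O(T^2):", sp.simplify(residual))
```

The residual vanishes through second order, confirming that the varying-$`c`$ correction enters Eq. (7) only through the combination $`q_0+n`$.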
## 3 The model
We start with some general properties of the dynamics of $`c`$. Drawing inspiration from dilaton theories (like Brans-Dicke gravity) we take $`\psi =\mathrm{log}(c/c_0)`$ as the dynamical field associated with $`c`$. Indeed, powers of $`c`$ appear in all coupling constants, which in turn can be written as $`e^\varphi `$, where $`\varphi `$ is the dilaton. Another theory using a similar dynamical variable is the changing $`\alpha `$ theory of Bekenstein (82) (which uses $`\mathrm{log}(\alpha )`$).
We then endow $`\psi `$ with a dynamics similar to the dilaton. The left-hand side for the $`\psi `$ equation should therefore be $`\mathrm{}\psi `$ (in the preferred Lorentz frame - to be identified with the cosmological frame). This structure ensures that the propagation equation for $`\psi `$ is second-order and hyperbolic, i.e. propagation is causal. Since VSL breaks Lorentz invariance other expressions would be possible - but then the field $`\psi `$ would propagate non-causally. An example is $`(g_{\mu \nu }+u_\mu u_\nu )(\partial ^\mu \psi )(\partial ^\nu \psi )`$, where $`u^\mu `$ is the tangent vector of the local preferred frame.
On the other hand one need not choose (as in Brans-Dicke theories) the source term to be $`\rho -3p`$, where $`\rho `$ and $`p`$ are the energy density and pressure of matter respectively. Without the requirement of Lorentz invariance other expressions are possible, and using them does not conflict with local causality. If $`T^{\mu \nu }`$ is the stress-energy tensor we can choose as a source term $`T^{\mu \nu }(g_{\mu \nu }+u_\mu u_\nu )`$; that is, changes in $`c`$ are driven by the matter pressure. We find that this choice is the one that gives interesting effects.
For a homogeneous field in an expanding universe we therefore have $`\ddot{\psi }+3(\dot{a}/a)\dot{\psi }=4\pi G\omega p/c^2`$, where $`p`$ is the total pressure of the matter fields and $`\omega `$ is a coupling constant (distinct from the Brans Dicke coupling constant). The full self-consistent system of equations in a matter-plus-radiation universe containing a cosmological constant stress $`\rho _\mathrm{\Lambda }=(\mathrm{\Lambda }c^2)/(8\pi G)`$ is therefore
$`\ddot{\psi }+3{\displaystyle \frac{\dot{a}}{a}}\dot{\psi }`$ $`=`$ $`4\pi G\omega {\displaystyle \frac{\rho _\gamma }{3}},`$ (8)
$`\dot{\rho }_\gamma +4{\displaystyle \frac{\dot{a}}{a}}\rho _\gamma `$ $`=`$ $`-2\rho _\mathrm{\Lambda }\dot{\psi },`$ (9)
$`\dot{\rho }_\mathrm{\Lambda }`$ $`=`$ $`2\rho _\mathrm{\Lambda }\dot{\psi },`$ (10)
$`\dot{\rho }_m+3{\displaystyle \frac{\dot{a}}{a}}\rho _m`$ $`=`$ $`0,`$ (11)
$`\left({\displaystyle \frac{\dot{a}}{a}}\right)^2`$ $`=`$ $`{\displaystyle \frac{8\pi G}{3}}(\rho _m+\rho _\gamma +\rho _\mathrm{\Lambda }),`$ (12)
where subscripts $`\gamma `$ and $`m`$ denote radiation and matter respectively. We have assumed that the sink term (10) is reflected in a source term in (9) (and not in (11)). This is due to the fact that this term is only significant very early on, when even massive particles behave like radiation. We have ignored curvature terms because in the quasi-lambda dominated solutions we are about to explore we know that these are smaller than $`\rho _\mathrm{\Lambda }`$ (Barrow & Magueijo (99)). Here, in complete contrast to Brans-Dicke theory, the field $`\psi `$ is driven only by radiation pressure, even in the dust-dominated era. In other words, only conformally invariant forms of matter couple to the field $`\psi `$.
In a radiation-dominated universe the behaviour of this system changes at the critical value $`\omega =-4`$. For $`\omega <-4`$ we reach a flat $`\rho _\mathrm{\Lambda }=0`$ attractor as $`t\rightarrow \mathrm{\infty }`$. For $`-4<\omega <0`$ we have attractors for which $`\rho _\mathrm{\Lambda }`$ and $`\rho _\gamma `$ maintain a constant ratio (see Barrow & Magueijo (99)). In Fig. 1 we plot a numerical solution to this system, with $`\omega =-4.4`$ (a $`10\%`$ tuning below the critical value $`\omega =-4`$) and $`n=-2.2`$ during the radiation epoch. As expected from Barrow & Magueijo (99), this forces $`\mathrm{\Omega }_\mathrm{\Lambda }`$ to drop to zero, while the expansion factor acquires a radiation-dominated form, with $`a\propto t^{1/2}`$. By the time the matter-dominated epoch is reached, $`\mathrm{\Omega }_\mathrm{\Lambda }`$ is of order $`10^{-12}`$. During the matter epoch, the source term for $`\psi `$ disappears in eq. (8), $`n`$ starts to approach zero, $`\mathrm{\Omega }_\mathrm{\Lambda }`$ starts to increase, and the expansion factor takes on the $`a\propto t^{2/3}`$ dependence of a matter-dominated universe. A few expansion times into the matter epoch, $`\mathrm{\Omega }_\mathrm{\Lambda }`$ becomes of order 1 and the universe begins accelerating. By the time this happens $`n`$ is of order $`10^{-5}`$, in agreement with the expectations of Webb et al (99). This type of behaviour can be achieved generically, for different initial conditions, with a tuning of $`\omega `$ that never needs to be finer than a few percent.
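The qualitative behaviour described here can be reproduced with a minimal integration of Eqs. (8)-(12). The sketch below works in units $`8\pi G/3=1`$, uses $`N=\mathrm{ln}a`$ as the time variable, and adopts the convention that $`\omega `$ is negative ($`\omega =-4.4`$, so that $`\dot{\psi }<0`$ and the $`\mathrm{\Lambda }`$ stress decays into radiation). The initial conditions are purely illustrative, chosen near the radiation-era attractor $`d\psi /dN=\omega /2`$ with a small initial $`\mathrm{\Omega }_\mathrm{\Lambda }`$, and the run covers far fewer e-folds than Fig. 1, so the suppression of $`\mathrm{\Omega }_\mathrm{\Lambda }`$ is correspondingly milder.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of Eqs. (8)-(12) in units 8*pi*G/3 = 1, with N = ln(a)
# and phi = d(psi)/dN.  omega < 0, so psi falls while Lambda decays into
# radiation.  Initial conditions are illustrative, not those of Fig. 1.
omega = -4.4

def rhs(N, y):
    phi, ug, ul, um = y                 # phi and log densities (gamma, Lambda, m)
    rg, rl, rm = np.exp(ug), np.exp(ul), np.exp(um)
    H2 = rg + rl + rm                   # Friedmann equation (12)
    dlnH = -(3.0 * rm + 4.0 * rg) / (2.0 * H2)   # H'/H; the Lambda-radiation
                                                 # exchange cancels in the total
    dphi = omega * rg / (2.0 * H2) - (3.0 + dlnH) * phi   # Eq. (8)
    dug = -4.0 - 2.0 * phi * np.exp(ul - ug)     # Eq. (9): fed by Lambda decay
    dul = 2.0 * phi                              # Eq. (10): rho_Lambda ~ c^2
    dum = -3.0                                   # Eq. (11)
    return [dphi, dug, dul, dum]

# Radiation-dominated start, near the attractor phi = omega/2, small Lambda.
y0 = [-2.0, np.log(1.0), np.log(0.02), np.log(1e-8)]
N = np.linspace(0.0, 24.0, 2001)
sol = solve_ivp(rhs, (N[0], N[-1]), y0, t_eval=N, rtol=1e-8, atol=1e-12)

phi = sol.y[0]
rg, rl, rm = np.exp(sol.y[1]), np.exp(sol.y[2]), np.exp(sol.y[3])
OmegaL = rl / (rg + rl + rm)

i12, i16 = np.searchsorted(N, 12.0), np.searchsorted(N, 16.0)
print("phi(N~12)    =", phi[i12])       # -> omega/2 in the radiation era
print("OmegaL(N~16) =", OmegaL[i16])    # driven down during radiation domination
print("OmegaL(end)  =", OmegaL[-1])     # resurfaces after matter-radiation equality
```

In the radiation era $`d\psi /dN`$ locks onto $`\omega /2`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ decays slowly (as $`a^{\omega +4}`$); once matter dominates, the source in Eq. (8) switches off, $`\rho _\mathrm{\Lambda }`$ freezes, and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ climbs back towards unity.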
We can provide an approximate argument explaining why this theory should display this type of behaviour and why we need so little fine tuning of $`\omega `$ to explain the supernovae experiments. If we neglect changes in $`c`$ after matter-radiation equality, $`t_{eq}`$, we are going to require $`\rho _\mathrm{\Lambda }(t_{eq})/\rho (t_{eq})\sim z_{eq}^{-3}\sim 10^{-12}`$. Let $`c=c_0a^{n(t)}`$, with $`n=-(2+\delta )`$, and $`n=\omega /2`$ during the radiation epoch. We can integrate the conservation equations to give
$$\frac{\rho }{\rho _\mathrm{\Lambda }}=\frac{A}{a^4\rho _\mathrm{\Lambda }}-\frac{n}{n+2},$$
(13)
with $`A`$ constant, from which it follows that
$$\frac{\rho }{\rho _\mathrm{\Lambda }}=\frac{2}{\delta }\left[\left(1+\frac{\delta }{2}\frac{\rho _i}{\rho _{\mathrm{\Lambda }i}}\right)\left(\frac{a}{a_i}\right)^{2\delta }-1\right].$$
(14)
We see that asymptotically $`\rho /\rho _\mathrm{\Lambda }`$ grows to infinity if $`\delta >0`$ (the flat $`\rho _\mathrm{\Lambda }=0`$ attractor of Barrow & Magueijo (99)). However the growth is very slow even if $`\delta `$ is not very small. Our theory displays very long transients, and a very slow convergence to its attractor, a property similar to quintessence models (Zlatev et al (99)). It is therefore possible to achieve $`\rho _\mathrm{\Lambda }/\rho \sim 10^{-12}`$ at the end of the radiation epoch, with $`\delta `$ chosen to be of order $`0.1`$.
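This first integral can be checked directly. Assuming $`n=-(2+\delta )`$ with $`\delta >0`$ (i.e. $`\omega `$ just below the critical value $`-4`$), the conservation equations give exactly $`\rho /\rho _\mathrm{\Lambda }=(\rho _i/\rho _{\mathrm{\Lambda }i}+k)(a/a_i)^{2\delta }-k`$ with $`k=(2+\delta )/\delta `$, which reduces to Eq. (14) for small $`\delta `$. The sketch below verifies this against a direct numerical integration and illustrates how slow the growth is.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Radiation-era check of Eq. (14), assuming n = -(2 + delta), delta > 0.
# Exact first integral of the conservation equations:
#   rho/rho_L = (rho_i/rho_Li + k) (a/a_i)^(2*delta) - k,  k = (2+delta)/delta.
delta = 0.1
n = -(2.0 + delta)

def rhs(N, y):                      # N = ln(a/a_i), log densities
    ug, ul = y                      # ln(rho_gamma), ln(rho_Lambda)
    return [-4.0 - 2.0 * n * np.exp(ul - ug),   # radiation fed by Lambda decay
            2.0 * n]                            # rho_Lambda ~ c^2 ~ a^(2n)

rg0, rl0 = 1.0, 1e-3
Nend = 20.0                         # expansion by a factor e^20 ~ 5e8
sol = solve_ivp(rhs, (0.0, Nend), [np.log(rg0), np.log(rl0)],
                rtol=1e-10, atol=1e-12)
ratio = np.exp(sol.y[0, -1] - sol.y[1, -1])     # rho/rho_Lambda from the ODE

k = (2.0 + delta) / delta
exact = (rg0 / rl0 + k) * np.exp(2.0 * delta * Nend) - k
approx = (2.0 / delta) * ((1.0 + 0.5 * delta * rg0 / rl0)
                          * np.exp(2.0 * delta * Nend) - 1.0)   # Eq. (14)

print("ODE:", ratio, " exact:", exact, " Eq.(14):", approx)
print("rho/rho_Lambda grew by", ratio / (rg0 / rl0),
      "while a grew by", np.exp(Nend))
```

For $`\delta =0.1`$ the expansion factor grows by nearly nine orders of magnitude while $`\rho /\rho _\mathrm{\Lambda }`$ increases by less than two; this is the long transient invoked above.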
Now, why is the change in $`c`$ of the right order of magnitude to explain the results of Webb et al (99)? With a solution of the form $`c=c_0a^{n(t)}`$ we find that
$$n(t)\simeq \frac{\omega \rho _\gamma }{3(\rho _m+2\rho _\mathrm{\Lambda })}$$
(15)
valid in the matter-dominated era, regardless of the details of the radiation-to-matter transition. With $`\omega \simeq -4`$ we therefore have
$$n(t_0)\approx -\frac{4}{3}\frac{2.3\times 10^{-5}}{h^2(1+\mathrm{\Omega }_\mathrm{\Lambda })}$$
(16)
of the right order of magnitude. The order of magnitude of the index, $`|n|\approx 10^{-5}`$, observed by Webb et al (99), is therefore fixed by the ratio of the radiation and the matter energy densities today.
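As a quick sanity check of eq. (16), the sketch below evaluates the quoted expression numerically; the values $`h=0.65`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ are assumed fiducial choices, not numbers from this Letter, and only the magnitude of $`n(t_0)`$ is computed:

```python
# Order-of-magnitude check of eq. (16):
#   |n(t_0)| ~ (4/3) * 2.3e-5 / (h**2 * (1 + Omega_Lambda)).
# h = 0.65 and Omega_Lambda = 0.7 are assumed fiducial values.

def n_today_magnitude(h, omega_lambda):
    return (4.0 / 3.0) * 2.3e-5 / (h ** 2 * (1.0 + omega_lambda))

print(n_today_magnitude(0.65, 0.7))  # of order 1e-5, as observed by Webb et al (99)
```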
## 4 Discussion
In this Letter we proposed a theory relating the supernovae results and the observations by Webb et al (99). The theory we have proposed is one example within a class whose members exhibit similar behaviour. In these theories the gravitational effect of the pressure drives changes in $`c`$, and these convert the energy density in $`\mathrm{\Lambda }`$ into radiation. Thus $`\rho _\mathrm{\Lambda }`$ is prevented from dominating the universe during the radiation epoch. As the universe cools down, massive particles eventually become the source of pressureless matter and create a matter-dominated epoch. In the matter epoch the variation in $`c`$ comes to a halt, with residual effects at $`z\sim 1`$–5 at the level observed by Webb et al. As the $`c`$ variation is switched off, the $`\mathrm{\Lambda }`$ stress resurfaces, and dominates the universe for a few expansion times in the matter-dominated era, in agreement with the supernovae results.
In a forthcoming publication we shall address other aspects of this theory, beyond the scope of this Letter. We mention nucleosynthesis, the location in time of a quantum epoch, and perturbations around the homogeneous solution discussed here (see Barrow & O’Toole (99)). Nucleosynthesis in particular may provide significant constraints on this class of models. However we expect a variation in $`\alpha `$ to require variations in other couplings if some unification exists. Nucleosynthesis involves many competing effects contributed by the weak, strong, electromagnetic, and gravitational interactions, and we do not know how to incorporate all the effects self-consistently. Studies of the effects of varying constants coupled by Kaluza-Klein extra dimensions have been made by Kolb et al (86) and Barrow (87). The most detailed study to date was conducted by Campbell and Olive (95).
## Acknowledgements
JDB is partially supported by a PPARC Senior Fellowship. JM would like to thank K. Baskerville and D. Sington for help with this project.
# New effects observed in central production by experiment WA102 at the CERN Omega Spectrometer
## 1 INTRODUCTION
There is considerable current interest in trying to isolate the lightest glueball. Several experiments have been performed using glue-rich production mechanisms. One such mechanism is Double Pomeron Exchange (DPE) where the Pomeron is thought to be a multi-gluonic object. Consequently it has been anticipated that production of glueballs may be especially favoured in this process .
The WA102 experiment at the CERN Omega Spectrometer studies centrally produced exclusive final states formed in the reaction
$$pp\to p_fX^0p_s,$$
(1)
where the subscripts $`f`$ and $`s`$ refer to the fastest and slowest particles in the laboratory frame respectively and $`X^0`$ represents the central system.
## 2 A COUPLED CHANNEL ANALYSIS OF THE $`K\overline{K}`$ AND $`\pi \pi `$ SYSTEMS
Recently the WA102 collaboration has published the results of a partial wave analysis of the centrally produced $`K^+K^{}`$ and $`\pi ^+\pi ^{}`$ channels. Fig. 1 shows the $`S_0^{}`$-wave from the $`K^+K^{}`$ and $`\pi ^+\pi ^{}`$ channels. The $`S_0^{}`$-wave from the $`K^+K^{}`$ channel shows a threshold enhancement; the peaks at 1.5 GeV and 1.7 GeV are due to the $`f_0(1500)`$ and $`f_J(1710)`$ with J = 0 . In order to obtain a satisfactory fit to the $`S_0^{}`$ wave from the $`\pi ^+\pi ^{}`$ channel from threshold to 2 GeV it has been found necessary to use Breit-Wigners to describe the $`f_0(980)`$, $`f_0(1370)`$, $`f_0(1500)`$ and $`f_J(1710)`$ .
A coupled channel fit has been performed to the $`K^+K^{}`$ and $`\pi ^+\pi ^{}`$ S-wave distributions and the results are shown in fig. 1. The sheet II pole positions for the resonances observed are
| Resonance | Sheet II pole position |
| --- | --- |
| $`f_0(980)`$ | M = (987 $`\pm `$ 6 $`\pm `$ 6) $`-`$ $`i`$ (48 $`\pm `$ 12 $`\pm `$ 8) MeV |
| $`f_0(1370)`$ | M = (1312 $`\pm `$ 25 $`\pm `$ 10) $`-`$ $`i`$ (109 $`\pm `$ 22 $`\pm `$ 15) MeV |
| $`f_0(1500)`$ | M = (1502 $`\pm `$ 12 $`\pm `$ 10) $`-`$ $`i`$ (49 $`\pm `$ 9 $`\pm `$ 8) MeV |
| $`f_J(1710)`$ | M = (1727 $`\pm `$ 12 $`\pm `$ 11) $`-`$ $`i`$ (63 $`\pm `$ 8 $`\pm `$ 9) MeV |
These parameters are consistent with the PDG values for these resonances. For the $`f_0(980)`$ the couplings were determined to be $`g_\pi `$ = 0.19 $`\pm `$ 0.03 $`\pm `$ 0.04 and $`g_K`$ = 0.40 $`\pm `$ 0.04 $`\pm `$ 0.04.
The branching ratios for the $`f_0(1370)`$, $`f_0(1500)`$ and $`f_J(1710)`$ have been calculated to be:
$$\frac{f_0(1370)\to K\overline{K}}{f_0(1370)\to \pi \pi }=0.46\pm 0.15\pm 0.11$$
$$\frac{f_0(1500)\to K\overline{K}}{f_0(1500)\to \pi \pi }=0.33\pm 0.03\pm 0.07$$
$$\frac{f_J(1710)\to K\overline{K}}{f_J(1710)\to \pi \pi }=5.0\pm 0.6\pm 0.9$$
These values are to be compared with the PDG values of 1.35 $`\pm `$ 0.68 for the $`f_0(1370)`$, 0.19 $`\pm `$ 0.07 for the $`f_0(1500)`$, which comes from the Crystal Barrel experiment , and 2.56 $`\pm `$ 0.9 for the $`f_J(1710)`$, which comes from the WA76 experiment .
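For a rough quantitative comparison (not performed in the original), one can compute the deviation of each WA102 ratio from the quoted PDG value in units of the combined error, adding statistical and systematic errors in quadrature (an assumption about the error treatment):

```python
# Crude comparison of the WA102 branching ratios with the PDG values quoted
# in the text. Statistical and systematic errors are combined in quadrature,
# which is an assumption about the error treatment, not the paper's procedure.
import math

def pull(meas, stat, syst, ref, ref_err):
    """Deviation of a measurement from a reference value, in combined sigma."""
    sigma = math.sqrt(stat ** 2 + syst ** 2 + ref_err ** 2)
    return (meas - ref) / sigma

print("f_0(1370):", pull(0.46, 0.15, 0.11, 1.35, 0.68))
print("f_0(1500):", pull(0.33, 0.03, 0.07, 0.19, 0.07))
print("f_J(1710):", pull(5.0, 0.6, 0.9, 2.56, 0.9))
```

On this crude measure each ratio differs from the earlier value by roughly one to two combined standard deviations.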
## 3 A GLUEBALL-$`q\overline{q}`$ FILTER IN CENTRAL PRODUCTION ?
The WA102 experiment studies mesons produced in double exchange processes. However, even in the case of pure DPE the exchanged particles still have to couple to a final state meson. The coupling of the two exchanged particles can be either by gluon exchange or by quark exchange. Assuming the Pomeron is a colour singlet gluonic system, if a gluon is exchanged then a gluonic state is produced, whereas if a quark is exchanged then a $`q\overline{q}`$ state is produced . In order to describe the data in terms of a physical model, Close and Kirk have proposed that the data be analysed in terms of the difference in transverse momentum ($`dP_T`$) between the particles exchanged from the fast and slow vertices. The idea is that for small differences in transverse momentum between the two exchanged particles an enhancement in the production of glueballs relative to $`q\overline{q}`$ states may occur.
The ratio of the number of events for $`dP_T`$ $`<`$ 0.2 GeV to the number of events for $`dP_T`$ $`>`$ 0.5 GeV has been calculated for each resonance considered . It has been observed that all the undisputed $`q\overline{q}`$ states which can be produced in DPE, namely those with positive G parity and $`I=0`$, have a very small value for this ratio ($`\approx 0.1`$). Some of the states with $`I=1`$ or negative G parity, which cannot be produced by DPE, have a slightly higher value ($`\approx 0.25`$). However, all of these states are suppressed relative to the glueball candidates $`f_0(1500)`$, $`f_J(1710)`$ and $`f_2(1930)`$, together with the enigmatic $`f_0(980)`$, all of which have a large value for this ratio .
## 4 THE AZIMUTHAL ANGLE BETWEEN THE OUTGOING PROTONS
The azimuthal angle $`\varphi `$ is defined as the angle between the $`p_T`$ vectors of the two protons. Naively it may be expected that this angle would be flat irrespective of the resonances produced. Fig. 2 shows the $`\varphi `$ dependence for two resonances with $`J^{PC}`$ = $`0^+`$ (the $`\eta `$ and $`\eta ^{}`$), two with $`J^{PC}`$ = $`1^{++}`$ (the $`f_1(1285)`$ and $`f_1(1420)`$), two with $`J^{PC}`$ = $`2^{++}`$ (the $`f_2(1270)`$ and $`f_2^{}(1525)`$) and two with $`J^{PC}`$ = $`0^{++}`$ (the $`f_0(1500)`$ and $`f_J(1710)`$). The $`\varphi `$ dependence is clearly not flat and considerable variation is observed between resonances with different $`J^{PC}`$s.
Several theoretical papers have been published on these effects . All agree that the exchanged particle (Pomeron) must have J $`>`$ 0 and that J = 1 is the simplest explanation. Using $`\gamma ^{}\gamma ^{}`$ collisions as an analogy Close and Schuler have calculated the $`\varphi `$ dependencies for the production of resonances with different $`J^{PC}`$s. They have found that for a $`J^{PC}`$ = $`0^+`$ state
$$\frac{d^3\sigma }{d\varphi dt_1dt_2}\propto t_1t_2\mathrm{sin}^2\varphi $$
(2)
As can be seen from fig. 2, the $`\varphi `$ distributions are proportional to $`\mathrm{sin}^2\varphi `$, and it has been found experimentally that $`d\sigma /dt`$ is proportional to $`t`$ . For the $`J^{PC}`$ = $`1^{++}`$ states this model predicts that $`J_Z`$ = $`\pm 1`$ should dominate, which has been found to be correct , and
$$\frac{d^3\sigma }{d\varphi dt_1dt_2}\propto (\sqrt{t_2}-\sqrt{t_1})^2+\sqrt{t_1t_2}\mathrm{sin}^2(\varphi /2)$$
(3)
As can be seen from fig. 2, the $`\varphi `$ distributions are proportional to $`\alpha +\beta \mathrm{sin}^2(\varphi /2)`$. In addition, equation (3) would predict that when $`|t_2-t_1|`$ is small $`d\sigma /d\varphi `$ should be proportional to $`\mathrm{sin}^2(\varphi /2)`$, while when $`|t_2-t_1|`$ is large $`d\sigma /d\varphi `$ should be constant. As shown in fig. 3, this trend is observed in the data.
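The two limits just described can be illustrated by evaluating the $`1^{++}`$ form of equation (3) directly; the sketch below assumes the form with a relative minus sign between $`\sqrt{t_1}`$ and $`\sqrt{t_2}`$, and the $`t`$ values (in GeV<sup>2</sup>) are arbitrary illustrative choices:

```python
# Shape of the 1^{++} phi distribution, assuming eq. (3) in the form
#   d sigma / d phi ~ (sqrt(t2) - sqrt(t1))**2 + sqrt(t1*t2) * sin(phi/2)**2.
# The t values (GeV^2) are arbitrary illustrative choices.
import math

def dsigma_dphi(phi, t1, t2):
    return (math.sqrt(t2) - math.sqrt(t1)) ** 2 \
        + math.sqrt(t1 * t2) * math.sin(phi / 2.0) ** 2

# |t2 - t1| small: pure sin^2(phi/2), vanishing at phi = 0
print(dsigma_dphi(0.0, 0.3, 0.3), dsigma_dphi(math.pi, 0.3, 0.3))

# |t2 - t1| large: nearly constant in phi
print(dsigma_dphi(0.0, 0.01, 0.8), dsigma_dphi(math.pi, 0.01, 0.8))
```

For $`t_1=t_2`$ the distribution vanishes at $`\varphi =0`$ and is pure $`\mathrm{sin}^2(\varphi /2)`$, while for very different $`t_1`$ and $`t_2`$ it is nearly flat in $`\varphi `$.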
The aim now is to study the $`\varphi `$ dependences of other known $`q\overline{q}`$ states in order to understand more about the nature of the Pomeron and then to use this information as a probe for non-$`q\overline{q}`$ states.
## 5 SUMMARY
In conclusion, a partial wave analysis of the centrally produced $`K^+K^{}`$ and $`\pi ^+\pi ^{}`$ systems has been performed. The striking feature is the observation of the $`f_J(1710)`$ with J = 0 in the $`S_0^{}`$-wave.
A study of centrally produced pp interactions shows that there is the possibility of a glueball-$`q\overline{q}`$ filter mechanism ($`dP_T`$). All the undisputed $`q\overline{q}`$ states are observed to be suppressed at small $`dP_T`$, but the glueball candidates $`f_0(1500)`$, $`f_J(1710)`$ and $`f_2(1930)`$, together with the enigmatic $`f_0(980)`$, survive. In addition, the production cross section for different resonances depends strongly on the azimuthal angle between the two outgoing protons, which may give information on the nature of the Pomeron.
# Coulomb Distortion Effects for (𝑒,𝑒'𝑝) Reactions at High Electron Energy
## Abstract
We report a significant improvement of an approximate method of including electron Coulomb distortion in electron-induced reactions at momentum transfers greater than the inverse of the size of the target nucleus. In particular, we have found a new parametrization for the elastic electron scattering phase shifts that works well at all electron energies greater than 300 $`MeV`$. As an illustration, we apply the improved approximation to the $`(e,e^{}p)`$ reaction from medium and heavy nuclei. We use a relativistic “single particle” model for $`(e,e^{}p)`$ as applied to $`{}_{}{}^{208}Pb(e,e^{}p)`$ and to recently measured data at CEBAF on $`{}_{}{}^{16}O(e,e^{}p)`$ to investigate Coulomb distortion effects while examining the physics of the reaction.
Electron scattering has long been acknowledged as a useful tool for investigating nuclear structure and nuclear properties, especially in the quasielastic region. One of the primary attributes of electron scattering as usually presented is the fact that in the electron plane-wave Born approximation, the cross section can be written as a sum of terms each with a characteristic dependence on electron kinematics and containing various bi-linear products of the Fourier transform of charge and current matrix elements. That is, various structure functions for the process can be extracted from the measured data by so-called Rosenbluth separation methods. The trouble with this picture is that when the Coulomb distortion of the electron wavefunctions arising from the static Coulomb field of the target nucleus is included exactly by partial wave methods, the structure functions can no longer be extracted from the cross section, even in principle.
Electron Coulomb distortion in elastic and inelastic scattering for various processes has been included with various approximations in the past. In the early 90’s Coulomb distortion for the reactions $`(e,e^{})`$ and $`(e,e^{}p)`$ in quasielastic kinematics was treated exactly by the Ohio University group using partial wave expansions of the electron wave functions. Such partial wave treatments are referred to as the distorted wave Born approximation (DWBA) since the static Coulomb distortion is included exactly by numerically solving the radial Dirac equation containing the Coulomb potential for a finite nuclear charge distribution to obtain the distorted electron wave functions. The induced transition by a virtual photon is included to first order (the Born Approximation). While this calculation permits the comparison of various nuclear models to measured cross sections and provides an invaluable check on various approximate techniques of including Coulomb distortion effects, it is numerically challenging and computation time increases rapidly with higher incident electron energy. And, as noted above, it is not possible to separate the cross section into various terms containing the structure functions and develop insights into the role of various terms in the charge and current distributions.
In all of our DWBA investigations of $`(e,e^{})`$ and $`(e,e^{}p)`$ reactions in the quasielastic region, we used a relativistic treatment based on the $`\sigma \omega `$ model for the nucleons involved. In particular, for the $`(e,e^{}p)`$ reaction we use a relativistic Hartree single particle model for a bound state and a relativistic optical model for an outgoing proton combined with the free space relativistic current operator
$$J^\mu =\gamma ^\mu +i\frac{\kappa }{2M}\sigma ^{\mu \nu }\partial _\nu .$$
(1)
Using this model, we compared our DWBA calculations with experimental data measured at various laboratories for $`(e,e^{})`$ and for $`(e,e^{}p)`$, and have found excellent agreement with the data. We concluded that the relativistic nuclear models are in excellent agreement with the measured data, without the need to invoke meson exchange effects and other two-body terms in the current that are necessary in a Schrödinger description that uses a non-relativistic reduction of the free current operator . Therefore, in this brief report we will continue to use our relativistic “single-particle” model to investigate Coulomb distortion effects and to compare to the newly measured data from CEBAF.
To avoid the numerical difficulties associated with DWBA analyses at higher electron energies and to look for a way to still define structure functions, our group developed an approximate treatment of the Coulomb distortion based on the work of Knoll and the work of Lenz and Rosenfelder. Knoll examined approximations of the Green function valid for large momentum transfers (that is, valid for $`qR>1`$ where $`R`$ is the size of the target) while Lenz and Rosenfelder constructed plane-wave-like electron wavefunctions which included Coulomb distortion effects. We were able to greatly improve some previous attempts along this line where various additional approximations were made which turned out not to be valid. We did have the advantage of having the exact DWBA calculation available for incident electron energies up to 400–500 $`MeV`$ for checking our approximations. We compared our approximate treatment of Coulomb distortion to the exact DWBA results for the reaction $`(e,e^{}p)`$ and found good agreement (at about the 1-2$`\%`$ level) near the peaks of cross sections even for heavy nuclei such as $`{}_{}{}^{208}Pb`$. The agreement was not so good away from the peaks.
As discussed in our previous papers, one of the ingredients of our approximate electron wavefunction is a parameterization of the elastic scattering phase shifts in terms of the angular momentum. In this paper, we briefly review our previous approximation of the Coulomb distorted electron wavefunction and present a greatly improved parametrization of the phase shifts which works well at all incident electron energies greater than 300 $`MeV`$. In addition, we will compare our relativistic “single-particle” model to new experimental data from CEBAF.
Our approximate method of including the static Coulomb distortion in the electron wavefunctions is to write the wave functions in a plane-wave-like form;
$$\mathrm{\Psi }^\pm (𝐫)=\frac{p^{}(r)}{p}e^{\pm i\delta (𝐋^2)}e^{i\mathrm{\Delta }}e^{i𝐩^{}(r)\cdot 𝐫}u_p,$$
(2)
where the phase factor $`\delta (𝐋^2)`$ is a function of the square of the orbital angular momentum operator, $`u_p`$ denotes the Dirac spinor, and the local effective momentum $`𝐩^{}(𝐫)`$ is given in terms of the Coulomb potential of the target nucleus by
$$𝐩^{}(𝐫)=\left(p-\frac{1}{r}\int _0^rV(r)𝑑r\right)\widehat{𝐩}.$$
(3)
We refer to this $`r`$-dependent momentum as the Local Electron Momentum Approximation (LEMA). The ad hoc term $`\mathrm{\Delta }=a[\widehat{𝐩}^{}(r)\cdot \widehat{r}]𝐋^2`$ denotes a small higher order correction to the electron wave number which we have written in terms of the parameter $`a=\alpha Z(\frac{16MeV/c}{p})^2`$. The value of 16 MeV/c was determined by comparison with the exact radial wave functions in a partial wave expansion.
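As an illustration of the LEMA, the sketch below evaluates $`p^{}(r)`$ for an electron in the field of a uniformly charged sphere; this model potential and the $`{}_{}{}^{208}Pb`$ numbers are illustrative assumptions, not the finite charge distribution actually used in the calculations:

```python
# Illustration of the local effective momentum of eq. (3),
#   p'(r) = p - (1/r) * Integral_0^r V(r') dr',
# with V(r) modelled as the potential energy of an electron in a uniformly
# charged sphere. The model potential and the 208Pb numbers are illustrative
# assumptions. Units: MeV, fm; hbar*c = 197.327 MeV fm.
import math

HBARC = 197.327        # MeV fm
ALPHA = 1.0 / 137.036  # fine-structure constant

def V(r, Z, R):
    """Attractive electron potential energy for a uniformly charged sphere."""
    if r < R:
        return -Z * ALPHA * HBARC * (3.0 - (r / R) ** 2) / (2.0 * R)
    return -Z * ALPHA * HBARC / r

def p_local(r, p, Z, R, steps=2000):
    """Local effective momentum p'(r), by midpoint integration of eq. (3)."""
    h = r / steps
    integral = sum(V((i + 0.5) * h, Z, R) * h for i in range(steps))
    return p - integral / r

# 208Pb (Z = 82, R ~ 7.1 fm, illustrative), incident momentum 412 MeV/c:
p0 = 412.0
print(p_local(0.01, p0, 82, 7.1) - p0)  # ~ +25 MeV/c shift near the centre
print(p_local(7.1, p0, 82, 7.1) - p0)   # smaller shift at the surface
```

Since the potential is attractive ($`V<0`$), the local momentum is shifted upward, by roughly 25 MeV/c near the centre of a lead nucleus in this simple model.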
The elastic scattering phase shifts are labelled by the Dirac quantum number $`\kappa `$, which takes on positive and negative integer values beginning with one. The eigenvalues of $`𝐉^2`$ are $`j(j+1)`$, which equals $`\kappa ^2-\frac{1}{4}`$. The basic idea of our approximation is to calculate the elastic scattering phases and fit them to a function of $`\kappa ^2`$, and then to replace the discrete values of $`\kappa ^2`$ with the total angular momentum operator $`𝐉^2`$, which we in turn replace by the orbital angular momentum operator $`𝐋^2`$, since the low $`\kappa `$ terms, where the difference between $`j`$ and $`l`$ is significant, contribute very little to the cross section. The removal of any spin dependence apart from what is in the Dirac spinor $`u_p`$ is crucial for defining modified structure functions.
Based on earlier work by others we fitted the elastic scattering phase shifts to a power series in $`\kappa ^2`$ up to second order;
$$\delta _\kappa =b_0+b_2\kappa ^2+b_4\kappa ^4,$$
(4)
where the coefficients $`b_0`$, $`b_2`$ and $`b_4`$ are extracted from a best fit for $`\kappa `$ values up to about $`3pR`$, where $`R`$ is the nuclear radius. Note that this procedure requires calculating the elastic scattering phase shifts for the incident and outgoing electron energies up to $`\kappa `$ values of order $`3pR`$, which for high electron energies can be quite demanding computationally. We refer to these phases as the $`\kappa ^2`$-dependence phases. This fit to the phases worked very well for $`\kappa `$ values up to approximately $`\kappa =3pR\approx 35`$ at medium or low energy, but did not fit the exact phase shifts very well for higher energies where $`\kappa =3pR\approx 50`$ or more. Since we were primarily looking at electron energies in the 300-600 MeV range in our previous work, this discrepancy did not present a significant problem.
However, with CEBAF type energies we need a fit to the phases that will work at any incident energy where the overall approximation can be used; that is, for incident electron energies greater than about 300 $`MeV`$ and processes with momentum transfers greater than about $`1/R`$. In addition, we would like to avoid calculating all of the elastic phase shifts, particularly the very high ones. A reasonable solution is to make use of the fact that the higher $`\kappa `$ phase shifts approach the point Coulomb phases, which have a simple analytical form at high energy. At the other extreme, the low $`\kappa `$ phases corresponding to orbitals which penetrate the nucleus are linear in $`\kappa ^2`$, which was the basis of our initial parametrization. The difficult phases to fit correspond to $`\kappa `$ values of order $`pR`$ which, from a classical point of view, corresponds to scattering from the nuclear surface. Moreover, it is well known that in electron induced reactions the spatial region around the surface gives the largest contribution to the cross section, so it is important to fit the intermediate range as well as possible.
Another goal is to reduce the computer time needed, so we decided to seek a parametrization of the elastic scattering phase shifts in terms of $`\kappa ^2`$ which has the correct large $`\kappa ^2`$ behaviour and becomes linear in $`\kappa ^2`$ at low angular momentum. Since we have the correct large $`\kappa `$ behaviour, we need only calculate the exact scattering phase shifts for $`\kappa `$ values up to of order $`pR`$. The large $`\kappa `$ and small $`\kappa `$ behaviour are quite different, so we chose to write the expression for the phase shift as the sum of two terms with an exponential factor which suppresses one of the terms at small $`\kappa `$ values and the other at large $`\kappa `$ values. After some experimentation, we find that the following parametrization of the elastic scattering phase shift describes the exact phase shifts very well:
$$\delta (\kappa )=\left[a_0+a_2\frac{\kappa ^2}{(pR)^2}\right]e^{-\frac{1.4\kappa ^2}{(pR)^2}}-\frac{\alpha Z}{2}\left(1-e^{-\frac{\kappa ^2}{(pR)^2}}\right)\mathrm{ln}(1+\kappa ^2),$$
(6)
where $`p`$ is the electron momentum and we take the nuclear radius to be given by $`R=1.2A^{1/3}-0.86/A^{1/3}`$. We fit the two constants $`a_0`$ and $`a_2`$ to two of the elastic scattering phase shifts ($`\kappa =1`$ and $`\kappa =Int(pR)+5`$). To a very good approximation, $`a_0=\delta (1)`$ and $`a_2=4\delta (Int(pR)+5)+\alpha Z\mathrm{ln}(2pR)`$, where $`Int(pR)`$ replaces $`pR`$ by the integer just less than $`pR`$. Note that this parametrization only requires the value of the exact scattering phase shift for $`\kappa =1`$ and $`\kappa =Int(pR)+5`$. As shown in Fig. 1, the $`\kappa ^2`$-dependence phase parametrization breaks down for high $`\kappa `$ values and has large deviations for mid-range $`\kappa `$ values. The new phase parametrization fits the exact phases very well for an electron energy of $`E=2400`$ MeV on $`{}_{}{}^{16}O`$, although it does still show some small deviations from the exact phases for $`\kappa `$ values around $`20`$ to $`30`$, which is in the surface region. Clearly additional terms could be added to the parametrization to obtain a better fit. However, as we shall see below, the simple fit that we have used reproduces the cross section quite well.
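A minimal implementation of the parametrization is sketched below. It assumes the damped reading of eq. (6) (negative exponents, and a $`1-e^{-\kappa ^2/(pR)^2}`$ factor in front of the logarithmic term); the two input phase shifts $`\delta (1)`$ and $`\delta (Int(pR)+5)`$ are made-up sample values rather than solutions of the radial Dirac equation, and $`a_2`$ is obtained here by solving the parametrization at the second fitted point rather than from the approximate closed form quoted above:

```python
# Minimal sketch of the phase-shift parametrization of eq. (6):
#   delta(k) = [a0 + a2*k^2/(pR)^2] * exp(-1.4*k^2/(pR)^2)
#              - (alpha*Z/2) * (1 - exp(-k^2/(pR)^2)) * ln(1 + k^2).
# The two input phase shifts below are made-up sample values, not solutions
# of the radial Dirac equation.
import math

def delta_param(kappa, pR, a0, a2, alphaZ):
    x = kappa ** 2 / pR ** 2
    low = (a0 + a2 * x) * math.exp(-1.4 * x)  # penetrating (low-kappa) part
    high = -0.5 * alphaZ * (1.0 - math.exp(-x)) * math.log(1.0 + kappa ** 2)
    return low + high

pR = 30.0                # electron momentum times nuclear radius (assumed)
alphaZ = 82.0 / 137.036  # Pb, for illustration
delta_1 = 0.5            # assumed delta(kappa = 1)
kf = int(pR) + 5         # kappa = Int(pR) + 5
delta_kf = -1.0          # assumed delta(kappa = kf)

a0 = delta_1  # corrections at kappa = 1 are tiny when pR >> 1
x = kf ** 2 / pR ** 2
high_kf = -0.5 * alphaZ * (1.0 - math.exp(-x)) * math.log(1.0 + kf ** 2)
a2 = ((delta_kf - high_kf) / math.exp(-1.4 * x) - a0) / x

# Large kappa: the first term dies away and delta approaches the
# point-Coulomb-like logarithmic term.
print(delta_param(5 * pR, pR, a0, a2, alphaZ))
```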
Using the new phase shift parametrization and the local effective momentum approximation we construct plane-wave-like wave functions for the incoming and outgoing electrons. Since the only spinor dependence is in the Dirac spinor all of the Dirac algebra goes through as usual and we end up with a Møller-like potential which contains an $`r`$-dependent momentum transfer. It is then straightforward to calculate the $`(e,e^{}p)`$ cross sections and modified structure functions. Please see our previous papers for details.
In most $`(e,e^{}p)`$ experiments, there is sufficient energy resolution that protons knocked out of different shells can be examined. It is common to report the experimental results in terms of the reduced cross section $`\rho _m`$ as a function of missing momentum $`p_m`$, which is defined by
$$\rho _m(p_m)=\frac{1}{PE_p\sigma _{eP}}\frac{d^3\sigma }{dE_fd\mathrm{\Omega }_fd\mathrm{\Omega }_P},$$
(7)
where the missing momentum is determined by the kinematics $`𝐩_m=𝐏-𝐪`$, where $`𝐏`$ is the outgoing proton momentum and $`𝐪`$ is the asymptotic momentum transfer from the electron defined by $`𝐪=𝐩_i-𝐩_f`$. For plane wave protons in the final state, $`\rho _m`$ is related to the probability that a bound proton from a given shell has momentum $`𝐩_m`$. For the off-shell electron-proton cross section $`\sigma _{eP}`$ we use the form ‘cc1’ given by de Forest . For distorted outgoing protons, this reduced cross section is just a convenient way of comparing experiment and theory, since the theory results for the cross section can have the same factors removed. Note that all calculations will be carried out in the laboratory system (target fixed frame).
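For the perpendicular kinematics used below, where $`|𝐏|=|𝐪|`$ is held fixed, the missing momentum follows directly from this definition; the momentum transfer value in the sketch is illustrative, not taken from a particular data set:

```python
# Missing momentum p_m = P - q in perpendicular kinematics, where |P| = |q|
# is held fixed and the angle theta between P and q is varied. The momentum
# transfer value is illustrative, not from a particular data set.
import math

def missing_momentum(P, q, theta):
    """|p_m| for proton momentum P, momentum transfer q, opening angle theta."""
    return math.sqrt(P ** 2 + q ** 2 - 2.0 * P * q * math.cos(theta))

q = 460.0  # MeV/c, illustrative
for deg in (0, 5, 10, 20):
    pm = missing_momentum(q, q, math.radians(deg))
    print(f"theta = {deg:2d} deg  |p_m| = {pm:6.1f} MeV/c")
# For P = q this reduces to |p_m| = 2*q*sin(theta/2).
```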
While there are two experimental kinematic arrangements commonly used in $`(e,e^{}p)`$ experiments with designations of parallel kinematics and perpendicular kinematics, in the present work, we consider only perpendicular kinematics. In perpendicular kinematics, the momentum transfer $`𝐪`$ is held fixed along with the magnitude of the momentum of the outgoing proton while the angle between $`𝐪`$ and $`𝐏`$ is varied. The calculated reduced cross section is compared (by means of a linear least squares fit) to the similarly reduced experimental cross section to extract an overall scale factor which is the spectroscopic factor. The spectroscopic factor contains two factors, the occupation probability of a proton in a given orbit and the overlap of the residual nucleus with the $`A1`$ nucleons in the target.
As a test case, we calculate the reduced cross sections with the new phases for a heavy nucleus, $`{}_{}{}^{208}Pb`$. Figure 2 shows the reduced cross section as a function of the missing momentum $`p_m`$ for knocking out protons from the $`3s_{1/2}`$ orbital of $`{}_{}{}^{208}Pb`$. The incident electron energy is $`E_i=412`$ MeV, and the outgoing proton kinetic energy is $`T_p=100`$ MeV. We have chosen $`P=q`$, which corresponds to an electron scattering angle of $`\theta _e=74^o`$. The solid line is the result of the full DWBA , the dashed curve is the result with the new phase shift parametrization, and the dotted curve is the result with the $`\kappa ^2`$-dependence phase shift parametrization. The dashed curve obtained by using the new phases clearly reproduces the exact result much better than the previous $`\kappa ^2`$-dependence phase parametrization over the whole region.
We also apply the new phase shift parametrization to the case of high energy electron scattering on the light nucleus $`{}_{}{}^{16}O`$, where protons are removed from the $`p_{1/2}`$ and $`p_{3/2}`$ orbits. The incident electron energy is $`E_i=2441.6`$ MeV and the outgoing proton kinetic energy is $`T_p=427`$ MeV, as shown in Fig. 3. In this figure, the solid curves are the approximate DWBA results using the new phase shift parametrization, the dotted curves are the PWBA results without Coulomb distortion, and the data are newly measured from CEBAF as reported in the dissertation of Gao . Note that our exact DWBA code cannot evaluate such high energy processes without extensive modification, which we have not done.
As expected, the effect of Coulomb distortion on such a high energy electron induced process is very small except possibly at large missing momentum. Note that the Coulomb effects for $`{}_{}{}^{16}O`$ in the medium energy region (500 MeV) were of the order of $`3\%`$ . This fit to the experimental data using our relativistic “single particle” model for the nucleon wavefunctions results in spectroscopic factors of 61$`\%`$ for the $`p_{1/2}`$ orbital and 70$`\%`$ for the $`p_{3/2}`$ orbital. In our analysis of Saclay data at lower electron energies using a similar nuclear model we found spectroscopic factors of 54% and 57% respectively .
In summary, we have improved our previous approximate method of including Coulomb distortion effects in $`(e,e^{}p)`$ reactions from nuclei. The improvement involves a better parametrization of the elastic scattering phase shifts which has the correct behaviour for large angular momenta and requires the calculation of only two phase shifts (for $`\kappa =1`$ and for $`\kappa `$ equal to $`Int(pR)+5`$). We showed that even for $`(e,e^{}p)`$ on $`{}_{}{}^{208}Pb`$ the cross section calculated with our approximation using the improved parametrization of the phase shifts agrees with the exact DWBA result quite well, even out beyond the second maximum. This is a significant improvement over our previous approximation for the phase shifts. In addition, we compared our relativistic “single-particle” model for $`(e,e^{}p)`$ from $`{}_{}{}^{16}O`$ to the recently measured cross section at Thomas Jefferson Lab and found excellent agreement for the removal of a proton from the $`p_{3/2}`$ and $`p_{1/2}`$ shells with reasonable spectroscopic factors.
Our improved approximate method of including Coulomb distortion in electron scattering reactions works for high energy electrons as well as for more moderate energies (300–500 $`MeV`$), and for experiments at the few percent level this approximate way of including Coulomb distortion is adequate. More importantly, as discussed in our previous paper, this “plane-wave-like” approximation permits the extraction of “structure functions” even in the presence of strong Coulomb effects and thus provides a very good tool for looking into the response of the nucleus to “longitudinal” and “transverse” photons.
# New Features of the Morphotropic Phase Boundary in the PbZr1-xTixO3 System
## I Introduction
The basic features of the PbZr<sub>1-x</sub>Ti<sub>x</sub>O<sub>3</sub> (PZT) phase diagram were determined in the 1950’s. The ceramic PZT system has the cubic perovskite ABO<sub>3</sub> structure at high temperatures. On lowering the temperature the materials undergo a phase transition to a ferroelectric phase for all compositions except those close to pure PbZrO<sub>3</sub>, where they become antiferroelectric. The ferroelectric region is divided into two phases with different symmetries by a morphotropic phase boundary (MPB), nearly vertical in temperature, occurring at a composition close to x= 0.47. The Ti-rich region has tetragonal symmetry ($`F_T`$, space group P4mm) and the Zr-rich region has rhombohedral symmetry ($`F_R`$). The latter is divided into high-temperature ($`F_{R(HT)}`$, space group R3m) and low-temperature ($`F_{R(LT)}`$, space group R3c) zones .
Most of the studies on PZT have been performed for compositions around the MPB, motivated both by the interesting physical properties and the technologically-useful applications, such as high electromechanical coupling factors and permittivities, exhibited by PZT at this boundary . Due to compositional fluctuations, the MPB often appears as an ill-defined region of phase coexistence, instead of a well-defined boundary, whose size depends on the processing conditions . This fact has for many years hindered a detailed interpretation of the nature of the $`F_RF_T`$ phase transition and the MPB itself. In order to explore the MPB in more detail, we have embarked upon a systematic structural study using high-resolution synchrotron x-ray powder diffraction techniques and dielectric measurements to characterize the samples.
A recent unexpected result obtained from high-resolution x-ray measurements on a sample of high compositional homogeneity with x=0.48, was the discovery of a ferroelectric phase with monoclinic symmetry below approximately 250 K . The monoclinic unit cell is such that $`a_m`$ and $`b_m`$ lie along the tetragonal \[$`\overline{1}\overline{1}`$0\] and \[1$`\overline{1}`$0\] directions ($`a_m\approx b_m\approx a_t\sqrt{2}`$), and $`c_m`$ lies close to the tetragonal $`c`$ axis ($`c_m\approx c_t`$). The temperature dependence of the monoclinic angle $`\beta `$, the angle between $`a_m`$ and $`c_m`$, corresponds to the evolution of the order parameter for the tetragonal-monoclinic ($`F_TF_M`$) phase transition. Two other compositions prepared under slightly different conditions with x = 0.47 and 0.50 were also studied at that time, and in the present paper we report results for these materials and propose a preliminary modification of the PZT phase diagram around the MPB.
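The geometric relationship between the two cells can be checked in a few lines; the lattice parameters below are illustrative values of the right magnitude for PZT near the MPB, not the refined ones:

```python
# Geometric relation between the monoclinic and tetragonal cells:
# a_m and b_m lie along the tetragonal [-1-10] and [1-10] directions, so
# a_m ~ b_m ~ a_t*sqrt(2) and c_m ~ c_t, with the monoclinic cell holding
# two perovskite units. The lattice parameters are illustrative values of
# the right magnitude for PZT near x ~ 0.48, not the refined ones.
import math

a_t, c_t = 4.04, 4.14             # tetragonal cell (Angstrom, illustrative)
beta = math.radians(90.5)         # monoclinic angle slightly above 90 deg

a_m = b_m = a_t * math.sqrt(2.0)  # doubled face-diagonal axes
c_m = c_t

V_mono = a_m * b_m * c_m * math.sin(beta)  # monoclinic cell volume (unique axis b)
V_tet = a_t ** 2 * c_t
print(a_m, c_m, V_mono / V_tet)   # volume ratio ~ 2: two perovskite units
```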
## II Experimental
Two different compositions of PZT with Ti contents of 0.47 and 0.50 were prepared by a solid-state reaction from PbO<sub>2</sub>, ZrO<sub>2</sub> and Nb-free TiO<sub>2</sub> with chemical purities better than 99.9%. The mixed powders were calcined at 790<sup>o</sup>C, remilled, isostatically pressed at 200 MPa, and sintered at 1200<sup>o</sup>C for 2h, with heating and cooling rates of 3<sup>o</sup>C/min. To minimize the volatilization of lead oxide, alumina crucibles with tightly-fitting covers were used, and a mixture of PbZrO<sub>3</sub>+5 wt% ZrO<sub>2</sub> was used as a lead source in the crucible. The densities measured by the liquid displacement method were $`\sim `$98% of the theoretical values.
High-resolution synchrotron x-ray powder diffraction measurements were made at beam line X7A at the Brookhaven National Synchrotron Light Source. A Ge(111) double-crystal monochromator was used in combination with a Ge(220) analyser, with wavelengths of 0.6896 Å for x= 0.47 and 0.7995 Å for x= 0.50. In this configuration, the instrumental resolution, $`\mathrm{\Delta }2\theta `$, is slightly better than 0.01<sup>o</sup> in the $`2\theta `$ region 0-30<sup>o</sup>, an order-of-magnitude better than that of a conventional laboratory instrument. The pellets were mounted in symmetric reflection geometry and scans made over selected peaks in the low-angle region of the pattern. Since lead is a strong absorber, the penetration depth below the surface of the pellet at $`2\theta =20^o`$ is only about 2 $`\mu `$m. Measurements were made at various temperatures between 20-790 K for x= 0.47, and from 20-300 K for x= 0.50.
Measurements above room temperature were performed with the pellet mounted on a flat BN sample holder inside a wire-wound BN tube furnace. The accuracy of the temperature was estimated to be within 5 K, and the temperature stability was $`\pm `$2 K. For measurements below room temperature, the sample was mounted on a flat Cu sample holder in a closed-cycle He cryostat. In this case, the estimated accuracy of the temperature was 1 K, with a stability of $`\pm `$0.1 K. The angular regions scanned were chosen so as to cover the pseudo-cubic (100), (110), (111), (200), (220) and (222) reflections, with a 2$`\theta `$ step interval of 0.005 or 0.01<sup>o</sup> depending on the peak widths. Dielectric measurements were performed with a precision LCR meter (Hewlett Packard-4284A) varying temperature at a constant rate of 0.5 K/min with a temperature accuracy better than 0.1 K.
## III Results and Discussion
Based on the analysis of the diffraction data for the PZT compositions studied in this work (x= 0.47 and 0.50) and that previously reported for x= 0.48 , we propose a modification of the PZT phase diagram around its MPB, which includes the new monoclinic phase, as shown in Fig. 1, now extended to temperatures below 300 K. As can be seen, the MPB in Jaffe’s phase diagram corresponds to the phase boundary between the $`F_T`$ and the $`F_M`$ phases, but the $`F_MF_R`$ phase boundary is still not well defined.
Fig. 2 shows that the monoclinic phase is found to exist for x= 0.50 below room temperature. From peak fits based on a pseudo-Voigt peak function, at 20 K the (111)<sub>c</sub> pseudocubic reflection is found to consist of three different peaks corresponding to the monoclinic ($`\overline{2}`$01), (021) and (201) reflections, while the pseudocubic (220)<sub>c</sub> is split into four peaks corresponding to the ($`\overline{2}`$22),(222), (400) and (040) monoclinic reflections. The last two are fairly close to each other, indicating that the difference between $`a_m`$ and $`b_m`$ is quite small. As the temperature increases the monoclinic splitting becomes less evident. For T= 150 K, (400) and (040) at $`2\theta \approx 32.62^o`$ cannot be resolved, showing that $`a_m\approx b_m`$. For T$`>`$ 200 K, the monoclinic features, if any, cannot be detected, and the observed reflections can be indexed on the basis of a tetragonal unit cell. The evolution of the lattice parameters for temperatures below 300 K is shown in Fig. 3.
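As a rough numerical illustration of the (400)/(040) splitting, the Bragg angles can be estimated from the monoclinic plane-spacing formula. The lattice parameters below are assumed round values of the right magnitude for PZT near the MPB, not refined results from this work:

```python
import math

# Assumed illustrative monoclinic lattice parameters (angstroms, degrees),
# NOT refined values from this study.
a, b, c, beta_deg = 5.72, 5.70, 4.10, 90.5
wavelength = 0.7995  # angstroms, as used for the x = 0.50 sample

def two_theta(h, k, l):
    """Bragg angle 2-theta (deg) for (hkl) of a monoclinic cell, unique axis b."""
    beta = math.radians(beta_deg)
    inv_d2 = (h**2 / a**2 + k**2 * math.sin(beta)**2 / b**2 + l**2 / c**2
              - 2 * h * l * math.cos(beta) / (a * c)) / math.sin(beta)**2
    d = 1.0 / math.sqrt(inv_d2)
    return 2 * math.degrees(math.asin(wavelength / (2 * d)))

t400, t040 = two_theta(4, 0, 0), two_theta(0, 4, 0)
print(t400, t040)  # two peaks a little more than 0.1 deg apart near 32.5 deg
```

With the $`\sim `$0.01<sup>o</sup> instrumental resolution such a splitting is resolvable at low temperature, and it collapses as $`a_m\to b_m`$, which is the merging seen at 150 K.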
The PZT composition with x= 0.47 was found to be rhombohedral from 20-300 K. Fig. 4 shows the temperature evolution of the (111)<sub>c</sub> and (200)<sub>c</sub> pseudo-cubic reflections between 300-787 K. At 300 K (111)<sub>c</sub> is split into rhombohedral (111) and (11$`\overline{1}`$) peaks, while (200)<sub>c</sub> remains a single peak, corresponding to rhombohedral (200). For 300 K$`<`$ T$`<`$ 440 K there is a region where the peak profiles broaden in a complex way, suggesting the gradual evolution of a second phase which is difficult to characterize (see T= 372 and 425 K in Fig. 4).
This could simply reflect the coexistence of rhombohedral and tetragonal phases accompanied by a considerable amount of internal strain, but it is certainly not possible to rule out the formation of small local regions of the monoclinic phase. For T above $`\sim `$450 K, the tetragonal phase can be clearly identified although there is still some residual diffuse scattering in the vicinity of rhombohedral (200). Finally, at T $`\approx 665`$ K, the cubic phase appears. The temperature evolution of the lattice parameters is shown in Fig. 5. We note that the peak profiles in the cubic region are about twice as broad as those found for the x = 0.48 sample , indicative of a smaller crystallite size and wider range of compositional inhomogeneity. This is probably associated with the slightly different sintering temperatures used for the preparation of the samples, 1200 and 1250<sup>o</sup>C respectively.
The dielectric permittivity, $`\epsilon `$, was measured along the axis of the pellets at 1, 10 and 100 kHz for both compositions as the temperature was raised from 140 K. $`\epsilon ^{-1}`$ is plotted in Fig. 6 for x= 0.50 in the interval 140 K $`<`$ T$`<`$ 350 K. Two changes of slope are clearly observed as the temperature decreases, the first one at $`\sim `$310 K, and the second at $`\sim `$190 K. It is possible that the first of these two anomalies could correspond to the onset of a local monoclinic distortion even though we were unable to resolve any monoclinic splitting ($`a_m=b_m`$). The anomaly at T $`\sim `$ 190 K would then correspond to the onset of the long-range distortion below which $`a_mb_m`$.
The inverse of the dielectric permittivity with increasing temperature is plotted in Fig. 7 for x= 0.47. In the region below the cubic-tetragonal transition at 640 K, there is an increase in slope in the region around 440 K as indicated by the broken lines, while at lower temperatures the slope continues to increase, but without any sharp discontinuities indicative of a well-defined transition. The results are generally consistent with the diffraction evidence for phase coexistence and the possible existence of a monoclinic phase. The permittivity data are in good agreement with those recently reported by Zhang et al., who also found thermal hysteresis effects between 300-400 K which they interpreted as the coexistence of rhombohedral and tetragonal phases. An R-T coexistence region between 473-533 K was also inferred by Mishra et al. for a sample with x=0.535 on the basis of planar coupling coefficient measurements and laboratory x-ray data.
This work clearly demonstrates the need for both excellent compositional homogeneity and high instrumental resolution for the determination of the features of the PZT phase diagram around the MPB. Further work along these lines is in progress.
## IV Acknowledgments
We thank L.E. Cross and R. Guo for their stimulating discussions. Support by NATO (R.C.G.970037), the Spanish CICyT (PB96-0037) and the U.S. Department of Energy (contract No. DE-AC02-98CH10886) is also acknowledged.
# Triangular Textures for Quark Mass Matrices
## Abstract
The hierarchical quark masses and small mixing angles are shown to lead to a simple triangular form for the $`U`$\- and $`D`$-type quark mass matrices. In the basis where one of the matrices is diagonal, each matrix element of the other is, to a good approximation, the product of a quark mass and a CKM matrix element. The physical content of a general mass matrix can be easily deciphered in its triangular form. This parameterization could serve as a useful starting point for model building. Examples of mass textures are analyzed using this method.
One of the most puzzling problems confronting the Standard Model (SM) concerns the family structure of quarks and leptons. To date there is very little understanding of the phenomena of hierarchical fermion masses and their mixing angles. A common approach to the problem is to postulate phenomenological mass matrices with certain simple textures. Usually these matrices are assumed to be hermitian for simplicity, although non-hermitian mass matrices have also been studied . In practice, non-hermitian mass matrices arise naturally in many models .
In this Letter, we suggest a new parameterization of fermion mass matrices which is non-hermitian and, more specifically, triangular in form. It should be emphasized that any mass matrix can be easily brought into a triangular form by a right-handed rotation, whereas it is non-trivial to make it hermitian. Also, the condition of minimal parameterization can be satisfied for this type of texture. In fact, we believe that in the minimal parameter basis the triangular form is the simplest to study with all unphysical features eliminated from the start. If one considers the case when, say, the $`U`$-quark mass matrix is diagonal, but the $`D`$-quark mass matrix is triangular (which in general contains six real numbers and a phase), there will be only ten parameters in the two mass matrices. These account for the six quark masses, three mixing angles and a phase in the Cabibbo-Kobayashi-Maskawa (CKM) matrix. Furthermore, we will show that when the matrix is “upper-triangular”, with zeros in the lower-left part, each of the matrix elements is approximately equal to the product of a quark mass and a CKM matrix element. Note also that one can always choose a basis where one type is diagonal and the other triangular.
Thus the triangular mass matrix offers not only the most economical, but also the most physical parameterization. Starting from any proposed mass matrix, a transformation into the triangular form enables one to read off the physical parameters immediately. Conversely, all viable mass matrices can be obtained therefrom by a suitable rotation. Before we proceed to study specific triangular textures, it is worth mentioning that there are four types of triangular textures with the three zeros in the upper-left, upper-right, lower-left, and lower-right corners of the mass matrix. All these can be transformed into each other by exchanging rows and columns, which amounts to a change of basis for the left-handed and right-handed quarks, respectively. The first two types do not yield simple relations in terms of the quark masses and mixing angles . The latter two are simply related by exchanging the first and third family right-handed quarks. We are naturally led to choose the texture with zeros in the lower-left corner of the mass matrix, as will become evident later.
Let us now start by writing both the $`U`$\- and $`D`$-type quark mass matrices in the upper-triangular form:
$$T_f=\left(\begin{array}{ccc}a_1& a_2e^{i\varphi _1}& a_3e^{i\varphi _3}\\ 0& b_1& b_2e^{i\varphi _2}\\ 0& 0& c\end{array}\right),$$
(1)
where the phases are explicitly shown ($`a_1`$, $`b_1`$, and $`c`$ are chosen to be positive using phase rotations of the right-handed quarks). We have suppressed the index $`f=U,D`$ from the mass parameters in Eq. (1). As stated before, any mass matrix, $`M`$, can be rotated by a series of right-handed rotations into this upper-triangular form. For example, an appropriate rotation $`U=R_{13}R_{23}R_{12}`$ with $`T=MU`$, can make the (3,1), (3,2), and (2,1) elements vanish in succession.
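Numerically, this right-handed triangularization is simply an RQ decomposition, $`M=RQ`$ with $`R=T`$ upper triangular and $`Q=U^{\dagger }`$ unitary. A minimal sketch using NumPy's QR routine (the example matrix is an arbitrary hierarchical one, not taken from data):

```python
import numpy as np

def rq(M):
    """RQ decomposition M = R @ Q (R upper triangular, Q unitary),
    built from numpy's QR via row/column reversal."""
    J = np.eye(M.shape[0])[::-1]        # anti-diagonal exchange matrix
    Q1, R1 = np.linalg.qr((J @ M).T)    # QR of the flipped, transposed matrix
    R = J @ R1.T @ J                    # upper triangular factor (= T)
    Q = J @ Q1.T                        # unitary factor (= U^dagger)
    return R, Q

# An arbitrary matrix with hierarchical entries (illustrative numbers only).
M = np.array([[0.0030, 0.0060, 0.010],
              [0.0008, 0.0550, 0.100],
              [0.0003, 0.0030, 2.900]])
R, Q = rq(M)
# For a hierarchical matrix, |diag(R)| approximates the singular values (masses).
print(np.sort(np.abs(np.diag(R))), np.linalg.svd(M, compute_uv=False))
```

The diagonal of $`R`$ reproduces the mass eigenvalues up to small corrections of order the mixing angles squared, as claimed in the text.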
We turn to the diagonalization of the matrix $`H_f=T_fT_f^{\dagger }=M_fM_f^{\dagger }`$ which is given explicitly by
$$H_f=\left(\begin{array}{ccc}a_1^2+a_2^2+a_3^2& a_2b_1e^{i\varphi _1}+a_3b_2e^{i(\varphi _3-\varphi _2)}& ca_3e^{i\varphi _3}\\ a_2b_1e^{-i\varphi _1}+a_3b_2e^{i(\varphi _2-\varphi _3)}& b_1^2+b_2^2& cb_2e^{i\varphi _2}\\ ca_3e^{-i\varphi _3}& cb_2e^{-i\varphi _2}& c^2\end{array}\right).$$
(2)
In the basis where $`M_U`$ is diagonal and $`M_D`$ is of the triangular form (1), we can solve for $`M_D`$ in terms of quark masses, mixing angles and the $`CP`$-violating phase. Recall that in this basis $`V_{\mathrm{CKM}}^{\dagger }H_DV_{\mathrm{CKM}}=D_D^2`$, where $`V_{\mathrm{CKM}}`$ is the CKM matrix and $`D_D^2\equiv \mathrm{diag}(m_d^2,m_s^2,m_b^2)`$. Denoting $`x_{ij}\equiv \left(V_{\mathrm{CKM}}D_D^2V_{\mathrm{CKM}}^{\dagger }\right)_{ij}`$, the following relations between the triangular mass matrix parameters and the physical parameters can be derived:
$`c`$ $`=`$ $`\sqrt{x_{33}}=m_bV_{tb}\left(1+O(\lambda ^8)\right),`$ (3)
$`b_2e^{i\varphi _2}`$ $`=`$ $`x_{23}/c=m_bV_{cb}\left(1+O(\lambda ^4)\right),`$ (4)
$`a_3e^{i\varphi _3}`$ $`=`$ $`x_{13}/c=m_bV_{ub}\left(1+O(\lambda ^4)\right),`$ (5)
$`b_1`$ $`=`$ $`\sqrt{x_{22}-b_2^2}=m_sV_{cs}\left(1+O(\lambda ^4)\right),`$ (6)
$`a_2e^{i\varphi _1}`$ $`=`$ $`\left(x_{12}-a_3b_2e^{i(\varphi _3-\varphi _2)}\right)/b_1=m_sV_{us}\left(1+O(\lambda ^4)\right),`$ (7)
$`a_1`$ $`=`$ $`\sqrt{x_{11}-a_2^2-a_3^2}=m_dm_sm_b/(b_1c)={\displaystyle \frac{m_d}{V_{ud}}}\left(1+O(\lambda ^4)\right),`$ (8)
where the last relation is derived by using the determinant of the mass matrix $`a_1b_1c=m_dm_sm_b`$, and $`\lambda =0.22`$ as in the Wolfenstein parameterization . To be consistent with Eq. (1), $`V_{ud}`$, $`V_{cs}`$, and $`V_{tb}`$ are all chosen to be positive. In a nutshell, $`M_D`$ has the simple expression (to $`O(\lambda ^4)`$),
$$M_D\approx \left(\begin{array}{ccc}m_d/V_{ud}& m_sV_{us}& m_bV_{ub}\\ 0& m_sV_{cs}& m_bV_{cb}\\ 0& 0& m_bV_{tb}\end{array}\right),(M_U\mathrm{diagonal}).$$
(9)
It is the hierarchical structure of the masses and the CKM matrix which entails this simple form for the mass matrix. One can easily verify that $`V_{\mathrm{CKM}}^{\dagger }M_D\approx \mathrm{diag}(m_d,m_s,m_b)`$. Also, under phase transformations of the quark fields, $`M_D`$ is invariant in form if we make the replacement $`m_d/V_{ud}\to m_d/V_{ud}^{\ast }`$. Given any mass matrix $`M`$, after a rotation into the upper-triangular form (1), one can read off directly the approximate mass eigenvalues and mixings. Of course, exact solutions can also be obtained from Eq. (3). We have compared numerical solutions with Eq. (9) and found them to agree very well.
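The approximate form (9) is easy to test numerically. In the sketch below the CKM matrix is built from the standard exact-unitary angle parameterization and the running $`D`$-quark masses are illustrative round numbers; none of these inputs are fitted values from this Letter:

```python
import numpy as np

# Illustrative inputs (rough PDG-like values, assumed for this sketch):
md, ms, mb = 0.0029, 0.055, 2.89            # GeV
s12, s23, s13, delta = 0.2245, 0.042, 0.0037, 1.2
c12, c23, c13 = (np.sqrt(1 - s**2) for s in (s12, s23, s13))
e = np.exp(1j * delta)
V = np.array([                               # standard (exactly unitary) CKM form
    [c12*c13,                  s12*c13,                  s13*np.conj(e)],
    [-s12*c23 - c12*s23*s13*e, c12*c23 - s12*s23*s13*e,  s23*c13],
    [s12*s23 - c12*c23*s13*e,  -c12*s23 - s12*c23*s13*e, c23*c13]])

# Triangular M_D of Eq. (9): each entry a mass times a CKM element.
MD = np.array([[md/V[0, 0], ms*V[0, 1], mb*V[0, 2]],
               [0,          ms*V[1, 1], mb*V[1, 2]],
               [0,          0,          mb*V[2, 2]]])

U, s, Wh = np.linalg.svd(MD)
print(np.sort(s))          # close to (md, ms, mb)
print(np.abs(U[:, ::-1]))  # close to |V| entry by entry
```

The singular values reproduce the masses and the left singular vectors reproduce the CKM matrix, both to the expected $`O(\lambda ^4)`$ accuracy.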
The relations (3) can be easily inverted to give the physical quantities in terms of the triangular matrix parameters. Up to $`O(\lambda ^4)`$ corrections: $`m_b\approx c,m_s\approx b_1\left(1+\frac{1}{2}\lambda ^2\right),m_d\approx a_1\left(1-\frac{1}{2}\lambda ^2\right),\left|V_{ub}\right|\approx a_3/c,\left|V_{cb}\right|\approx b_2/c,\left|V_{us}\right|\approx (a_2/b_1)\left(1-\frac{1}{2}\lambda ^2\right).`$ Also, we can compute the Jarlskog invariant
$`J`$ $`=`$ $`{\displaystyle \frac{i}{2}}{\displaystyle \frac{\mathrm{det}[H_U,H_D]}{{\displaystyle \prod _{i>j}}\left(m_i^2-m_j^2\right)}}={\displaystyle \frac{a_2a_3b_1b_2c^2}{(m_b^2-m_s^2)(m_b^2-m_d^2)(m_s^2-m_d^2)}}\mathrm{sin}\mathrm{\Phi }_D`$ (10)
$`\approx `$ $`{\displaystyle \frac{a_2a_3b_2}{b_1c^2}}\mathrm{sin}\mathrm{\Phi }_D,`$ (11)
where $`\mathrm{\Phi }_D=\varphi _1+\varphi _2-\varphi _3`$. CP violating effects depend only on the combination $`\mathrm{\Phi }_D`$. In general, one can put the phase $`\mathrm{\Phi }_D`$ in Eq. (1) at any of the (1,2), (2,2), (2,3), or (1,3) positions. Up to $`O(\lambda ^4)`$ correction, $`\mathrm{\Phi }_D`$ is equal to the $`\gamma `$ angle of the unitarity triangle, $`\mathrm{\Phi }_D\approx \gamma \equiv \mathrm{arg}\left[-\frac{V_{ud}V_{ub}^{\ast }}{V_{cd}V_{cb}^{\ast }}\right].`$
If we take $`M_D`$ to be diagonal while $`M_U`$ is assumed to have an upper-triangular form as in Eq. (1), the corresponding relations between the $`M_U`$ parameters and the physical parameters can be derived in the same way as before. The approximate form for $`M_U`$ (to $`O(\lambda ^4)`$) is found to be
$`M_U`$ $`\approx `$ $`\left(\begin{array}{ccc}m_u/V_{ud}& m_cV_{cd}^{\ast }& m_tV_{td}^{\ast }\\ 0& m_cV_{cs}& m_tV_{ts}^{\ast }\\ 0& 0& m_tV_{tb}\end{array}\right),(M_D\mathrm{diagonal}).`$ (12)
Again, here $`V_{ud}`$, $`V_{cs}`$, and $`V_{tb}`$ are chosen to be positive. The Jarlskog invariant is now given by
$`J`$ $`=`$ $`{\displaystyle \frac{a_2a_3b_1b_2c^2}{(m_t^2-m_c^2)(m_t^2-m_u^2)(m_c^2-m_u^2)}}\mathrm{sin}\mathrm{\Phi }_U\approx {\displaystyle \frac{a_2a_3b_2}{b_1c^2}}\mathrm{sin}\mathrm{\Phi }_U,`$ (13)
where $`\mathrm{\Phi }_U=\varphi _1+\varphi _2-\varphi _3`$ and all variables refer to $`M_U`$. The CP phase $`\mathrm{\Phi }_U`$ is related to the $`\beta `$ angle of the unitarity triangle: $`\mathrm{\Phi }_U\approx \beta -\beta _s\approx \beta \left(1+O(\lambda ^2)\right)`$, where $`\beta \equiv \mathrm{arg}\left[-\frac{V_{cd}V_{cb}^{\ast }}{V_{td}V_{tb}^{\ast }}\right]`$ and $`\beta _s=\mathrm{arg}\left[-\frac{V_{ts}V_{tb}^{\ast }}{V_{cs}V_{cb}^{\ast }}\right]`$.
Note that for mass matrices with hierarchical eigenvalues and small mixings, it is always possible to rotate them into the upper-triangular form with the largest element at the (3,3) position. This is the main difference between triangular matrices with zeros in the lower-left and upper-right, the latter does not have this simple hierarchical structure .
If one starts with the case when neither $`M_U`$ nor $`M_D`$ is diagonal, one can convert both into hierarchical triangular forms through right-handed rotations. It may be first necessary to extract a large, common left-handed rotation from both $`M_U`$ and $`M_D`$, which cancels out in $`V_{\mathrm{CKM}}`$, to ensure that only the (3,3) element is large. This is illustrated in the first example below. Furthermore, the diagonal elements are simply the mass eigenvalues if $`a_{1,2}\ll b_1`$. The CKM matrix can then be obtained directly,
$$V_{\mathrm{CKM}}=V_U^{\dagger }V_D,$$
(14)
where $`V_U`$ and $`V_D`$ are obtained from $`M_U`$ and $`M_D`$, as in Eq. (12) and Eq. (9). Both $`V_U`$ and $`V_D`$ are given approximately by $`R_{23}\left(b_2e^{i\varphi _2}/c\right)R_{13}\left(a_3e^{i\varphi _3}/c\right)R_{12}\left(a_2e^{i\varphi _1}/b_1\right)`$.
Examples:
We now show that triangularization is a very useful tool to study the physical content of any mass matrix. Three examples are given: the first is based on the democratic mass texture , the second deals with a realization of the Fritzsch texture. The last example is a new texture that we found based on $`SU_H(3)`$ horizontal symmetry.
\- Democratic Mass Texture: The quark mass matrices are taken to be democratic (all the Yukawa couplings are the same for each quark sector). For example, the $`U`$-quark mass matrix given in is
$$M_U=\frac{K_U}{3}\left[\left(\begin{array}{ccc}1& 1& 1\\ 1& 1& 1\\ 1& 1& 1\end{array}\right)+\left(\begin{array}{ccc}ϵ& 0& \delta \\ 0& ϵ& \delta \\ \delta & \delta & 0\end{array}\right)\right],$$
(15)
where the second term expresses small effects which violate the democratic texture ($`ϵ\ll \delta \ll 1`$). In the limit when $`\delta ,ϵ\to 0`$, this matrix (also $`M_D`$) is diagonalized by the unitary matrix
$$A=\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}& -\frac{1}{\sqrt{2}}& 0\\ \frac{1}{\sqrt{6}}& \frac{1}{\sqrt{6}}& -\frac{2}{\sqrt{6}}\\ \frac{1}{\sqrt{3}}& \frac{1}{\sqrt{3}}& \frac{1}{\sqrt{3}}\end{array}\right).$$
(16)
We may then rotate $`AM_UA^{-1}`$ into the following triangular form:
$$T_U\approx \frac{K_U}{3}\left(\begin{array}{ccc}\frac{ϵ^2}{2\delta ^2}& \frac{ϵ}{\sqrt{3}}& \sqrt{\frac{2}{3}}ϵ\\ 0& \frac{2}{3}\delta ^2& \sqrt{2}\delta \\ 0& 0& 3\end{array}\right),$$
(17)
The mass eigenvalues ($`m_u\approx \frac{ϵ^2K_U}{6\delta ^2},m_c\approx \frac{2}{9}\delta ^2K_U,m_t\approx K_U`$) can be easily read off from the diagonal elements and the $`U`$-quark mixing angles are simply given by $`\theta _U^{12}\approx \frac{\sqrt{3}}{2}\frac{ϵ}{\delta ^2}`$, $`\theta _U^{23}\approx \frac{\sqrt{2}}{3}\delta `$, and $`\theta _U^{13}\approx \frac{2}{3\sqrt{6}}ϵ`$, in agreement with the results in . A similar analysis can be done for the $`D`$-quark mass matrix.
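The claimed spectrum can be checked directly from the triangular form (17); the values of $`ϵ`$ and $`\delta `$ below are arbitrary small numbers (with $`ϵ\ll \delta \ll 1`$), not fitted inputs:

```python
import numpy as np

K, eps, delta = 1.0, 0.002, 0.1   # arbitrary illustrative values, eps << delta << 1
T = (K / 3) * np.array([
    [eps**2 / (2 * delta**2), eps / np.sqrt(3), np.sqrt(2.0 / 3.0) * eps],
    [0,                       (2.0 / 3.0) * delta**2, np.sqrt(2) * delta],
    [0,                       0,                      3.0]])

# Singular values of the triangular matrix = mass eigenvalues (ascending).
masses = np.sort(np.linalg.svd(T, compute_uv=False))
expected = np.array([K * eps**2 / (6 * delta**2), 2 * delta**2 * K / 9, K])
print(masses, expected)
```

The diagonal-element formulas hold up to corrections of order the mixing angles squared, consistent with the hierarchical expansion.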
\- Fritzsch Mass Texture: The quark mass matrices studied in are given by
$$M_{D,U}=\left(\begin{array}{ccc}0& \sqrt{m_1m_2}e^{i\delta _1}& 0\\ \sqrt{m_1m_2}e^{i\delta _1}& m_2& \sqrt{m_1m_3}e^{i\delta _2}\\ 0& \sqrt{m_1m_3}e^{i\delta _2}& m_3\end{array}\right),$$
(18)
where $`m_i(i=1,2,3)`$ are the quark masses for the $`U`$ and $`D`$ sectors. The corresponding triangular form after performing right-handed rotations is given by
$$T_{D,U}\approx \left(\begin{array}{ccc}m_1\left(1+\frac{m_1}{2m_2}\right)& \sqrt{m_1m_2}\left(1-\frac{m_1}{2m_2}\right)e^{i\delta _1}& m_1\sqrt{\frac{m_2}{m_3}}e^{i(\delta _1+\delta _2)}\\ 0& m_2\left(1-\frac{m_1}{2m_2}\right)& \sqrt{m_1m_3}\left(1+\frac{m_2}{m_3}\right)e^{i\delta _2}\\ 0& 0& m_3\end{array}\right).$$
(19)
Note the appearance of $`m_1`$ at the (1,1) position. The CKM matrix which follows agrees with the results of .
\- New $`SU(3)`$ motivated Texture: Consider the following texture:
$$M_D=m_b\left(\begin{array}{ccc}a& a& ae^{i\varphi _d}\\ b& b+2a& 2b+2a\\ 0& 0& 1\end{array}\right).$$
(20)
The $`U`$-type mass matrix is taken to be diagonal and real. To examine whether this form of $`M_D`$ is viable, we first convert it into the triangular form:
$$T_D=m_b\left(\begin{array}{ccc}\frac{2a^2}{B}& \frac{2a\left(a+b\right)}{B}& ae^{i\varphi _d}\\ 0& B& 2b+2a\\ 0& 0& 1\end{array}\right),$$
(21)
where $`B=\sqrt{2b^2+4ab+4a^2}`$. Comparing Eq. (21) to Eq. (9), we obtain the following relations (up to $`O(\lambda ^4)`$ corrections):
$`a`$ $`\approx `$ $`\sqrt{{\displaystyle \frac{m_dm_s}{2m_b^2}}},`$ (22)
$`b`$ $`\approx `$ $`{\displaystyle \frac{m_s}{\sqrt{2}m_b}}\left(1-\sqrt{{\displaystyle \frac{m_d}{m_s}}}-{\displaystyle \frac{m_d}{m_s}}\right),`$ (23)
$`\left|V_{cb}\right|`$ $`\approx `$ $`2(a+b)\approx \sqrt{2}{\displaystyle \frac{m_s}{m_b}}\left(1-{\displaystyle \frac{m_d}{m_s}}\right),`$ (24)
$`\left|V_{ub}\right|`$ $`\approx `$ $`a\approx \sqrt{{\displaystyle \frac{m_dm_s}{2m_b^2}}},`$ (25)
$`\left|V_{us}\right|`$ $`\approx `$ $`\sqrt{{\displaystyle \frac{m_d}{m_s}}}\left(1-{\displaystyle \frac{m_d}{2m_s}}\right).`$ (26)
Moreover, we have the prediction
$$\left|\frac{V_{us}V_{cb}}{V_{ub}}\right|\approx 2.$$
(27)
Numerically, using the central values of the $`D`$-quark masses at $`M_Z`$ and $`\gamma \approx 60^{\circ }`$ , we obtain
$$a_D=4.93\times 10^{-3},b_D=1.60\times 10^{-2},\varphi _d=60^{\circ }.$$
(28)
The CKM matrix is
$$\left|V_{\mathrm{CKM}}\right|=\left(\begin{array}{ccc}0.976& 0.219& 0.00493\\ 0.219& 0.975& 0.0418\\ 0.00792& 0.0414& 0.999\end{array}\right),$$
(29)
in good agreement with the experimental values. The Jarlskog parameter calculated from $`M_D`$ is given by $`J=4.4\times 10^{-5}\mathrm{sin}\varphi _d`$.
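This viability check can be reproduced directly from Eq. (20): since $`M_U`$ is diagonal, $`V_{\mathrm{CKM}}`$ is just the matrix diagonalizing $`H_D=M_DM_D^{\dagger }`$, and $`J`$ follows from a rephasing-invariant quartet. The inputs are the fitted values of Eq. (28), with $`m_b`$ scaled out:

```python
import numpy as np

a, b, phi = 4.93e-3, 1.60e-2, np.deg2rad(60.0)   # Eq. (28); m_b set to 1
MD = np.array([[a, a,       a * np.exp(1j * phi)],
               [b, b + 2*a, 2*b + 2*a],
               [0, 0,       1.0]])

# With M_U diagonal, V_CKM = V_D, the unitary matrix diagonalizing H_D.
w, V = np.linalg.eigh(MD @ MD.conj().T)   # eigenvalues ascending: (d, s, b)
print(np.abs(V))                          # compare with Eq. (29)

# Rephasing-invariant CP measure J = Im(V_ud V_cs V_us* V_cd*)
J = np.imag(V[0, 0] * V[1, 1] * np.conj(V[0, 1]) * np.conj(V[1, 0]))
print(J)   # magnitude of order 4e-5
```

The moduli of the eigenvector matrix reproduce Eq. (29), and $`|J|`$ comes out at the few $`\times 10^{-5}`$ level quoted above.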
We close with a few concluding remarks. The known hierarchy of the quark masses and mixing angles leads to a very elegant triangular texture for the mass matrices. When either $`M_U`$ or $`M_D`$ is diagonal, one can directly read off the masses and the CKM mixing from the mass matrices. The one independent phase therein corresponds to the angles $`\gamma `$ and $`\beta `$ in the unitarity triangle, respectively. If both $`M_U`$ and $`M_D`$ are triangular, the CKM matrix is a simple product of two unitary matrices. This result is very useful for model building. Given any model mass matrix, triangularization offers an immediate criterion for its viability. We have illustrated this with several examples, where in each case the result was obtained quickly and simply. Clearly, one could also reverse the process by rotating the triangular form to generate viable model mass matrices. We hope that this parameterization can help suggest models which are both phenomenologically correct and theoretically justifiable. The application of triangular mass matrices to the charged lepton sector is immediate, whereas for the neutrino sector, the interpretation of the triangular mass texture may not be as simple due to the large mixings and the possibility of nearly degenerate neutrino masses. Work along this direction is in progress.
###### Acknowledgements.
T. K. and G. W. are supported by the DOE, Grant no. DE-FG02-91ER40681. S. M. is supported by the Purdue Research Foundation.
Quantum Creation of Topological Black Hole
Zhong Chao Wu
Dept. of Physics, Beijing Normal University
Beijing 100875, China
and
Dept. of Applied Mathematics, University of Cape Town
Rondebosch 7700, South Africa
Abstract
The constrained instanton method is used to study quantum creation of a vacuum or charged topological black hole. At the $`WKB`$ level, the relative creation probability is the exponential of a quarter sum of the horizon areas associated with the seed instanton.
PACS number(s): 98.80.Hw, 98.80.Bp, 04.60.Kz, 04.70.Dy
Keywords: quantum cosmology, constrained gravitational instanton, black hole creation, topological black hole
e-mail: wu@axp3g9.icra.it
The cosmological $`C`$ metrics contain many topological black hole spacetimes. In this paper we shall study the quantum creation of a topological black hole. Since one can count the Schwarzschild black hole, for example, twice from two spatial infinities, one can interpret it as a pair of black holes. In the no-boundary universe , we shall use the constrained instanton method to evaluate the relative creation probability . The constrained instanton can mediate the creation of a black hole. The regular instanton can be considered as a special case thereof.
The Euclidean metric of a nonrotating topological black hole can be written in a simple form
$$ds^2=\mathrm{\Delta }d\tau ^2+\mathrm{\Delta }^{-1}dr^2+r^2d\mathrm{\Omega }_2^2,$$
(1)
where
$$\mathrm{\Delta }=p-\frac{2m}{r}+\frac{Q^2}{r^2}-\frac{\mathrm{\Lambda }r^2}{3}.$$
(2)
Here, $`m`$, $`Q`$, $`\mathrm{\Lambda }`$ are the mass parameter, magnetic or electric charge and cosmological constant, respectively, and $`p`$ is the sign of the constant curvature of the two dimensional space $`d\mathrm{\Omega }_2^2`$
$$d\mathrm{\Omega }_2^2=d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2p=1,$$
$$d\mathrm{\Omega }_2^2=d\theta ^2+d\varphi ^2p=0,$$
$$d\mathrm{\Omega }_2^2=d\theta ^2+\mathrm{sinh}^2\theta d\varphi ^2p=-1.$$
(3)
They correspond to the 2-sphere of genus $`g=0`$, the torus of genus $`g=1`$ and the 2-hyperboloid, respectively. The open 2-hyperboloid can be compactified into a homogeneous space with genus $`g\ge 2`$ by identifying opposite sides of a regular polygon of $`4g`$ sides with area $`4\pi (g-1)`$ on the 2-hyperboloid. The gauge field is
$$F=Q\mathrm{sin}\theta d\theta d\varphi p=1,$$
$$F=Qd\theta d\varphi p=0,$$
$$F=Q\text{sinh}\theta d\theta d\varphi p=-1$$
(4)
for the magnetically charged case, and
$$F=\frac{iQ}{r^2}d\tau dr$$
(5)
for the electrically charged case. We do not consider the dyonic case.
At the $`WKB`$ level, the Lorentzian spacetime is created from a constrained instanton as the seed. It is a compact section of a complex manifold with a stationary action under the constraints that in the created universe a 3-geometry and the matter field on it are given. For our case, we shall use the complex form of solution (1)-(5) to construct a compact section by cutting and pasting. In general, it will lead to some conical singularities. The validity of the $`WKB`$ approximation requires the solution to have a stationary action. The existence of the singularity implies that the manifold is not of a stationary action under no constraint. However, we shall show that its action is stationary under the constraints.
All Lorentzian sections obtained through analytic continuations from the constrained instanton can be interpreted as being created quantum mechanically from the seed. Its relative creation probability is
$$P\sim \mathrm{exp}(-I_r),$$
(6)
where $`I_r`$ is the real part of the Euclidean version of the action $`I`$. It can be written as
$$I=-\frac{1}{16\pi }\int _M(R-2\mathrm{\Lambda }-F^2)-\frac{1}{8\pi }\int _{\partial M}K,$$
(7)
where $`R`$ is the scalar curvature of the spacetime $`M`$, $`K`$ is the trace of the second form of the boundary $`M`$, and $`F^2=F_{\mu \nu }F^{\mu \nu }`$ is the Lagrangian of the Maxwell field.
We now construct the constrained instanton. One can factorize $`\mathrm{\Delta }`$ into
$$\mathrm{\Delta }(r)=-\frac{\mathrm{\Lambda }}{3r^2}(r-r_0)(r-r_1)(r-r_2)(r-r_3),$$
(8)
where $`r_i(i=0,1,2,3)`$ are zeros in the ascending order of their real parts. We assume, on physical grounds, that at least two of these roots are real and positive. This condition can be met in a range of these parameters. The other pair can be real or a pair of complex conjugates. If a pair of the roots are equal, or complex conjugates, then the order is not essential. All these zeros are identified as horizons in a general sense.
The roots $`r_l`$ satisfy the following relations:
$$\underset{i}{}r_i=0,$$
(9)
$$\underset{i>j}{}r_ir_j=\frac{3p}{\mathrm{\Lambda }},$$
(10)
$$\underset{i>j>k}{}r_ir_jr_k=\frac{6m}{\mathrm{\Lambda }},$$
(11)
$$\underset{i}{}r_i=-\frac{3Q^2}{\mathrm{\Lambda }}.$$
(12)
The surface gravity $`\kappa _i`$ of $`r_i`$ is
$$\kappa _i=\frac{\mathrm{\Lambda }}{6r_i^2}\underset{j=0,1,2,3,(j\ne i)}{}(r_i-r_j).$$
(13)
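The factorization, the relations (9)-(12) and the surface gravities (13) can all be checked numerically; the parameter values below are arbitrary illustrative choices (with $`p=1`$) that happen to give four real roots:

```python
import numpy as np

Lam, p, m, Q = 3.0, 1, 0.1, 0.1   # arbitrary illustrative parameters
# Delta = 0  <=>  r^4 - (3p/Lam) r^2 + (6m/Lam) r - 3Q^2/Lam = 0
roots = np.sort(np.roots([1, 0, -3*p/Lam, 6*m/Lam, -3*Q**2/Lam]).real)

def Delta(r):
    return p - 2*m/r + Q**2/r**2 - Lam*r**2/3

def dDelta(r):
    return 2*m/r**2 - 2*Q**2/r**3 - 2*Lam*r/3

# Surface gravity from Eq. (13); at each root it equals -Delta'(r_i)/2
kappa = [Lam/(6*ri**2) * np.prod([ri - rj for rj in roots if rj != ri])
         for ri in roots]
print(roots, kappa)
```

The roots sum to zero, their symmetric products reproduce (10)-(12), and the product formula (13) agrees with the derivative of $`\mathrm{\Delta }`$ at each horizon.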
The complex constrained gravitational instanton is formed by identifying two sections of constant values of time $`\tau `$ between two complex horizons $`r_i`$ and $`r_j`$ . The pasting introduces two $`f_l`$-fold $`(l=i,j)`$ covers around the horizons. The $`f_l`$-fold cover then turns the $`(\tau r)`$ plane into a cone with a deficit angle $`2\pi (1f_l)`$ there. Both $`f_i`$ and $`f_j`$ can take any pair of complex numbers with the condition
$$f_i\beta _i-(-1)^{i+j}f_j\beta _j=0,$$
(14)
where $`\beta _l=2\pi \kappa _l^{-1}`$. If $`f_i`$ or $`f_j`$ is different from $`1`$, then the conical singularity contributes to the action expressed by a degenerate form of the surface term in (7).
The singularity contribution to the action is
$$I_h=-\underset{l=i,j}{}ϵ\pi r_l^2(1-f_l).$$
(15)
The volume contribution to the action is
$$I_v=-\frac{ϵ\mathrm{\Lambda }\mathrm{\Delta }\tau }{6}(r_j^3-r_i^3)\pm \frac{ϵQ^2\mathrm{\Delta }\tau }{2}(r_i^{-1}-r_j^{-1}),$$
(16)
where $`\mathrm{\Delta }\tau =f_i\beta _i`$ is the identification time period in the pasting, the $`+`$ ($`-`$) sign is for the magnetic (electric) case, and $`ϵ`$ is $`1`$ for the cases $`p=1,0`$ and is $`g-1`$ for the case $`p=-1`$. Here the unit torus area is chosen to be $`4\pi `$, for convenience.
One can use the joint sections $`\tau =0`$ and $`\tau =\mathrm{\Delta }\tau /2`$ (identified with $`\tau =-\mathrm{\Delta }\tau /2`$) as the equator, and then make a series of analytical continuations to obtain the wave function for the Lorentzian universe created. The action form (7) is suitable for the wave function for a given magnetic charge; the charge is evaluated by the surface integral of the gauge field over the $`S^2`$ $`(\theta ,\varphi )`$ space divided by $`ϵ`$.
From Eqs. (14)-(16) one obtains the total action
$$I=-ϵ\pi (r_i^2+r_j^2),$$
(17)
which is independent of the parameter $`\mathrm{\Delta }\tau `$. $`\mathrm{\Delta }\tau `$ is the only degree of freedom left under the restriction that the 3-metric $`h_{ij}`$ and the magnetic charge at the equator are given. Therefore, the manifold pasted is qualified as a constrained instanton.
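The $`\mathrm{\Delta }\tau `$-independence of the total action can be verified numerically for a pair of horizons bounding a Euclidean region, e.g. the outer black hole and cosmological horizons of a charged $`p=1`$ case with $`\mathrm{\Lambda }>0`$; the parameters below are arbitrary illustrative values:

```python
import numpy as np

Lam, p, m, Q, eps = 3.0, 1, 0.1, 0.1, 1   # arbitrary illustrative parameters
roots = np.sort(np.roots([1, 0, -3*p/Lam, 6*m/Lam, -3*Q**2/Lam]).real)
r2, r3 = roots[2], roots[3]               # outer black-hole and cosmological horizons

def dDelta(r):                            # Delta'(r)
    return 2*m/r**2 - 2*Q**2/r**3 - 2*Lam*r/3

# Unsigned surface gravities |Delta'|/2 and the periods beta = 2*pi/kappa
beta2, beta3 = 2*np.pi/abs(dDelta(r2)/2), 2*np.pi/abs(dDelta(r3)/2)

for f2 in (0.3, 1.0, 1.7):                # arbitrary fold parameters
    dtau = f2 * beta2
    f3 = dtau / beta3                     # matching condition (14), in magnitudes
    I = (-eps*Lam*dtau/6 * (r3**3 - r2**3)        # volume term (16), magnetic sign
         + eps*Q**2*dtau/2 * (1/r2 - 1/r3)
         - eps*np.pi*(r2**2*(1-f2) + r3**2*(1-f3)))  # conical term (15)
    print(I, -eps*np.pi*(r2**2 + r3**2))  # equal for every f2
```

For every choice of the fold parameter the total comes out as $`-ϵ\pi (r_2^2+r_3^2)`$, i.e. minus a quarter of the sum of the horizon areas.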
However, this configuration of the wave function at the equator is not appropriate for the electrically charged case. Instead, the Maxwell part of the action in (7) is suited to a variation under the condition that the following variable, conjugate to the charge, is fixed
$$\omega =ϵ\oint A,$$
(18)
where the integral of the vector potential $`A`$ is around the $`S^1`$ direction of the equator in the $`(\tau r)`$ space. Or equivalently, the wave function obtained from the path integral using the action (7) is $`\mathrm{\Psi }(\omega ,h_{ij})`$. The most convenient choice of the gauge potential for the calculation is
$$A=\frac{iQ}{r^2}\tau dr.$$
(19)
In order to obtain the wave function $`\mathrm{\Psi }(Q,h_{ij})`$ for a given electric charge, one has to appeal to a representation transformation
$$\mathrm{\Psi }(Q,h_{ij})=\frac{1}{2\pi }\int _{-\infty }^{\infty }d\omega e^{-i\omega Q}\mathrm{\Psi }(\omega ,h_{ij}).$$
(20)
This Fourier transformation is equivalent to a multiplication of an extra factor
$$\mathrm{exp}\left(-\frac{ϵ\mathrm{\Delta }\tau Q^2(r_i^{-1}-r_j^{-1})}{2}\right)$$
(21)
to the wave function. Or equivalently, this introduces an extra term into the action, which turns the $`-`$ sign in the matter term of (16) into $`+`$, and the effective action takes the same form as that for the magnetic case. One of the motivations for the Fourier transformation is to recover the duality between magnetic and electric black hole creations .
We have closely followed in deriving (17), using Eqs. (9)–(12). It is noted that the derivation of the result is independent of the condition (10), which is the only place $`p`$ is involved. It is not surprising that the action should be a linear function of $`\mathrm{\Delta }\tau `$. What is surprising is that the action is independent of $`\mathrm{\Delta }\tau `$. Consequently, the action is the negative of the entropy associated with the two horizons of the instanton . Our approach confirms that the black hole entropy is one quarter of the horizon area . This approach is not sensitive to the procedure used in the background subtraction .
In the no-boundary universe, the true constrained instanton for the same universe created should have the largest action in comparison with the rest of the instantons .
Therefore, for the case $`\mathrm{\Lambda }>0`$, if $`Q=0`$, $`r_1`$ disappears and one has to use the instanton with horizons $`r_2,r_3`$; these are identified as the black hole and cosmological horizons. If $`Q\ne 0`$, one has to use the inner and outer black hole horizons $`r_1,r_2`$ for the instanton.
For the case $`\mathrm{\Lambda }<0`$, one has to use the instanton with complex conjugate horizons $`r_0,r_1`$. $`r_2`$ and $`r_3`$ are the inner and outer black hole horizons. If $`Q=0`$, then the inner horizon $`r_2`$ disappears.
From Eqs. (9)(10) one finds that the sum of all horizon areas is a constant
$$\sum _{i=0,1,2,3}S_i=\sum _{i=0,1,2,3}4\pi ϵr_i^2=\frac{24\pi ϵp}{\mathrm{\Lambda }}.$$
(22)
At the $`WKB`$ level, the relative creation probability is the exponential of the negative of the action, i.e. of the entropy. For the case $`\mathrm{\Lambda }>0`$, $`Q=0`$ $`(Q\ne 0)`$, it is the exponential of a quarter of the sum of the outer black hole and cosmological (inner black hole) horizon areas. For the case $`\mathrm{\Lambda }<0`$, $`Q=0`$ $`(Q\ne 0)`$, one can use (22) and conclude that the relative probability is the exponential of a negative quarter of the black hole horizon area (a quarter of the negative sum of inner and outer black hole horizon areas). These results are very similar to those of their cousins, the Kerr-Newman-(anti-)de Sitter black holes .
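The constancy expressed by Eq. (22) is easy to check numerically. Assuming the standard form $`\mathrm{\Delta }=p-2m/r+Q^2/r^2-\mathrm{\Lambda }r^2/3`$ for the metric function (metric (1) itself is not reproduced in this excerpt, so this form is an assumption), the horizons are the roots of a quartic whose coefficients fix $`\sum _ir_i^2=6p/\mathrm{\Lambda }`$ by Vieta's formulas, independently of $`m`$ and $`Q`$:

```python
import numpy as np

# Horizons are the roots of Delta(r) = 0 with the assumed metric function
# Delta = p - 2m/r + Q^2/r^2 - Lambda r^2/3, i.e. of the quartic
# (Lambda/3) r^4 - p r^2 + 2 m r - Q^2 = 0.
def horizons(Lam, p, m, Q):
    # numpy.roots takes coefficients from highest to lowest degree
    return np.roots([Lam / 3.0, 0.0, -p, 2.0 * m, -Q ** 2])

Lam, p, m, Q = 0.3, 1, 1.0, 0.4    # sample values
eps = 1.0                          # epsilon = 1 for p = 1
r = horizons(Lam, p, m, Q)         # complex horizons included

total_area = np.sum(4 * np.pi * eps * r ** 2)
print(total_area.real, 24 * np.pi * eps * p / Lam)   # the two agree
```

Complex-conjugate horizon pairs contribute a real sum of squares, so the relation holds whether or not all four roots are real.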
We are also interested in the creation of black holes with noncompact horizons. One can begin with metric (1) for $`p=1`$. Instead of the previous compactification approach, one can make an analytic continuation letting $`\chi =i\theta `$; the metric then takes the form
$$ds^2=\mathrm{\Delta }d\tau ^2+\mathrm{\Delta }^{-1}dr^2-r^2(d\chi ^2+\mathrm{sinh}^2\chi d\varphi ^2).$$
(23)
The constrained instanton is similar to that for the case of an $`S^2`$ horizon with the modified signature. The creation probability should be the same as that for the case of a compactified 2-hyperboloid horizon with $`g=2`$.
At the $`WKB`$ level, one can obtain the total creation probability of the black hole for $`\mathrm{\Lambda }<0`$ with given charge and mass and all different topologies
$$P=P_1+P_0+P_{-1}\left(1+\frac{1}{1-P_{-1}}\right),$$
(24)
where $`P_1,P_0`$ and $`P_{-1}`$ are the probabilities for the cases of horizons with topology 2-sphere, 2-torus and compactified 2-hyperboloid of $`g=2`$, respectively. $`P_{-1}`$ is also the creation probability for the case of an open 2-hyperboloid horizon. The fraction term accounts for all cases of a compactified 2-hyperboloid horizon.
The case $`\mathrm{\Lambda }=0`$ can be considered as the limiting case as $`\mathrm{\Lambda }`$ approaches zero from below.
All known results of topological black hole creations mediated by regular instantons become special cases of the consideration here. It is noted that for black hole creation in the open space background we do not require the presence of a domain wall for the compactification of spacetime as in .
We have investigated the quantum creation of the black hole with 2-dimensional horizon of topology of genus $`g`$. There exists another kind of constant curvature black hole. This is called the 4-dimensional version of the $`BTZ`$ black hole . The case of the original 3-dimensional $`BTZ`$ black hole and its higher dimensional version is dealt with in a separate publication.
Acknowledgements
I would like to thank G.F.R. Ellis of University of Cape Town for his hospitality.
References:
1. J.F. Plebanski and M. Demianski, Ann. Phys. 98, 98 (1976).
2. J.B. Hartle and S.W. Hawking, Phys. Rev. D28, 2960 (1983).
3. Z.C. Wu, Int. J. Mod. Phys. D6, 199 (1997), gr-qc/9801020.
4. Z.C. Wu, Phys. Lett. B445, 274 (1999); gr-qc/9810077.
5. S.W. Hawking and S.F. Ross, Phys. Rev. D52, 5865 (1995).
6. R.B. Mann and S.F. Ross, Phys. Rev. D52, 2254 (1995).
7. Z.C. Wu, Gen. Relativ. Grav. 31, 1097 (1999), gr-qc/9812051.
8. D. Brill, J. Louko and P. Peldan, Phys. Rev. D56, 3600 (1997).
9. L. Vanzo, Phys. Rev. D56, 6475 (1997).
10. R.B. Mann, Nucl. Phys. B 516, 357 (1998).
11. P.R. Caldwell, A. Chamblin and G.W. Gibbons, Phys. Rev. D53, 7103 (1996).
12. M. Bañados, C. Teitelboim and J. Zanelli, Phys. Rev. Lett. 69, 1849 (1992).
13. M. Bañados, Phys. Rev. D57, 1068 (1998).
UCRHEP-T259
July 1999
Stability of Neutrino Mass Degeneracy
Ernest Ma
Department of Physics
University of California
Riverside, California 92521
## Abstract
Two neutrinos of Majorana masses $`m_{1,2}`$ with mixing angle $`\theta `$ are unstable against radiative corrections in the limit $`m_1=m_2`$, but are stable for $`m_1=-m_2`$ (i.e. opposite $`CP`$ eigenstates) with $`\theta =45^{\circ }`$ which corresponds to an additional symmetry.
Pick two neutrinos, say $`\nu _e`$ and $`\nu _\mu `$. Assume their mass eigenstates to be
$$\nu _1=\nu _e\mathrm{cos}\theta -\nu _\mu \mathrm{sin}\theta ,\nu _2=\nu _e\mathrm{sin}\theta +\nu _\mu \mathrm{cos}\theta ,$$
(1)
with eigenvalues $`m_1`$ and $`m_2`$ respectively. Neutrino oscillations may then occur if both $`\mathrm{\Delta }m^2=m_2^2-m_1^2`$ and $`\mathrm{sin}^22\theta `$ are nonzero. However, it is entirely possible that the hierarchy
$$\mathrm{\Delta }m^2<<m_{1,2}^2$$
(2)
actually exists, so that the smallness of $`\mathrm{\Delta }m^2`$ for neutrino oscillations does not necessarily preclude a much larger common mass for the two neutrinos. In fact, this idea is often extended to all three neutrinos. On the other hand, since the charged-lepton masses are all different, radiative corrections to $`m_1`$ and $`m_2`$ will tend to change $`\mathrm{\Delta }m^2`$ as well as $`\theta `$. This is especially important for the vacuum oscillation solution to the observed solar neutrino deficit which requires $`\mathrm{\Delta }m^2\simeq 10^{-10}`$ eV<sup>2</sup> and $`\mathrm{sin}^22\theta \simeq 1`$. In the following I show that whereas the limit $`m_1=m_2`$ is unstable against radiative corrections, the limit $`m_1=-m_2`$ and $`\theta =45^{\circ }`$ is stable because it is protected by an additional symmetry. \[A negative mass here means that the corresponding Majorana neutrino is odd under $`CP`$ after a $`\gamma _5`$ rotation to remove the minus sign.\]
Consider the $`2\times 2`$ mass matrix $`\mathcal{M}`$ spanning $`\nu _e`$ and $`\nu _\mu `$:
$$\mathcal{M}=\left(\begin{array}{cc}A& B\\ B& C\end{array}\right).$$
(3)
It has eigenvalues
$$m_{1,2}=\frac{1}{2}(C+A)\mp \frac{1}{2}\sqrt{(C-A)^2+4B^2}$$
(4)
where
$`A`$ $`=`$ $`m_1\mathrm{cos}^2\theta +m_2\mathrm{sin}^2\theta ,`$ (5)
$`B`$ $`=`$ $`(m_2-m_1)\mathrm{sin}\theta \mathrm{cos}\theta ,`$ (6)
$`C`$ $`=`$ $`m_1\mathrm{sin}^2\theta +m_2\mathrm{cos}^2\theta .`$ (7)
The mixing angle $`\theta `$ is related to $`\mathcal{M}`$ according to
$$\mathrm{tan}\theta =\frac{2B}{(C-A)+\sqrt{(C-A)^2+4B^2}},$$
(8)
and
$$\mathrm{\Delta }m^2=(C+A)\sqrt{(C-A)^2+4B^2}.$$
(9)
In the above, I have used the convention $`m_2>|m_1|`$ and $`0\le \theta \le 45^{\circ }`$.
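Equations (3)–(9) can be checked numerically by building the mass matrix from chosen $`(m_1,m_2,\theta )`$ and recovering them from the closed forms; a minimal sketch with sample values only:

```python
import numpy as np

m1, m2, theta = -0.5, 0.7, np.radians(30.0)   # sample values, m2 > |m1|

# Eqs. (5)-(7): matrix elements of the mass matrix
A = m1 * np.cos(theta) ** 2 + m2 * np.sin(theta) ** 2
B = (m2 - m1) * np.sin(theta) * np.cos(theta)
C = m1 * np.sin(theta) ** 2 + m2 * np.cos(theta) ** 2

# Eq. (4): closed-form eigenvalues (lower sign for m2)
root = np.sqrt((C - A) ** 2 + 4 * B ** 2)
m1_back = 0.5 * (C + A) - 0.5 * root
m2_back = 0.5 * (C + A) + 0.5 * root

# Eq. (8): closed-form mixing angle
theta_back = np.arctan(2 * B / ((C - A) + root))

print(m1_back, m2_back, np.degrees(theta_back))  # recovers -0.5, 0.7, 30
```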
With radiative corrections, the mass matrix is changed:
$$A\to A(1+2\delta _e),B\to B(1+\delta _e+\delta _\mu ),C\to C(1+2\delta _\mu ).$$
(10)
If both $`e`$ and $`\mu `$ have only gauge interactions, then $`\delta _e=\delta _\mu `$ and $`\mathcal{M}`$ is simply renormalized by an overall factor, resulting in
$$\mathrm{\Delta }m^2\to \mathrm{\Delta }m^2(1+2\delta )^2,$$
(11)
and $`\mathrm{tan}\theta `$ is unchanged. However, because $`e`$ and $`\mu `$ have Yukawa interactions proportional to their masses, nontrivial changes do occur in $`\mathcal{M}`$. Let
$$\delta =(\delta _\mu +\delta _e)/2,\mathrm{\Delta }\delta =\delta _\mu -\delta _e,$$
(12)
then
$$\mathrm{\Delta }m^2\to [(m_2+m_1)(1+2\delta )+(m_2-m_1)\mathrm{\Delta }\delta \mathrm{cos}2\theta ]D,$$
(13)
and
$$\mathrm{tan}\theta \to \frac{(m_2-m_1)\mathrm{sin}2\theta (1+2\delta )}{(m_2-m_1)\mathrm{cos}2\theta (1+2\delta )+(m_2+m_1)\mathrm{\Delta }\delta +D},$$
(14)
where
$$D=\sqrt{(m_2-m_1)^2(1+2\delta )^2+2\mathrm{\Delta }m^2(1+2\delta )\mathrm{\Delta }\delta \mathrm{cos}2\theta +(m_2+m_1)^2(\mathrm{\Delta }\delta )^2}.$$
(15)
There are two ways for $`\mathrm{\Delta }m^2`$ to approach zero:
$$(1)\quad m_2-m_1<<m_2+m_1=2m,$$
(16)
and
$$(2)\quad m_2+m_1<<m_2-m_1=2m.$$
(17)
In Case (1),
$$D\simeq 2m\sqrt{\left(\frac{\mathrm{\Delta }m^2}{4m^2}\right)^2+2\left(\frac{\mathrm{\Delta }m^2}{4m^2}\right)\mathrm{\Delta }\delta \mathrm{cos}2\theta +(\mathrm{\Delta }\delta )^2}.$$
(18)
Hence if $`\mathrm{\Delta }\delta >>\mathrm{\Delta }m^2/4m^2`$, then
$$\mathrm{\Delta }m^2\simeq 4m^2\mathrm{\Delta }\delta ,\mathrm{tan}\theta \simeq 0,$$
(19)
i.e. this situation is unstable. Of course, if $`\mathrm{\Delta }m^2/4m^2>>\mathrm{\Delta }\delta `$, there is no problem. For example, if $`\mathrm{\Delta }m^2\simeq 10^{-3}`$ eV<sup>2</sup> for atmospheric neutrino oscillations and $`m\simeq 1`$ eV, then this is easily satisfied. The model-independent contribution to $`\mathrm{\Delta }\delta `$ from the renormalization of the neutrino wavefunctions is
$$\mathrm{\Delta }\delta =\frac{G_F(m_\mu ^2-m_e^2)}{16\pi ^2\sqrt{2}}\mathrm{ln}\frac{\mathrm{\Lambda }^2}{m_W^2},$$
(20)
where $`\mathrm{\Lambda }`$ is the scale at which the original mass matrix $`\mathcal{M}`$ is defined. Other model-dependent contributions to the mass terms themselves may be of the same order. If $`m_\mu `$ is replaced by $`m_\tau `$ in Eq. (20), $`\mathrm{\Delta }\delta `$ is of order 10<sup>-5</sup>. In that case, only the small-angle matter-enhanced solution to the solar neutrino deficit appears to be stable for $`m\simeq 1`$ eV.
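The magnitude of Eq. (20) is easy to evaluate. The cutoff $`\mathrm{\Lambda }`$ is not fixed by the text, so the value $`10^{12}`$ GeV below is purely illustrative; with it, the $`m_\mu `$ case comes out of order $`10^{-8}`$ and the $`m_\tau `$ case of order $`10^{-5}`$, as stated:

```python
import numpy as np

GF = 1.1664e-5        # Fermi constant, GeV^-2
m_e, m_mu, m_tau, m_W = 0.000511, 0.10566, 1.77686, 80.4   # GeV
Lam = 1.0e12          # GeV; illustrative choice, not fixed by the text

def delta_delta(m_heavy):
    # Eq. (20), with m_mu replaced by m_heavy where appropriate
    return GF * (m_heavy ** 2 - m_e ** 2) / (16 * np.pi ** 2 * np.sqrt(2)) \
        * np.log(Lam ** 2 / m_W ** 2)

print(delta_delta(m_mu))    # a few times 1e-8
print(delta_delta(m_tau))   # close to 1e-5
```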
In Case (2),
$$D\simeq 2m(1+2\delta )\left[1+\left(\frac{\mathrm{\Delta }m^2}{4m^2}\right)\frac{\mathrm{\Delta }\delta \mathrm{cos}2\theta }{(1+2\delta )}\right],$$
(21)
hence
$$\mathrm{\Delta }m^2\to \mathrm{\Delta }m^2(1+2\delta )^2+4m^2\mathrm{\Delta }\delta \mathrm{cos}2\theta (1+2\delta ),$$
(22)
and
$$\mathrm{tan}\theta \to \mathrm{tan}\theta \left[1-\left(\frac{\mathrm{\Delta }m^2}{4m^2}\right)\mathrm{\Delta }\delta \right].$$
(23)
This means that $`\theta `$ is stable and that $`\mathrm{\Delta }m^2`$ is also stable if $`\mathrm{cos}2\theta \simeq 0`$, i.e. $`\theta \simeq 45^{\circ }`$. More precisely, the condition
$$\mathrm{\Delta }\delta \mathrm{cos}2\theta <<\frac{\mathrm{\Delta }m^2}{4m^2}$$
(24)
is required.
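The contrast between the two cases can be made concrete by applying the corrections (10) to the mass matrix and re-diagonalizing. The sketch below uses illustrative numbers: $`m=1`$ eV, $`\theta =45^{\circ }`$, $`\mathrm{\Delta }\delta =10^{-5}`$ and an input splitting $`\mathrm{\Delta }m^2=10^{-10}`$ eV<sup>2</sup>. Case (1) is driven to $`\mathrm{\Delta }m^2\simeq 4m^2\mathrm{\Delta }\delta `$ with a collapsing angle, while case (2) is essentially untouched:

```python
import numpy as np

def corrected(m1, m2, theta, d_e, d_mu):
    """Apply the corrections of Eq. (10) and re-diagonalize via Eqs. (8)-(9)."""
    A = m1 * np.cos(theta) ** 2 + m2 * np.sin(theta) ** 2
    B = (m2 - m1) * np.sin(theta) * np.cos(theta)
    C = m1 * np.sin(theta) ** 2 + m2 * np.cos(theta) ** 2
    A, B, C = A * (1 + 2 * d_e), B * (1 + d_e + d_mu), C * (1 + 2 * d_mu)
    root = np.sqrt((C - A) ** 2 + 4 * B ** 2)
    return (C + A) * root, np.arctan(2 * B / ((C - A) + root))

dd, th = 1.0e-5, np.radians(45.0)
m2 = 1.0 + 5e-11                    # Delta m^2 = 1e-10 eV^2 before corrections

dm2_1, th_1 = corrected(1.0, m2, th, 0.0, dd)    # case (1): m1 = +m2
dm2_2, th_2 = corrected(-1.0, m2, th, 0.0, dd)   # case (2): m1 = -m2

print(dm2_1, np.degrees(th_1))   # ~4e-5 eV^2, angle collapses toward 0
print(dm2_2, np.degrees(th_2))   # ~1e-10 eV^2, angle stays at 45 degrees
```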
Whereas the general form of $`\mathcal{M}`$ given by Eq. (3) has no special symmetry for the entire theory, the limit $`m_1=-m_2`$ and $`\theta =45^{\circ }`$, i.e.
$$\mathcal{M}=\left(\begin{array}{cc}0& m\\ m& 0\end{array}\right)$$
(25)
is a special case which allows the entire theory to have the additional global symmetry $`L_e-L_\mu `$. Hence small deviations are protected against radiative corrections, as shown by Eqs.(22) and (23).
The zero $`\nu _e\nu _e`$ entry of Eq. (25) also has the well-known virtue of predicting an effective zero $`\nu _e`$ mass in neutrinoless double beta decay. This means that $`m`$ may be a few eV even though the above experimental upper limit is one order of magnitude less. Hence neutrinos could be candidates for hot dark matter in this scenario.
In conclusion, neutrino mass degeneracy is theoretically viable and phenomenologically desirable provided that $`m_1\simeq -m_2`$ and $`\theta \simeq 45^{\circ }`$.
ACKNOWLEDGEMENT
I thank V. Berezinsky and J. W. F. Valle for discussions. This work was supported in part by the U. S. Department of Energy under Grant No. DE-FG03-94ER40837.
The extra high energy cosmic rays spectrum in view of the decay of proton at the Planck scale
D.L. Khokhlov
Sumy State University, R.-Korsakov St. 2
Sumy 244007 Ukraine
e-mail: khokhlov@cafe.sumy.ua
## Abstract
The structure of the extra high energy cosmic rays spectrum in view of proton decay is considered. The time required for a proton to travel from its source to the earth defines a limiting energy for the proton. Protons with energies above the limiting energy decay and do not contribute to the EHECRs spectrum. It is assumed that the proton decays at the Planck scale. Depending on the range of distances to the EHECRs sources, the range of limiting proton energies is determined. This allows one to explain the structure of the EHECRs spectrum.
The energy spectrum of extra high energy cosmic rays (EHECRs) above $`10^{10}\mathrm{eV}`$ can be divided into three regions: two ”knees” and one ”ankle” . The first ”knee” appears around $`3\times 10^{15}\mathrm{eV}`$ where the spectral power law index changes from $`2.7`$ to $`3.0`$. The second ”knee” is somewhere between $`10^{17}\mathrm{eV}`$ and $`10^{18}\mathrm{eV}`$ where the spectral slope steepens from $`3.0`$ to around $`3.3`$. The ”ankle” is seen in the region of $`3\times 10^{18}\mathrm{eV}`$ above which the spectral slope flattens out to about $`2.7`$.
Consider the structure of the EHECRs spectrum in view of proton decay. The lifetime of a proton decaying at the scale of the mass $`M`$ is given by
$$t_p=\frac{M^4}{E^5}.$$
(1)
From this, the time $`t`$ required for a proton to travel from the source to the earth defines the limiting energy of the proton
$$E_{lim}=\left(\frac{M^4}{t}\right)^{1/5}.$$
(2)
Within the time $`t`$, protons with energies $`E>E_{lim}`$ decay and do not contribute to the EHECRs spectrum.
Let us assume that the proton decays at the Planck scale $`m_{Pl}=1.2\times 10^{19}\mathrm{GeV}`$, and determine the range of limiting proton energies as a function of the distance to the EHECRs sources. Take the maximum and minimum distances to the source as the size of the universe and the thickness of our galactic disc respectively. For the lifetime of the universe $`t_U=1.06\times 10^{18}\mathrm{s}`$ , the limiting energy is $`E_U=6.7\times 10^{15}\mathrm{eV}`$. This corresponds to the first ”knee” in the EHECRs spectrum. For the thickness of our galactic disc, $`300\mathrm{pc}`$, the limiting energy is $`E_G=1.1\times 10^{18}\mathrm{eV}`$. This corresponds to the second ”knee” in the EHECRs spectrum. Thus the range of limiting proton energies due to proton decay at the Planck scale lies between the first ”knee” $`E\simeq 3\times 10^{15}\mathrm{eV}`$ and the second ”knee” $`E\simeq 10^{17}-10^{18}\mathrm{eV}`$.
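The first of these numbers is easy to reproduce. Converting $`t_U`$ to natural units with $`\hbar \simeq 6.58\times 10^{-16}`$ eV s, Eq. (2) with $`M=m_{Pl}`$ gives a limiting energy of about $`6.6\times 10^{15}`$ eV, consistent with the quoted $`E_U`$; a minimal check:

```python
hbar = 6.582e-16          # eV * s
M = 1.2e19 * 1.0e9        # Planck mass in eV
t_U = 1.06e18             # lifetime of the universe, s

t_nat = t_U / hbar        # time in natural units, eV^-1
E_lim = (M ** 4 / t_nat) ** 0.2   # Eq. (2)
print(E_lim)              # roughly 6.6e15 eV
```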
From the above consideration it follows that the steepening of the spectral power law index from $`2.7`$ to $`3.0`$ at the first ”knee” $`E\simeq 3\times 10^{15}\mathrm{eV}`$ and from $`3.0`$ to around $`3.3`$ at the second ”knee” $`E\simeq 10^{17}-10^{18}\mathrm{eV}`$ can be explained as a result of proton decay at the Planck scale. It then seems natural that, below the ”ankle”, $`E<3\times 10^{18}\mathrm{eV}`$, the EHECRs events are mainly caused by protons, while above the ”ankle”, $`E>3\times 10^{18}\mathrm{eV}`$, they are caused by particles other than protons.
# An inversion procedure for coupled-channel scattering: determining the deuteron-nucleus tensor interaction.
## I Introduction
Various methods for carrying out $`S`$-matrix to potential inversion are now available, see for example , but, until recently, it has been possible only for cases with channel-spin zero or 1/2. However, there have been many accurate experiments involving spin-one polarised particles and these provide a powerful motivation to develop an efficient technique for inversion in cases with higher channel spin, i.e. coupled-channel scattering. With such a technique, one can exploit the very large volume of polarisation data which has accumulated. This includes vector and tensor analysing powers and polarisation transfer observables.
Over the years, we have developed a practical and widely generalisable procedure, the iterative perturbative, IP, method and we recently demonstrated an extension to spin-1 projectiles for the first time. Ref. demonstrates the method in one specific application, but does not give details or a derivation of the method. In this paper we present details of our extension of the IP $`S`$-matrix to potential inversion method to the coupled-channel case of spin-1 projectiles and present further evaluation of it. We also test and evaluate an important extension of the IP method, single step data-to-potential inversion for coupled-channel scattering.
In Ref. we applied single step inversion to analyse real data. We note here that there are also many ways in which IP $`S\to V`$ inversion can contribute to understanding nucleus-nucleus interactions. Perhaps the most obvious application is the inversion of $`S`$-matrix elements found by phase shift analysis of experimental data. However, there are also many important applications which involve the inversion of theoretical $`S`$-matrix elements, i.e. elastic channel $`S`$ derived from coupled channel calculations, Glauber model and resonating group model calculations. The potential found in this way contains information concerning the contribution of tensor components to dynamic polarization and exchange processes to inter-nuclear potentials. For the particular case of spin-polarised deuteron and <sup>6</sup>Li scattering, obvious applications include the study of the influence of reaction channels and distortion effects on the projectile-nucleus potential, especially in its non-central components. There are a number of longstanding puzzles relating to spin-polarised deuteron scattering, including the anomalously small real part of the tensor interaction, which can be studied using these methods.
Coupled-channel inversion represents a significant development in inversion techniques since a non-diagonal potential is derived from a non-diagonal $`S`$-matrix. Such a potential couples channels of the same conserved quantum numbers but different values of orbital angular momentum. Spin-1 inversion is therefore the first example of coupled channel inversion in which a non-diagonal $`S`$-matrix is made to yield a non-diagonal potential. Apart from the general derivation, the present paper is framed for a very specific two-channel case: deuterons scattering from a spin zero nucleus with inversion determining a tensor interaction involving the non-diagonal operator $`T_\mathrm{R}`$ as given below. In spite of the rather specific nature of the present application, we believe this work opens the way to a fully general class of coupled-channel inversion situations involving the determination of a coupling potential from a non-diagonal $`S`$-matrix.
This paper presents in detail only those aspects of the IP inversion formalism which are connected with the specific coupled channel generalisation to spin-1 scattering. In other respects it calls upon previous publications in which the general aspects of the IP inversion procedure are described. Because of the specific application to spin-1 projectiles, we establish our notation by beginning in Section II with a brief review of basic aspects of spin-1 scattering.
An important feature of the IP $`S\to V`$ inversion procedure is the natural way in which it can be convoluted with $`(\mathrm{data})\to S`$ fitting to give an overall $`(\mathrm{data})\to V`$ algorithm. This provides a new and efficient data analysis tool which in many cases obviates the need for independent $`(\mathrm{data})\to S`$ inversion. Important advantages follow when fitting data for many energies since the underlying potential model guarantees that the energy dependence of the $`S`$-matrix will be smooth without the need to postulate parameterized forms for $`S(E)`$. Indeed, we have shown that $`(\mathrm{data})\to V`$ inversion can provide a powerful alternative method for phase shift fitting of multi-energy scattering data for light nuclei.
Ref. contained a restricted analysis of low energy $`\stackrel{}{\mathrm{d}}`$ \+ <sup>4</sup>He data. A future paper will present a much more exhaustive analysis of the very large collection of data for this system. At a later stage we hope to present an analysis of $`{}_{}{}^{6}\stackrel{}{\mathrm{Li}}+^4`$He data, including tensor analysing powers.
## II The scattering of spin-1 nuclei from spin-0 targets
### A Formalism for spin-1 scattering
In order to establish our notation, we outline the standard formalism for the elastic scattering of spin-1 projectiles from a spin-0 target in the presence of tensor forces. The key feature introduced by the tensor interaction of $`T_\mathrm{R}`$ type (see below for a classification of tensor forces) is that it couples channels of different orbital angular momentum $`l`$. Specifically, for particular values of the conserved quantities $`J`$, the total angular momentum, and $`\pi `$, the parity, whenever $`\pi =(-1)^{J+1}`$, then two values of orbital angular momentum, $`l=J-1`$ and $`J+1`$, are coupled by $`T_\mathrm{R}`$.
For total angular momentum $`J`$ and orbital angular momentum $`l^{}`$ the radial wavefunction $`\psi _{l^{}l}^J`$ satisfies the coupled equations,
$$\left[\frac{\mathrm{d}^2}{\mathrm{d}r^2}+k^2-\frac{2\mu }{\hbar ^2}(l^{}J|V|l^{}J)-\frac{l^{}(l^{}+1)}{r^2}\right]\psi _{l^{}l}^J(k,r)=\sum _{l^{\prime \prime }\ne l^{}}\frac{2\mu }{\hbar ^2}(l^{}J|V|l^{\prime \prime }J)\psi _{l^{\prime \prime }l}^J(k,r)$$
(1)
where $`\mu `$ is the reduced mass of the system and $`(l^{}J|V|l^{\prime \prime }J)`$, a function of $`r`$, is the matrix element of the inter-nuclear interaction $`V`$ integrated over all angular and internal degrees of freedom. The second subscript, $`l`$, on $`\psi `$ identifies the incoming orbital angular momentum. This is determined by imposing on the solution of the coupled equations, the following asymptotic boundary conditions :
$$\psi _{l^{}l}^J(k,r)\to \delta _{l^{}l}I_{l^{}}(kr)-S_{l^{}l}^JO_{l^{}}(kr).$$
(2)
Here, $`I_l(r)`$ and $`O_l(r)`$ are the incoming and outgoing asymptotic Coulomb radial wavefunctions, often written $`H_l^{\ast }(r)`$ and $`H_l(r)`$ respectively as in Satchler , namely:
$$I_l(kr)=G_l(kr)-\mathrm{i}F_l(kr);O_l(kr)=G_l(kr)+\mathrm{i}F_l(kr);$$
where $`F_l`$ and $`G_l`$ are regular and irregular Coulomb wavefunctions respectively. Note that the boundary conditions given in Eq. 2 differ by a factor from those adopted by Satchler. Where there is no ambiguity, we suppress the $`J`$ superscript. When $`\pi =(-1)^J`$, $`V`$ is diagonal and Eq. 1 is uncoupled.
In general, $`S`$ will not be unitary, but will be subject to the unitarity limits: $`|S_{11}|^2+|S_{12}|^2\le 1`$ and $`|S_{22}|^2+|S_{21}|^2\le 1`$, where, of course $`S_{12}=S_{21}`$. These limits present no particular problem for $`S\to V`$ inversion where $`S`$ can be assumed to satisfy them, but they can represent a significant problem in the case of data to potential inversion, see Section VI.
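The coupled equations (1) with the boundary conditions (2) can be illustrated numerically for a neutral projectile. The sketch below integrates the $`l=J-1,J+1`$ pair for $`J=1`$ with a toy central well and a $`T_\mathrm{R}`$-like coupling (all radial forms and strengths are invented for illustration, in units with $`\hbar ^2/2\mu =1`$), then extracts the $`2\times 2`$ $`S`$-matrix from the $`I`$/$`O`$ decomposition at large $`r`$. For real potentials the result respects the unitarity limits (here with equality) and the symmetry $`S_{12}=S_{21}`$:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

# Channels l = J-1, J+1 for J = 1; units with hbar^2/(2 mu) = 1, so
# u_i'' + (k^2 - l_i(l_i+1)/r^2) u_i = sum_j V_ij u_j.
J, k = 1, 1.0
ls = np.array([J - 1, J + 1])

def V(r):
    # toy central wells plus a T_R-like off-diagonal coupling
    c = -2.0 * np.exp(-(r / 2.0) ** 2)
    t = -0.5 * np.exp(-((r - 2.0) ** 2))
    return np.array([[c, t], [t, c]])

def rhs(r, y):
    u, up = y[:2], y[2:]
    upp = (ls * (ls + 1) / r ** 2 - k ** 2) * u + V(r) @ u
    return np.concatenate([up, upp])

def integrate(y, r, R=15.0, h=1e-3):
    for _ in range(int((R - r) / h)):            # classical RK4
        k1 = rhs(r, y); k2 = rhs(r + h / 2, y + h / 2 * k1)
        k3 = rhs(r + h / 2, y + h / 2 * k2); k4 = rhs(r + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4); r += h
    return r, y

r0, cols = 0.02, []
for c in (0, 1):                                 # regular solutions, u_l ~ r^(l+1)
    y0 = np.zeros(4)
    y0[c] = r0 ** (ls[c] + 1)
    y0[2 + c] = (ls[c] + 1) * r0 ** ls[c]
    rf, yf = integrate(y0, r0)
    cols.append(yf)
Y = np.array(cols).T                             # rows u1,u2,u1',u2'; columns: solutions

# Match to u = I A - O B (Eq. 2) with I_l = G_l - iF_l, O_l = G_l + iF_l,
# F_l = kr j_l(kr), G_l = -kr y_l(kr) in the neutral (Coulomb-free) case.
x = k * rf
A, B = np.zeros((2, 2), complex), np.zeros((2, 2), complex)
for i, l in enumerate(ls):
    j, jp = spherical_jn(l, x), spherical_jn(l, x, derivative=True)
    yn, ynp = spherical_yn(l, x), spherical_yn(l, x, derivative=True)
    F, G = x * j, -x * yn
    Fp, Gp = k * (j + x * jp), -k * (yn + x * ynp)
    M = np.array([[G - 1j * F, -(G + 1j * F)], [Gp - 1j * Fp, -(Gp + 1j * Fp)]])
    for c in (0, 1):
        A[i, c], B[i, c] = np.linalg.solve(M, [Y[i, c], Y[2 + i, c]])
S = B @ np.linalg.inv(A)
print(np.round(S, 4))                            # symmetric, unitary 2x2 S-matrix
```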
### B The $`T_\mathrm{R}`$ interaction and its effect.
Non-diagonal matrix elements $`(l^{}J|V|l^{\prime \prime }J)`$ occur in Eqn. 1 for elastic scattering of spin-1 projectiles with certain types of tensor force. The possible forms of local tensor interaction have been classified by Satchler who defined $`T_\mathrm{R}`$, $`T_\mathrm{L}`$ and $`T_\mathrm{P}`$ interactions. The $`T_\mathrm{L}`$ interaction is believed to be very small, at least below 50 MeV/u, and is in any case diagonal in $`l`$. The $`T_\mathrm{P}`$ interaction could well be substantial but appears to be hard to distinguish phenomenologically from $`T_\mathrm{R}`$. The gradient operators within $`T_\mathrm{P}`$ make calculations harder, and the present inversion method does not apply to it.
#### 1 The $`T_\mathrm{R}`$ operator
In this work we assume that the inter-nucleus potential $`V`$ contains a tensor force component of $`T_\mathrm{R}`$ form :
$$T_\mathrm{R}V_R(r)\equiv \left((𝐬\cdot \widehat{𝐫})^2-2/3\right)V_R(r).$$
(3)
We quote the matrix elements of the interaction $`T_\mathrm{R}`$ for future reference. The diagonal matrix elements of $`T_\mathrm{R}`$ are:
| | $`l=J-1`$ | $`l=J`$ | $`l=J+1`$ |
| --- | --- | --- | --- |
| $`<Jl|T_\mathrm{R}|Jl>`$ | $`-\frac{1}{3}\frac{J-1}{2J+1}`$ | $`\frac{1}{3}`$ | $`-\frac{1}{3}\frac{J+2}{2J+1}`$ |
and the non-diagonal matrix elements are:
$$<JJ-1|T_\mathrm{R}|JJ+1>=<JJ+1|T_\mathrm{R}|JJ-1>=\frac{[J(J+1)]^{1/2}}{2J+1}.$$
(4)
#### 2 The radial form of the $`T_\mathrm{R}`$ interaction
The derivation of $`V_R(r)`$ from the folding model has been discussed at length long ago by Keaton and his colleagues and also by Raynal . Within the folding model, the deuteron $`T_\mathrm{R}`$ interaction arises directly from the D-state component. The overall general success of folding models for central and spin-orbit interactions suggests that folding model calculations of $`V_R(r)`$ should give at least approximately the correct radial form and overall magnitude, but this has not been borne out in the case of $`T_\mathrm{R}`$ according to extensive phenomenological studies, e.g.. The overall conclusion is that the real part of $`V_R(r)`$ predicted by the folding model is much too strong for heavy nuclei, of the right order of magnitude for light target nuclei, and actually about three times too strong for a <sup>4</sup>He target.<sup>*</sup><sup>*</sup>*Ref exploits the inversion formalism presented here to give an alternative analysis of d + <sup>4</sup>He scattering, a theme elaborated in later papers. These facts, together with a large literature discussing breakup and reaction channel contributions, suggest that we have no generally applicable reliable knowledge of $`V_R(r)`$. There is reason to doubt even the general arguments, based on folding models, that it should be small in the interior of heavier nuclei, away from the nuclear density gradients in the surface. Such gradients define the angle between the projectile spin $`𝐬`$ and the vectorial position $`𝐫`$ of the projectile with respect to the nuclear centre, see Eq. 3.
## III $`S\to V`$ inversion for spin-1 projectiles on spin-0 targets
### A General background of IP inversion
The IP method has been successful for $`S_{lj}V(r)+𝐥𝐬V_{\mathrm{so}}(r)`$ inversion for spin half projectiles, and we now present its generalisation to spin-1 inversion. The only restriction is to a $`T_\mathrm{R}`$ tensor interaction. Certain features of the IP method, to our knowledge not shared by other inversion methods, will be of particular importance in the particular systems to which we shall apply spin-1 inversion. These include the ability to find an explicitly energy dependent potential from phase shifts for a range of energies, the ability to handle a range of energies simultaneously and to include Majorana terms for all potential components. For many applications the important property is that mentioned in the introduction, i.e. that IP inversion lends itself to direct observable to potential inversion. This not only avoids the need for independently determined phase shifts (or $`S`$-matrix), but actually provides an advantageous method of determining such phase shifts. For a full description of IP inversion as applied in the spin-1/2 case see Refs.. The formalism presented in Ref., whereby energy dependent potentials are obtained from multi-energy datasets, can be used with spin-1 inversion as described here, although energy dependence is not actually exploited in the test cases. A brief general account of IP inversion is given in the next section.
### B IP inversion for the coupled channel case; application to spin-1
Our notation must reflect the fact that the outcome of inversion will be a potential with many components. We therefore label each component with an index $`p`$ which identifies central, spin-orbit or tensor terms, each real or imaginary. The number of components doubles when the potential is parity dependent. (Parity dependence is particularly important for light nuclei at lower energies.)
The IP method commences with a ‘starting reference potential’, SRP, and proceeds by iteratively correcting each component $`p`$ of the potential:
$$V^{(p)}\to V^{(p)}+\sum _n\alpha _n^{(p)}v_n^{(p)}(r)$$
(5)
where $`\alpha _n^{(p)}`$ are coefficients to be determined and $`v_n^{(p)}(r)`$ are the functions comprising the ‘inversion basis’, (which, if required, can be chosen differently for different $`p`$). The amplitudes $`\alpha _n^{(p)}`$ are determined at each iteration from linear equations, based on an SVD algorithm, which successively reduce the ‘phase shift distance’ $`\sigma `$ defined by:
$$\sigma ^2=\sum _k|S_k^\mathrm{t}-S_k^\mathrm{c}|^2.$$
(6)
For each partial wave $`k`$, $`S_k^\mathrm{t}`$ is the ‘target’ $`S`$-matrix and $`S_k^\mathrm{c}`$ is for the potential at the current iteration. Here the label $`k`$ is a single index which identifies the partial wave angular momentum $`l`$ as well as the energy $`E_i`$ when multi energy sets of $`S_l(E_i)`$ are simultaneously inverted. It also includes non-diagonal elements of $`S_{ll^{}}^J`$ in the spin-1 case described later.
The linear equations are based on the (usually) very linear response , $`\mathrm{\Delta }S`$, of the complex $`S`$-matrix to small changes $`\mathrm{\Delta }V`$ in the potential. The expression for this is well known in the uncoupled case and is very simple:
$$\mathrm{\Delta }S_l=\frac{\mathrm{i}m}{\hbar ^2k}\int _0^{\infty }(\psi _l(r))^2\mathrm{\Delta }V(r)dr.$$
(7)
In Eq.7, the $`S`$-matrix $`S_l`$ is written in terms of the asymptotic form of the regular radial wave function as $`\psi _l(r)\to I_l(r)-S_lO_l(r)`$ where $`I_l`$ and $`O_l`$ are incoming and outgoing Coulomb wave functions as before. When inverting $`S_l(E_k)`$ over a series of energies $`E_k`$, the energy label $`E_k`$ is implicit in these equations with index $`k`$ subsumed with orbital angular momentum $`l`$ to give an overall channel label. In the case of spin-1/2, the $`j`$ label is also subsumed in the same way . Linear algebraic equations for local variations of $`\alpha _n^{(p)}`$ follow from the minimisation of $`\sigma ^2`$.
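Equation (7) is easy to verify for a single uncoupled partial wave. The sketch below (s-wave, no Coulomb, units $`\hbar =m=1`$, toy Gaussian well and perturbation, all invented for illustration) computes $`S_0`$ for $`V`$ and for $`V+\mathrm{\Delta }V`$ by direct integration and compares the difference with the integral formula:

```python
import numpy as np

k = 1.0
V0 = lambda r: -2.0 * np.exp(-r ** 2)             # toy well
dV = lambda r: 0.01 * np.exp(-(r - 1.0) ** 2)     # small perturbation

def solve(Vfun, R=12.0, h=1e-3):
    """Return S_0 and the normalized wavefunction psi -> I - S O (hbar = m = 1)."""
    upp = lambda r, u: (2.0 * Vfun(r) - k ** 2) * u
    r, u, du = 1e-6, 1e-6, 1.0                    # regular solution, u ~ r
    rs, us = [], []
    for _ in range(int(R / h)):                   # classical RK4
        rs.append(r); us.append(u)
        k1u, k1d = du, upp(r, u)
        k2u, k2d = du + h / 2 * k1d, upp(r + h / 2, u + h / 2 * k1u)
        k3u, k3d = du + h / 2 * k2d, upp(r + h / 2, u + h / 2 * k2u)
        k4u, k4d = du + h * k3d, upp(r + h, u + h * k3u)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        du += h / 6 * (k1d + 2 * k2d + 2 * k3d + k4d)
        r += h
    # match u = alpha e^{-ikr} + beta e^{ikr}; psi = u/alpha, so S = -beta/alpha
    alpha = np.exp(1j * k * r) * (1j * k * u - du) / (2j * k)
    beta = np.exp(-1j * k * r) * (1j * k * u + du) / (2j * k)
    return -beta / alpha, np.array(rs), np.array(us) / alpha

S0, rs, psi = solve(V0)
S1, _, _ = solve(lambda r: V0(r) + dV(r))

dS_direct = S1 - S0
dS_linear = (1j / k) * np.sum(psi ** 2 * dV(rs)) * (rs[1] - rs[0])  # Eq. (7)
print(abs(S0), abs(dS_direct - dS_linear) / abs(dS_direct))
```

With $`m=\hbar =1`$ the prefactor of Eq. (7) reduces to $`\mathrm{i}/k`$; for this perturbation strength the direct and linear-response increments agree at the per cent level.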
We now present the generalized linear response relationship which applies to the non-diagonal $`S`$-matrix for spin-1 elastic scattering. The derivation is given in Section III B 1. For any given set of conserved quantum numbers, certain channels will be coupled by the nucleus-nucleus interaction and we use labels $`\kappa ,\lambda ,\mu ,\nu `$ for these channels. Thus the matrix element of the nucleus-nucleus interaction $`V`$ between the wavefunctions for channels $`\kappa `$ and $`\lambda `$, corresponding to integrating over all coordinates but $`r`$, will be written $`V_{\kappa \lambda }(r)`$. The increment $`\mathrm{\Delta }S_{\kappa \lambda }`$ in the non-diagonal S-matrix which is due to a small perturbation $`\mathrm{\Delta }V_{\kappa \lambda }(r)`$, is
$$\mathrm{\Delta }S_{\kappa \lambda }=\frac{\mathrm{i}\mu }{\hbar ^2k}\sum _{\mu \nu }\int _0^{\infty }\psi _{\mu \kappa }(r)\mathrm{\Delta }V_{\mu \nu }(r)\psi _{\nu \lambda }(r)dr$$
(8)
where $`\psi _{\nu \kappa }`$ is the $`\nu `$th channel (first index) component of that coupled channel solution for the unperturbed non-diagonal potential for which there is in-going flux in channel $`\kappa `$ (second index) only. The normalisation is $`\psi _{\nu \kappa }\to \delta _{\kappa \nu }I_{l_\kappa }-S_{\nu \kappa }O_{l_\nu }`$, where $`I_l`$ and $`O_l`$ are incoming and outgoing Coulomb wavefunctions for orbital angular momentum $`l`$; there is no complex conjugation in the integral. Starting from Eq. 8, spin-one inversion becomes a straightforward generalisation of the procedure outlined above and described in Refs. . The method is implemented in the code IMAGO where the linearity relations have been exhaustively tested by the gradient method.
A convenient feature of the IP method is that one can judge from the behaviour of $`\sigma ^2`$ as the iteration proceeds whether a satisfactory inversion has been achieved. A low value of $`\sigma ^2`$ obviously guarantees that a potential closely reproducing the input $`S_l`$ has been found. Because the IP method is implemented interactively, there is an opportunity to examine the potential for oscillatory features. These might well be spurious and result from over-fitting noisy data. In such a case, one can reduce the basis dimensionality or raise the SVD limit and this generally allows one to achieve a smooth potential, often with only a small increase in $`\sigma ^2`$. One must bear in mind that genuine oscillatory features, corresponding to non-locality in an $`L`$-independent local potential or to $`L`$-dependence of the underlying potential, can be necessary to achieve a precise representation of $`S_{lj}`$ or $`S_{l^{\prime}l}^J`$.
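The effect of raising the SVD limit can be seen in a small linear-algebra sketch (illustrative only, not the IMAGO implementation): with a nearly collinear basis, truncating the small singular values reduces the effective dimensionality and suppresses the noise-amplified components of the solution.

```python
import numpy as np

rng = np.random.default_rng(0)
r = np.linspace(0.0, 10.0, 40)

# strongly overlapping basis responses -> a severely ill-conditioned system
A = np.array([np.exp(-((r - c)/2.5)**2) for c in np.linspace(2.0, 6.0, 12)]).T
x_true = rng.normal(size=12)
b = A @ x_true + 1e-3*rng.normal(size=40)    # 'data' with a little noise

def svd_solve(A, b, svd_limit):
    """Least-squares solution keeping singular values above svd_limit*s_max."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > svd_limit*s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b)/s[keep]), int(keep.sum())

x_raw, n_raw = svd_solve(A, b, 1e-13)   # essentially unregularised
x_reg, n_reg = svd_solve(A, b, 1e-4)    # raised SVD limit
```

The regularised solution uses fewer effective basis directions and has a smaller norm, at the cost of a somewhat larger residual — the same trade-off between smoothness and fit quality described above.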
We stress here that, although the context of our discussion is the determination of a tensor interaction from the non-diagonal $`S`$-matrix elements describing the scattering of spin-1 projectiles, the range of application is much more general.
#### 1 Derivation of non-diagonal perturbation expression.
We now outline the derivation of Eq. 8. The derivation can be applied to general coupled channel inversion from a non-diagonal $`S`$-matrix to a non-diagonal potential. Our starting point is Eq. 1, which we shall write with a simplified notation for two channels. Until the last step in the argument, we shall assume we are using units in which $`\hbar ^2/2m=1`$.
The radial wavefunctions in channel $`i`$ with incoming waves in channel $`\lambda `$ are written as $`\psi _{i\lambda }`$ and have the asymptotic behaviour as $`r\to \infty `$:
$$\psi _{i\lambda }\to \delta _{i\lambda }I_\lambda -S_{\lambda i}O_i$$
(9)
where for simplicity we write $`O_i`$ for the outgoing Coulomb wavefunction with the orbital angular momentum $`l`$ appropriate to channel $`i`$, and similarly for the ingoing wavefunction $`I_i`$. For brevity we omit labels for the conserved quantum numbers $`J`$ and $`\pi `$.
Absorbing the centrifugal interaction within the potential, we can write the coupled equations for the radial wavefunctions appropriate to incoming waves in channel $`\lambda `$ as:
$$\psi _{i\lambda }^{\prime \prime }=\sum _j\left(V_{ij}-E\delta _{ij}\right)\psi _{j\lambda },\qquad i=1,2.$$
(10)
In the case considered here, $`V_{ij}`$ with $`ij`$ arises entirely from the tensor interaction, the matrix $`V_{ij}`$ being symmetric. Now, denoting by $`\overline{\psi }`$ the wavefunction arising from a (symmetric) perturbation in the potential $`V_{ij}V_{ij}+\mathrm{\Delta }V_{ij}`$, we can write:
$$\overline{\psi }_{i\lambda }^{\prime \prime }=\sum _j\left(V_{ij}+\mathrm{\Delta }V_{ij}-E\delta _{ij}\right)\overline{\psi }_{j\lambda },\qquad i=1,2.$$
(11)
Multiplying Eq. 10, written for in-going flux in channel $`\mu `$, by $`\overline{\psi }_{i\lambda }`$, multiplying Eq. 11 by $`\psi _{i\mu }`$, summing over $`i`$ and subtracting the second product from the first, we get:
$$\sum _i\frac{\mathrm{d}}{\mathrm{d}r}\left(\psi _{i\mu }^{\prime }\overline{\psi }_{i\lambda }-\psi _{i\mu }\overline{\psi }_{i\lambda }^{\prime }\right)=-\sum _{ij}\psi _{i\mu }\,\mathrm{\Delta }V_{ij}\,\overline{\psi }_{j\lambda },$$
(12)
since all terms containing $`V_{ij}`$ on the right-hand side cancel, owing to the symmetry of $`V_{ij}`$.
Integrating Eq. (12) from $`r=0`$ to the asymptotic region and using the usual Wronskian relationship $`W[I_l,O_l]=2\mathrm{i}k`$, we get:
$$2\mathrm{i}k\left(\overline{S}_{\mu \lambda }-S_{\lambda \mu }\right)=-\sum _{ij}\int _0^{\infty }\psi _{i\mu }\,\mathrm{\Delta }V_{ij}\,\overline{\psi }_{j\lambda }\,dr.$$
(13)
Since the coupled channel equations with a symmetrical potential matrix give a symmetrical $`S`$-matrix, $`S_{\lambda \mu }=S_{\mu \lambda }`$, the left hand side of Eq. (13) equals
$$2\mathrm{i}k\left(\overline{S}_{\mu \lambda }-S_{\mu \lambda }\right)=2\mathrm{i}k\,\mathrm{\Delta }S_{\mu \lambda }$$
(14)
Hence we find, reinstating the $`2m/\hbar ^2`$ factor on the right hand side:
$$\mathrm{\Delta }S_{\lambda \mu }=\frac{\mathrm{i}m}{\hbar ^2k}\sum _{ij}\int _0^{\infty }\psi _{i\lambda }\,\mathrm{\Delta }V_{ij}\,\overline{\psi }_{j\mu }\,dr.$$
(15)
This expression is valid for any symmetrical finite perturbation $`\mathrm{\Delta }V_{ij}`$ decreasing sufficiently rapidly as $`r\to \infty `$. It is easily extended to any system of coupled-channel equations. For small $`\mathrm{\Delta }V_{ij}`$ we make the Born approximation assumption that $`\overline{\psi }\approx \psi `$ and get:
$$\mathrm{\Delta }S_{\lambda \mu }=\frac{\mathrm{i}m}{\hbar ^2k}\sum _{ij}\int _0^{\infty }\psi _{i\lambda }\,\mathrm{\Delta }V_{ij}\,\psi _{j\mu }\,dr.$$
(16)
The expression (16) is the basis for the coupled-channel inversion method. The success of the method in leading to converged solutions confirms that this equation is widely applicable at each step of our iteration process.
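The exact identity behind Eqs. 12 and 13 (before the Born step) can be verified numerically for a toy two-channel system using SciPy's ODE integrator; the potential matrices below are illustrative, and the units are again $`\hbar ^2/2m=1`$.

```python
import numpy as np
from scipy.integrate import solve_ivp

E = 2.0   # channel energy, units hbar^2/2m = 1

def V(r):    # symmetric 2x2 channel coupling (toy model)
    d = -3.0*np.exp(-r**2)
    c = 0.5*np.exp(-(r - 1.0)**2)
    return np.array([[d, c], [c, 0.5*d]])

def dV(r):   # small symmetric perturbation
    p = 0.05*np.exp(-(r - 1.5)**2)
    return np.array([[p, -0.3*p], [-0.3*p, 2.0*p]])

def rhs(r, y, pot):
    u, up = y[:2], y[2:]
    return np.concatenate([up, (pot(r) - E*np.eye(2)) @ u])

r0, r1 = 1e-3, 8.0
y0 = np.array([0.0, 0.0, 1.0, 0.4])      # regular-type start: u ~ 0 at origin
kw = dict(rtol=1e-10, atol=1e-12, dense_output=True)
psi = solve_ivp(rhs, (r0, r1), y0, args=(V,), **kw).sol
psib = solve_ivp(rhs, (r0, r1), y0, args=(lambda r: V(r) + dV(r),), **kw).sol

def wronskian(r):
    u, up = psi(r)[:2], psi(r)[2:]
    v, vp = psib(r)[:2], psib(r)[2:]
    return up @ v - u @ vp

rs = np.linspace(r0, r1, 8001)
g = np.array([psi(x)[:2] @ dV(x) @ psib(x)[:2] for x in rs])
integral = 0.5*np.sum((g[1:] + g[:-1])*np.diff(rs))
lhs = wronskian(r1) - wronskian(r0)      # should equal -integral exactly
```

The Wronskian-like surface term matches the volume term to the accuracy of the integrator, for any pair of regular solutions of the unperturbed and perturbed equations; no small-$`\mathrm{\Delta }V`$ assumption is needed at this stage.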
## IV Testing coupled-channel $`S\to V`$ inversion for spin-1 projectiles
We carried out two contrasting tests of $`S\to V`$ inversion as described in Sections IV B and IV C below. First let us define the potentials and the inversion basis.
### A Specification of the interaction potential and basis used
In one respect our notation is non-standard. We write down the complete potential for spin-1 projectiles scattering from a spin-zero target. It is
$$V_{\mathrm{cen}}(r)+\mathrm{i}W_{\mathrm{cen}}(r)+V_{\mathrm{coul}}(r)+2\,\mathbf{l}\cdot \mathbf{s}\,(V_{\mathrm{so}}+\mathrm{i}W_{\mathrm{so}})+(V_\mathrm{R}+\mathrm{i}W_\mathrm{R})T_\mathrm{R}$$
(17)
where $`V_{\mathrm{coul}}(r)`$ is the usual hard-sphere Coulomb potential. Note that our spin-orbit potentials $`V_{\mathrm{so}}`$ and $`W_{\mathrm{so}}`$ are defined in such a way that they are half the magnitude of those defined according to the usual convention for spin-1 projectiles. In our papers relating to spin-1/2 projectiles, the usual convention has been used. For the test cases we present, the spin-orbit potential is defined as in Eq. 17.
For notational simplicity, Eq. 17 has not been written to reflect parity dependence. There are two alternative methods of representing parity dependence. The code IMAGO can apply either of these to each of the components in Eq. 17 except $`V_{\mathrm{coul}}`$. The first representation defines Wigner and Majorana components for each term, say $`V_\mathrm{x}`$:
$$V_\mathrm{x}=V_{\mathrm{x},\mathrm{W}}+(-1)^lV_{\mathrm{x},\mathrm{M}}$$
(18)
where $`l`$ is the partial wave angular momentum. With this form the inversion procedure can be made to determine $`V_{\mathrm{x},\mathrm{W}}`$ and $`V_{\mathrm{x},\mathrm{M}}`$ for any or all $`V_\mathrm{x}`$. An alternative approach is to determine independent positive or negative parity components for $`V_\mathrm{x}`$. In many cases, the Wigner-Majorana representation is most natural and has been shown to be preferable where the odd parity term may otherwise be ill-determined. However, sometimes the odd-even representation is more appropriate, for example where a particular $`V_\mathrm{x}`$ has completely different shapes and magnitudes for the different parities, as we believe can be the case for $`V_\mathrm{R}`$. The code IMAGO offers the freedom to represent the parity dependence of each component $`V_\mathrm{x}`$ in either way.
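The two representations carry the same information and are related by a trivial linear map; a minimal sketch with hypothetical depths:

```python
def parity_components(V_W, V_M):
    """V(l) = V_W + (-1)**l * V_M  ->  (even-parity, odd-parity) components."""
    return V_W + V_M, V_W - V_M

def wigner_majorana(V_even, V_odd):
    """Inverse map: recover the Wigner and Majorana terms."""
    return 0.5*(V_even + V_odd), 0.5*(V_even - V_odd)

V_W, V_M = -50.0, 5.0           # hypothetical depths (MeV)
V_even, V_odd = parity_components(V_W, V_M)
```

The choice between the two is therefore purely one of which pair is better determined by (and more naturally parameterised for) the data at hand, as discussed above.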
The IP method is not tied to any particular set of functions for the inversion basis and each component of the potential can be represented by a different basis. It is an important feature of the IP method, as implemented in IMAGO, that a range of different functions is available, and those which we have applied are specified in . Zeroth order Bessel functions and harmonic oscillator functions are both linearly independent sets which have proven useful where bases of large dimensionality are necessary to describe a potential over a wide radial range down to $`r=0`$. For cases involving light nuclei, particularly for inversion of small $`S`$-matrix datasets, a small basis comprising a series of Gaussian functions is preferable. A Gaussian basis covering just the nuclear surface region is also useful for heavy ion cases where there is no information available to determine the potential in the nuclear interior. It is important that a basis should not be chosen which would describe the potential over a radial range, or to a degree of detail, which is not warranted by the information contained in the set {$`S_l`$} or by the nature of the physical situation. In practice, much smaller bases are often necessary in order to eliminate spurious oscillatory features from the potentials. The operation of the SVD algorithm, with adjustable SVD limit, stabilises the inversion and can, where appropriate, reduce the effective dimensionality of the inversion basis.
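As a concrete illustration of such a basis (not the IMAGO routines themselves), a potential component expanded on a few Gaussians covering the surface region might be coded as follows; all numbers are hypothetical.

```python
import numpy as np

def gaussian_basis(r, centres, width):
    """Basis functions v_n(r) = exp(-((r - c_n)/width)**2), shape (len(r), N)."""
    r = np.asarray(r, dtype=float)[:, None]
    c = np.asarray(centres, dtype=float)[None, :]
    return np.exp(-((r - c)/width)**2)

def potential_from_basis(r, alphas, centres, width):
    """V(r) = sum_n alpha_n v_n(r): the linear expansion the IP method corrects."""
    return gaussian_basis(r, centres, width) @ np.asarray(alphas, dtype=float)

# e.g. a small surface-region basis for a heavy-ion case (illustrative numbers)
r = np.linspace(0.0, 12.0, 241)
centres, width = [4.0, 5.5, 7.0], 1.2
V = potential_from_basis(r, [-12.0, -6.0, -1.5], centres, width)
```

Because the expansion is linear in the amplitudes $`\alpha _n`$, the response relations of Section III B apply directly to each basis function.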
### B Single-energy inversion
In Ref. we presented a test of $`S\to V`$ inversion for deuterons scattering from the light nucleus <sup>4</sup>He in which there is very little absorption. Here we present a test for a much heavier nucleus, where there is substantial absorption, and demonstrate that a potential, very accurate almost to the nuclear centre, can be obtained by inversion. The test case studied was for a <sup>58</sup>Ni target and deuterons at a laboratory energy of 56 MeV. The parameters found by Hatanaka et al by fitting angular distributions and the analyzing powers $`A_y`$ and $`A_{yy}`$ were used. A notable feature of the potential was that the imaginary $`T_\mathrm{R}`$ term was quite large, but the real $`T_\mathrm{R}`$ term was very small (a common but unexplained feature of deuteron optical potentials). The spin-orbit component was real. The potential was parity independent, as expected for this combination of target, projectile and energy. The energy and other characteristics of the reaction are such that there are ‘many’ active partial waves. ‘Many’ here means sufficient, even with $`S_{l^{\prime}l}^J`$ for a single energy, to yield a precise reproduction of the potential.
The test was carried out as follows: one of the authors applied the optical model parameters of Hatanaka et al to the standard spin-1 scattering code DDTP , reading out the $`S`$-matrix onto a file. A second author, knowing only the target and the energy, then applied IMAGO to find the potential from these $`S`$-matrix elements. The inversion was carried out with a starting potential which contained only real and imaginary central potential components. These were guessed from general systematics without specific knowledge of Hatanaka’s potential. Since with IMAGO there is complete freedom to choose the starting potential and inversion basis, it is worthwhile to test the inversion method starting with no more information about the potential than might be available in a ‘for real’ case.
When a converged solution was found, the potentials obtained were compared with the known potentials with results shown in Figure 1. The solid lines represent the ‘target’ potential, i.e. the potential from which the $`S`$-matrix was calculated using DDTP. The short dashes represent the potential found by inversion; it can be seen to reproduce the target potential very closely except very near to the nuclear centre in the case of the imaginary tensor component. We have not shown the real tensor component which was only a few percent of the imaginary component nearly everywhere. This small component was reproduced only qualitatively, as expected, since the absolute errors for the real and imaginary $`T_\mathrm{R}`$ components were similar in magnitude and comparable to the real $`T_\mathrm{R}`$ potential itself. In Figure 1, the dashed line represents the starting potential, zero for the spin-orbit and tensor components. The $`S`$-matrix elements for the target and inverted potentials are indistinguishable on a graph, corresponding to values of $`\sigma `$ of roughly $`10^{-3}`$.
This test shows that the inversion procedure has the capability of revealing quite fine details of the potential, as would be required for the kind of applications discussed in the introduction which require the inversion of single-energy $`S`$ derived from theory. Such studies might establish, for example, the contribution of specific exchange terms or reaction couplings to the inter-nucleus potential.
### C Multi-energy inversion at very low energy
At low energies and for light target nuclei, very few partial waves are involved so that there will in general be insufficient information contained in the $`S`$-matrix elements for a single energy to yield a detailed and precise potential. The situation is even worse in cases where parity dependence must be assumed since this halves the information available for potential components of each parity. The problem can be ameliorated if $`S`$ is available for more than one energy. If $`S`$ is available over a narrow range of energies, then the algorithm can be made to yield an energy independent potential; this is what we have earlier called ‘mixed case’ inversion (see Refs. and first of Ref. ) and, in effect, the information from the energy derivative of $`S`$ is exploited. In many cases, $`S_{lj}`$ or $`S_{l^{\prime}l}^J`$ are provided over a wide range of energies. In this case one should ideally consider the potential to be energy dependent and determine the energy dependence itself. This can be done within the framework of the parameterisations presented above.
An example of where the sets of $`S_{lj}`$ or $`S_{l^{\prime}l}^J`$ are too small to define the potential very closely is the $`S\to V`$ inversion situation embedded in the analysis of low energy, experimentally determined, multi-energy observables for d + alpha scattering. A first report was presented in Ref. . The test we now describe is directly relevant and asks the following question: what properties of the potential can reliably be determined from very small sets of $`S`$?
The test was for deuterons scattering from <sup>4</sup>He with $`S_{l^{\prime}l}^J`$ calculated from a known potential at 11 energies: 8, 8.5 …12.5, 13 MeV. The known potential was energy independent but parity dependent and was taken to be real. (The imaginary parts of empirical potentials are known to be small for d + <sup>4</sup>He at these energies.) The following terms were included: central Wigner, central Majorana, spin-orbit Wigner and separate even parity and odd parity $`T_\mathrm{R}`$ tensor potentials (the odd/even choice for $`T_\mathrm{R}`$ reflects what we believe to be the case for the actual d + <sup>4</sup>He tensor force.) The central and spin-orbit terms are like those found in Ref. , and the very large even parity tensor term is based on that of Dubovichenko , see also .
The inversion was effectively ‘mixed case’ in the sense just described. The starting potential was zero in all components except for the Wigner real central and Wigner real spin-orbit terms. In keeping with the nature of this test, the very small inversion basis of Ref. was used. This has two Gaussians only for each component except the central components for which there were three. The centres and widths of the Gaussians were not varied during the inversion.
The ‘target’ (known) and inverted potentials are shown in Figure 2, together with the starting potential required by the IP method. The starting potential is the dot-dashed line, non-zero for two components only, and corresponding to $`\sigma =10.552`$ where $`\sigma `$, defined in Section III B, is summed over the 11 energies. The inverted potential is shown as the dotted line, and the ‘target’ potential, from which $`S_{l^{\prime}l}^J`$ was calculated, is the full line. We see that the qualitative features are reproduced, although less well for the small components and near the nuclear centre. The value of $`\sigma `$ for the potential shown as the dotted line was $`0.135`$, which is reasonable for a low energy multi-energy case. For 10.5 MeV, this corresponds to $`S_{l^{\prime}l}^J`$ for the target and inverted potentials being indistinguishable on a graph apart from a single term: the phase angle of the non-diagonal part of $`S_{l^{\prime}l}^J`$ for higher partial waves, for which, in any case, the magnitude $`|S_{l^{\prime}l}^J|`$, $`l\ne l^{\prime}`$, is very small. The tensor potential, having very different odd and even parity components, is as well reproduced as could be expected with the very small basis. Note that the starting potential for the inversion has zero tensor terms. From the matrix elements of $`T_\mathrm{R}`$ given in Section II B, we see that $`l=0`$ partial waves are ineffective and hence we cannot expect to reproduce the tensor real term $`V_\mathrm{R}`$ at $`r=0`$.
In summary: we found that the qualitative properties of the potential were reliably reproduced, particularly for the larger components. Thus, reliable statements about the general features of d + <sup>4</sup>He potentials can be made, but nothing can be asserted concerning non-central interactions for $`r<0.5`$ fm.
## V Inverting $`S_{l^{\prime}l}^J`$ calculated with a $`T_\mathrm{P}`$ tensor interaction.
The inversion technique which we have described is limited to a tensor force of the $`T_\mathrm{R}`$ type. Since there exist processes which are expected to lead to $`T_\mathrm{P}`$ forces, the possibility must be faced that data analysed using the data-to-$`V`$ extension of the inversion method, which is described in Section VI, will indeed involve a $`T_\mathrm{P}`$ tensor interaction. It is therefore relevant to ask, in the context of $`S\to V`$ inversion: can we invert $`S_{l^{\prime}l}^J`$ calculated with a $`T_\mathrm{P}`$ tensor interaction using a potential which has only a $`T_\mathrm{R}`$ tensor interaction? If so, to what extent does inversion yield valid central and spin-orbit components?
There is further interest in knowing how well the general effects of a $`T_\mathrm{P}`$ interaction can be represented by a $`T_\mathrm{R}`$ potential. The properties and even existence of a $`T_\mathrm{P}`$ interaction have not yet been convincingly linked to experiment since the consequences of the two kinds of interaction are difficult to distinguish phenomenologically. This was discussed by Goddard who compared $`S_{l^{\prime}l}^J`$ and the observables calculated from a $`T_\mathrm{R}`$ interaction with the corresponding quantities calculated from a particular $`T_\mathrm{P}`$ interaction devised in such a way that, according to semi-classical arguments, it would be very similar in effect.
We study these questions by exploiting the equivalent pairs of tensor potentials introduced by Goddard. We first inverted $`S_{l^{\prime}l}^J`$ for 30 MeV deuterons scattered from <sup>56</sup>Fe with a $`T_\mathrm{R}`$ potential and then inverted $`S_{l^{\prime}l}^J`$ derived from the potential containing that $`T_\mathrm{P}`$ interaction which is ‘equivalent’ in Goddard’s sense. The two potentials are given in Table 1 of Ref. .
The first part of the test showed that inversion of $`S_{l^{\prime}l}^J`$ for a known $`T_\mathrm{R}`$ interaction still works very well at about half the energy of the test described in Section IV B. The results were very similar: the $`T_\mathrm{R}`$ potential, which in this case is of volume Woods-Saxon form with depth 5 MeV, is accurately reproduced even at the nuclear centre. The solid and (scarcely distinguishable) dashed lines in Figure 3 respectively represent Goddard’s original potential and that found by inversion. The $`S_{l^{\prime}l}^J`$ for the inverted potentials, including the non-diagonal terms, are indistinguishable on a graph from those for the original potentials.
The dotted lines in Figure 3 show the inversion for Goddard’s $`T_\mathrm{P}`$ case. The non-tensor components are qualitatively well reproduced, the derived potentials having the appearance of the target potentials but with superimposed oscillations. This waviness is relatively more significant for the small components, the real central potential being reproduced to within a few percent for all $`r`$. The $`T_\mathrm{R}`$ interaction found by inversion is now surface peaked in form but of average depth comparable to that of the Woods-Saxon (which however had a local momentum dependence, see ). The diagonal $`S_{l^{\prime}l}^J`$ for the target and inverted potentials are graphically indistinguishable, as are $`\mathrm{arg}S_{l^{\prime}l}^J`$ for $`l\ne l^{\prime}`$ for low values of $`J`$. However the non-diagonal $`S`$-matrix was not well reproduced for $`J>7`$, for which partial waves the non-diagonal $`|S_{l^{\prime}l}^J|`$ is very small. The value of $`\sigma `$ was much higher than for the $`T_\mathrm{R}`$ case, i.e. 0.0294 compared with 0.00589.
The results presented graphically in Figure 3 can be quantified in terms of the volume integrals and rms radii for the central and spin-orbit components of the inverted potentials. For the $`T_\mathrm{R}`$ case, all of these quantities were reproduced to a few parts in a thousand with the (small) volume integral of the spin-orbit term being least accurate: the error was 0.7 %. The errors for the non-tensor components found when inverting Goddard’s $`T_\mathrm{P}`$ potential were a few percent, the volume integral of the spin-orbit term again being least accurate with an error of 3.8 %.
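The volume integrals and rms radii quoted here are the standard moments of a potential component; a minimal sketch, checked against the analytic values for a Gaussian form factor (depth and range illustrative):

```python
import numpy as np

def _trapz(y, x):
    return 0.5*np.sum((y[1:] + y[:-1])*np.diff(x))

def volume_integral(r, V):
    """J = 4*pi * int V(r) r^2 dr."""
    return 4.0*np.pi*_trapz(V*r**2, r)

def rms_radius(r, V):
    """sqrt( int V r^4 dr / int V r^2 dr )."""
    return np.sqrt(_trapz(V*r**4, r)/_trapz(V*r**2, r))

r = np.linspace(0.0, 20.0, 20001)
V0, a = -50.0, 2.0
V = V0*np.exp(-(r/a)**2)          # Gaussian component with analytic moments
J = volume_integral(r, V)         # analytic: pi**1.5 * V0 * a**3
rrms = rms_radius(r, V)           # analytic: a * sqrt(3/2)
```

These are global measures, which is why they are reproduced much more accurately by inversion than any point-by-point feature of the potential.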
Goddard also performed an identical comparison for the case of 13.0 MeV deuterons scattering from <sup>46</sup>Ti, and we repeated the test just described for this case. There is interest in doing this since the inversion algorithm applied to $`S_{l^{\prime}l}^J`$ for a single energy is expected to fail at lower energies for reasons explained in Section IV C. However, we find that the results for 13 MeV deuterons on <sup>46</sup>Ti are essentially the same as for 30 MeV deuterons on <sup>56</sup>Fe for both $`T_\mathrm{R}`$ and $`T_\mathrm{P}`$ interactions. The form of the $`T_\mathrm{R}`$ potential representing the actual $`T_\mathrm{P}`$ component was essentially the same as that shown for 30 MeV in the bottom panel of Figure 3 and this similarity applies also to the deviations of the non-tensor terms. It therefore appears that we have found general properties of the $`T_\mathrm{R}`$ potential representing an actual $`T_\mathrm{P}`$ potential.
As a result of these tests, and noting that $`T_\mathrm{P}`$ interactions are not predicted to be particularly large, we conclude:
1. The existence of processes of the kind which give rise to a $`T_\mathrm{P}`$ component will not prevent this inversion procedure, which includes only $`T_\mathrm{R}`$ tensor interactions, from fitting $`S_{l^{\prime}l}^J`$, and it is unlikely to greatly falsify inversions of this kind, particularly with regard to the non-tensor components. IP spin-1 inversion as described here is thus not fatally undermined by the possible existence of $`T_\mathrm{P}`$ interactions. The effort needed to develop spin-1 inversion including $`T_\mathrm{P}`$ interactions requires greater motivation than exists at present.
2. As Goddard suggested, almost all the effects of such a potential can be well represented by a $`T_\mathrm{R}`$ tensor interaction, although its relationship to the form of the $`T_\mathrm{P}`$ interaction is, as might be expected, more complicated than can be deduced from simple semi-classical arguments . The phenomenological problem of establishing $`T_\mathrm{P}`$ interactions is still considerable.
## VI Data to potential inversion for spin-1 projectiles
In what follows, we first briefly indicate how $`S\to V`$ inversion is extended to $`(\mathrm{data})\to V`$ inversion for the uncoupled case, then indicate how this is extended to include coupling, as is required for spin-1 scattering.
### A Data to potential inversion for the uncoupled situation
For clarity we suppress spin-related subscripts and begin by recasting Equation 7, using Equation 5, as:
$$\frac{\partial S_l}{\partial \alpha _n^{(p)}}=\frac{\mathrm{i}m}{\hbar ^2k}\int _0^{\infty }(\psi _l(r))^2v_n^{(p)}(r)\,dr.$$
(19)
We now introduce a conventional $`\chi ^2`$ function:
$$\chi ^2=\sum _{k=1}^{N}\left(\frac{\sigma _k-\sigma _k^{\mathrm{in}}}{\mathrm{\Delta }\sigma _k^{\mathrm{in}}}\right)^2+\sum _n\sum _{k=1}^{M}\left(\frac{P_{kn}-P_{kn}^{\mathrm{in}}}{\mathrm{\Delta }P_{kn}^{\mathrm{in}}}\right)^2$$
(20)
where $`\sigma _k^{\mathrm{in}}`$ and $`P_{kn}^{\mathrm{in}}`$ are the input experimental values of cross sections and analyzing powers of type $`n`$ ($`\sigma `$, $`\mathrm{i}T_{11}`$, etc.) respectively. When fitting data for many energies at once, the index $`k`$ indicates the angle and also the energy. Data re-normalising factors can be introduced as an additional contribution to Equation 20.
We must now expand $`\chi ^2`$ in terms of the $`\alpha _n^{(p)}`$. To do this we first linearize the calculated cross sections and analyzing powers, by expanding $`\sigma _k`$ (and $`P_{kn}`$) about some current point $`\{\alpha _n^{(p)}(i)\}`$ (see Ref.):
$$\sigma _k=\sigma _k(\alpha _n^{(p)}(i))+\sum _{n,l}\left(\frac{\partial \sigma _k}{\partial S_l(E_k)}\,\frac{\partial S_l(E_k)}{\partial \alpha _n^{(p)}}\right)_{\alpha _n^{(p)}(i)}\mathrm{\Delta }\alpha _n^{(p)},$$
(21)
which applies at each iterative step $`i=0,1,2,\mathrm{\dots }`$ and the correction (to be determined) for the $`n`$-th amplitude is $`\mathrm{\Delta }\alpha _n^{(p)}=\alpha _n^{(p)}-\alpha _n^{(p)}(i)`$. Equivalent relations are applied for the $`P`$’s.
Linear equations result from demanding that $`\chi ^2`$ is locally stationary with respect to variations in the potential coefficients $`\alpha _n^{(p)}`$, i.e. the derivatives of $`\chi ^2`$ with respect to the $`\alpha _n^{(p)}`$ must vanish. Solving these linear equations is straightforward for any reasonable number of them and yields corrected values $`\alpha _n^{(p)}(i+1)=\alpha _n^{(p)}(i)+\mathrm{\Delta }\alpha _n^{(p)}`$. We then iterate the whole procedure, with wavefunctions $`\psi _l`$ in Equation 19 calculated using the corrected potentials from Equation 5, until convergence is reached. This algorithm almost always converges very rapidly, in general diverging only when highly inconsistent or erroneous data have been used or when the iterative process starts from a very unsuitable point. Multi-energy $`(\mathrm{data})\to V`$ inversion is thus reduced to the solution of simultaneous equations in a series of iterative steps.
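The structure of one such step can be sketched generically (a plain Gauss-Newton step, not the IMAGO code; the toy model below is linear in its parameters, so a single step already converges, whereas the realistic nonlinear case iterates as described above):

```python
import numpy as np

def gauss_newton(f, jac, y, dy, x0, n_iter=5):
    """Minimise chi^2 = sum_k ((f_k(x) - y_k)/dy_k)**2 by repeated linearisation."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        res = (f(x) - y)/dy              # weighted residuals
        J = jac(x)/dy[:, None]           # weighted Jacobian df_k/dx_n
        # stationarity of the linearised chi^2 gives linear equations for dx
        dx, *_ = np.linalg.lstsq(J, -res, rcond=None)
        x = x + dx
    return x

# toy 'observable' linear in the expansion coefficients (cf. Eq. 5)
t = np.linspace(0.0, 5.0, 60)
basis = np.stack([np.exp(-t**2), np.exp(-(t - 2.0)**2)], axis=1)
f = lambda x: basis @ x
jac = lambda x: basis

x_true = np.array([3.0, -1.5])
dy = 0.01*np.ones_like(t)
x_fit = gauss_newton(f, jac, f(x_true), dy, x0=[0.0, 0.0])
```

In the realistic case the Jacobian is assembled from the two factors in Eq. 21, the derivative of the observables with respect to the $`S`$-matrix and the response integrals of Eq. 19, and the wavefunctions are recomputed from the corrected potential at each step.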
### B Generalisation to spin-1
Spin-1 $`(\mathrm{data})\to V`$ inversion is a natural generalisation of the above formalism with $`S_l`$ replaced by $`S_{l^{\prime}l}^J`$ and Eq. 19 replaced by the analogous form derived from Eq. 8. It is shown in Ref. that the system does indeed converge to a potential which fits the observables.
### C Evaluation of ambiguities for spin-1 $`(\mathrm{data})\to V`$ inversion
The tests of $`(\mathrm{data})\to V`$ inversion must reflect the way it will be applied; this is rather different from the situation for $`S\to V`$ inversion. With the latter, one often has quite precise $`S`$ calculated from a theory, and one then seeks quite precise and subtle properties of $`V`$, often relating to modifications of the theory. Inversion from measured observables is different because the data are generally far from complete and will contain statistical and, possibly, systematic errors. For this reason, we must be less ambitious concerning the details of the potential to be extracted. The test therefore asks the following question: for a situation with few active partial waves, how well-determined can we expect the potential to be?
As in $`S\to V`$ inversion, one must never attempt to establish details of the potential for which the input data carry no information. We must therefore apply the smallest possible inversion bases and accept approximate solutions. The penalty for excessive inversion basis dimensionality is the occurrence of spurious oscillatory features. In effect, at low energies where the data are incomplete and featureless (reflecting the small number of partial waves), the goal of $`(\mathrm{data})\to V`$ inversion is to find the smoothest potential compatible with the data. IP inversion affords a level of control in this respect that is not possible with other inversion procedures.
The test we describe is for low energy $`\vec{\mathrm{d}}`$ + <sup>4</sup>He scattering. The results will be useful for interpreting previous fits to experimental data for this system. The following observables, $`\sigma `$, $`\mathrm{i}T_{11}`$, $`T_{20}`$, $`T_{21}`$ and $`T_{22}`$, were calculated at laboratory energies of 8, 9, 10, 11, 12 and 13 MeV using the same purely real, energy independent potential used in Section IV C. Apart perhaps from the extremely strong ‘Dubovichenko-type’ tensor interaction, very strongly peaked at $`r=0`$, the general features of this potential are, we believe, similar to those of potentials which fit actual experimental data. This energy range is somewhat above the broad 2<sup>+</sup> resonances and the region of strong mixing between the 1<sup>+</sup> channels. The observables were evaluated for the six energies over a range of 20° to 170° (CM) at intervals of one degree, and Gaussian noise was added as follows. For $`\sigma `$, 1% errors were imposed. For $`\mathrm{i}T_{11}`$, the errors were 2% of the maximum magnitude and for the three tensor observables, 5% of the maximum magnitude.
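The noise prescription can be sketched as follows; the observable shapes below are arbitrary stand-ins, and only the noise model itself follows the text.

```python
import numpy as np

rng = np.random.default_rng(7)
theta = np.arange(20.0, 171.0)                 # 20..170 deg CM in 1-deg steps

# stand-in smooth 'observables' (illustrative shapes only)
sigma = 1.0 + 0.5*np.cos(np.radians(theta))**2
iT11 = 0.3*np.sin(np.radians(2.0*theta))

# 1% relative noise on the cross section,
# a fixed fraction of the maximum magnitude on the analyzing power
sigma_noisy = sigma*(1.0 + 0.01*rng.standard_normal(theta.size))
amp = 0.02*np.abs(iT11).max()
iT11_noisy = iT11 + amp*rng.standard_normal(theta.size)

# the corresponding errors entering the chi^2 of Eq. 20
d_sigma = 0.01*sigma
d_iT11 = np.full_like(iT11, amp)
```

The tensor observables would be treated like $`\mathrm{i}T_{11}`$ but with a 5% fraction.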
We then applied $`(\mathrm{data})\to V`$ inversion to this multi-energy dataset, seeking a single energy independent potential. Following Section IV C and Ref. , the inversion bases for the Wigner and Majorana real central components consisted of three Gaussian functions. For the other components there were just two Gaussians. As in Section IV C, the starting potential was zero in all but the Wigner real central and Wigner real spin-orbit components. The results are shown in Figure 4 where we compare the known (‘target’) potential (solid lines), the chosen starting potential of the iterative method (dash-dotted lines, two components only), and two inverted potentials, shown as dashed and dotted lines. The dashed lines show the potential found after a first sequence of iterations and correspond to $`\chi ^2/F=15.473`$ where $`F`$, the number of degrees of freedom, was $`4500`$. This number arises since we seek simultaneous fits to five observables at six energies and 150 angles. The effective number of parameters is approximately twelve. At this stage the reproduction of the larger components of the potential is fair, but the tensor terms are poor, with the even parity real tensor term being still almost zero. The corresponding fit to the model data is indicated by the set of dashed lines in Figure 5. The fit is of a quality which would be widely regarded as quite good when fitting experimental data, with only $`T_{22}`$, and perhaps $`T_{20}`$ around 120 degrees, fitted poorly. The quality of fit to $`T_{21}`$ is remarkable in view of the very poor reproduction of the tensor interaction.
A subsequent further set of iterations led to an almost perfect fit with $`\chi ^2/F=1.2155`$. Figure 4 shows that this potential, dotted, reproduces all components of the target potential except at quite small radii. In particular, the even parity real tensor term is perfectly fitted for $`r>1`$ fm but not fitted at all for $`r<1`$ fm. This is in accord with arguments given in Section IV C. As expected from the values of $`\chi ^2/F`$, the fits to the 10 MeV dataset, shown as dotted lines in Figure 5, are essentially perfect, being scarcely visible over the angular range of the artificial data. The same potential simultaneously fits the observables for the other five energies comparably well. We conclude that we could not expect to establish the various components of the potential to a higher degree of accuracy than shown in Figure 4 by fitting available experimental data. It is very salutary to see, in Figure 4, the profound change in the nature of the tensor interaction which follows the improvement of the fit revealed in Figure 5, comparing dashed and dotted lines. The intermediate inversion, dashed lines in Figure 5, represents a fit of a quality which is often deemed acceptable when fitting experimental data. We note without further comment that the desirability of pursuing the best possible phenomenological fits is sometimes called into question.
It should be noted that the computing time required on a modern workstation to carry out the direct inversion of the data is very modest, and certainly much less than required to carry out a model independent optical model search, particularly one involving odd and even parity $`T_\mathrm{R}`$ components and about 4500 degrees of freedom.
In Ref. we discussed the application of direct inversion of data as a method for phase shift analysis. It is therefore of interest to see the quality of fit to $`S_{l^{\prime }l}^J`$ which corresponds to the two fits shown in Figure 5. The top three panels of Figure 6 show the phase shifts corresponding to the $`l=J-1,l=J,l=J+1`$ diagonal components of $`S`$, and the bottom panel presents half the argument of the non-diagonal $`S`$. The solid lines show the known potential, the dashed line is for the $`\chi ^2/F=15.473`$ fit and the dash-dot line is for the $`\chi ^2/F=1.2155`$ fit. For two of the panels, the solid and dash-dot lines are nearly indistinguishable but they are clearly distinguishable in the other two, suggesting that there are limits to phase shift determination even when over 4000 data are fitted with $`\chi ^2/F=1.2155`$.
We conclude that direct inversion is a practical, reliable and efficient means of determining a local potential which represents large, multi-energy datasets including tensor observables. The example presented here indicates the extent to which the results obtained by this method are meaningful at low energies where few partial waves are involved.
## VII Summary and conclusions; survey of possible applications
We have presented details of an inversion procedure which can be applied both to spin-1 projectiles scattering from a spin-0 target nucleus and to spin-$`\frac{1}{2}`$ plus spin-$`\frac{1}{2}`$ particle scattering. The non-diagonal $`S_{l^{\prime }l}^J`$ yield a non-diagonal potential containing a tensor term. To our knowledge, this is the first time this has been achieved, and it opens up the possibility of a wide range of other inversion scenarios, ranging from other channel spin-1 cases (such as p + <sup>3</sup>H scattering) to the inversion of $`S`$-sub-matrices of higher dimensionality. There are many other capabilities inherent in the underlying IP method: these include the possibility of inverting $`S_{l^{\prime }l}^J`$ for several energies leading directly to an energy dependent potential, including bound state energies within the input data, and the ability to handle cases where parity dependence must be allowed for.
In this paper we have presented tests of IP $`S\to V`$ spin-1 inversion and evaluated its performance in ‘difficult’ cases. We showed that when there are sufficient active partial waves, the procedure yields very accurate potentials even quite near the nuclear centre. Where, on the other hand, there are few partial waves available to define each potential component, as is typical with light nuclei at low energies and where the potential is parity dependent, it is still possible to extract the qualitative features of a potential.
We also addressed the fact that the method is at present limited to $`T_\mathrm{R}`$ tensor interactions although it is quite probable that processes leading to $`T_\mathrm{P}`$ interactions are active. We showed that $`S_{l^{\prime }l}^J`$ arising from $`T_\mathrm{P}`$ interactions can be fitted quite well with a $`T_\mathrm{R}`$ tensor interaction and that, moreover, this does not lead to serious errors in the non-tensor components of the potential.
The IP inversion algorithm also forms the basis of a very efficient alternative way to find a multi-component local potential which fits elastic scattering data, particularly for multi-energy datasets. This is the direct (observable $`\to V`$) inversion procedure in which the IP $`S\to V`$ inversion is embedded. This ‘direct inversion’ can be applied to spin-1 projectiles. We examined the ambiguity problems which arise in a ‘difficult’ (i.e. few partial waves, parity dependence) test case which is relevant to the evaluation of an analysis of low energy $`\stackrel{\rightarrow }{\mathrm{d}}`$ + <sup>4</sup>He scattering, the subject of a recent publication and of an extended future one. Known potentials can be very well re-fitted, but it is clear that the non-central terms cannot be well established at the nuclear centre. In the course of performing this inversion test, it became apparent that fits of widely accepted quality lead to tensor potentials which have nothing in common with those determined by pursuing ‘perfect fits’.
Finally, we remark that the method we have demonstrated here is certainly not limited in usefulness to deuteron scattering. It would be worthwhile applying it to the elastic scattering data for halo nuclei once such data are of sufficiently substantial information content.
## Acknowledgements
We are most grateful to the UK EPSRC for grants supporting S.G. Cooper, the Russian Foundation for Basic Research (grant 97-02-17265) for financial assistance and to the Royal Society (UK) for supporting a visit by V.I. Kukulin to the UK. We thank Jeff Tostevin for sending us Goddard’s deuteron scattering code DDTP.
# The “Papillon” nebula: a compact H ii blob in the LMC resolved by HST

Based on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
## 1 Introduction
The formation process of massive stars is still a largely unsolved problem. Although it is believed that stars generally originate from the collapse and subsequent accretion of clumps within molecular clouds (Palla & Stahler 1993), this model cannot explain the formation of stars beyond $`\sim `$ 10 $`M_{\odot }`$ (Bonnell et al. 1998). The strong radiation pressure of massive stars can halt the infall of matter, limiting the mass of the star (Yorke & Krügel 1977, Wolfire & Cassinelli 1987, Beech & Mitalas 1994), while a large fraction of the infalling material may as well be deflected into bipolar outflows by processes which we do not yet know in detail (Churchwell 1997).
Moreover, since the evolutionary time scales of massive stars are comparatively short, these stars are believed to enter the main sequence while still embedded in their parent molecular clouds (Yorke & Krügel 1977, Shu et al. 1987, Palla & Stahler 1990, Beech & Mitalas 1994, Bernasconi & Maeder 1996). This means that massive stars may already experience significant mass loss and subsequent evolution while still accreting mass from the parental cloud.
In order to understand the formation of massive stars it is therefore necessary to study them at the earliest phases, where they can be reached through the enshrouding material at different wavelengths. While high-resolution radio continuum observations allow the investigation of ultracompact H ii regions (Churchwell 1990) formed around newborn massive stars, high angular resolution observations at ultraviolet, visible, and infrared wavelengths (Walborn & Fitzpatrick 1990, Walborn et al. 1995, Schaerer & de Koter 1997, Hanson et al. 1996) are necessary to access accurate physical parameters of these stars and then evaluate their states of evolution.
Our search for the youngest massive stars in the Magellanic Clouds started almost two decades ago on the basis of ground-based observations. This led to the discovery of a distinct and very rare class of H ii regions, which we called high-excitation compact H ii “blobs” (HEBs). The blob in N 159, which is the subject of this paper, was the prototype of this category of nebulae (Heydari-Malayeri & Testor 1982). So far only four other HEBs have been found in the LMC: N 160A1, N 160A2, N 83B-1, and N 11A (Heydari-Malayeri & Testor 1983, 1985, 1986, Heydari-Malayeri et al. 1990) and two more in the SMC: N 88A and N 81 (Testor & Pakull 1985, Heydari-Malayeri et al. 1988). To further improve our understanding of those compact H ii regions and overcome the difficulties related to their small size, we used the superior resolving power of HST to image N 81 and N 88A, in the SMC, as well as N 159-5 in the LMC. The analysis and discussion for the first two objects were presented by Heydari-Malayeri et al. (1999a, 1999b; hereafter Papers I and II, respectively).
In the present paper we study our third HST target, the LMC blob N 159-5. This object lies in the H ii complex N 159 (Henize 1956), situated some 30<sup>′</sup> ($`\sim `$ 500 pc) south of 30 Dor. N 159 is associated with one of the most important concentrations of molecular gas in the LMC (Johansson et al. 1998 and references therein) and contains several signposts of ongoing star formation (cocoon stars, IR sources, masers). N 159-5 is the name given by Heydari-Malayeri & Testor (1982) to a compact H ii region of size $`\sim `$ 6<sup>′′</sup> (1.5 pc) with high excitation (\[O iii\]/H$`\beta `$ = 8) and suffering a considerable extinction of $`A_V`$ = 5 mag as derived from H$`\beta `$ and radio continuum (Heydari-Malayeri & Testor 1985). They also showed that the chemical composition of the object is compatible with that of typical LMC H ii regions. Israel & Koornneef (1988) detected near-IR molecular hydrogen emission towards the object, partly shocked and partly radiatively excited. They also confirmed the high extinction of the object from a comparison of Br $`\gamma `$ and H$`\beta `$ and estimated that N 159-5 contributes $`\sim `$ 25% to the total flux of the IRAS source LMC 1518 (Israel & Koornneef 1991). More recently, Comerón & Claes (1998) used ISOCAM to obtain an image of N 159-5 at 15 $`\mu `$m. Similarly, Hunt & Whiteoak (1994) used the Australia Telescope Compact Array (ATCA) to obtain the highest angular resolution radio continuum observations in existence of N 159-5 (beam 8<sup>′′</sup>.3 $`\times `$ 7<sup>′′</sup>.4). However, none of these observations were able to resolve N 159-5.
## 2 Observations
The observations of N 159-5 described in this paper were obtained with the Wide Field Planetary Camera (WFPC2) on board the HST on September 5, 1998 as part of the project GO 6535. We used several wide- and narrow-band filters (F300W, F467M, F410M, F547M, F469N, F487N, F502N, F656N, F814W) to image the stellar population as well as the ionized gas. The observational techniques, exposure times, and reduction procedures are similar to those explained in detail in Paper I.
## 3 Results
### 3.1 Morphology
In Fig. 1 we present the WFPC2 image of the eastern part of the giant H ii region N 159. This image reveals a very turbulent medium in which ionized subarcsecond structures are interwoven with fine absorption features. A large number of filaments, arcs, ridges, and fronts are clearly visible. In the south-western part of Fig. 1, we note a relatively large, high-excitation ridge bordering a remarkable cavity $`\sim `$ 25<sup>′′</sup> ($`\sim `$ 6 pc) in size. Another conspicuous cavity lies in the northern part of the image. These are most probably created by strong winds of massive stars. Moreover, a salient, dark gulf running westward into N 159 cuts the glowing gas in that direction and takes on a filamentary appearance as it advances. A comparison with the CO map of Johansson et al. (1998) indicates that this absorption is due to the molecular cloud N 159-E.
The H ii blob N 159-5 stands out as a prominent high excitation compact nebula in the center of the WFPC2 field (Fig. 1), at the edge of two distinct absorption lanes of size $`\sim `$ 3<sup>′′</sup> $`\times `$ 13<sup>′′</sup> ($`\sim `$ 0.8 $`\times `$ 3.3 pc).
The most important result of our WFPC2 observations is shown in the inset of Fig. 1, namely the N 159-5 blob resolved for the first time. In fact N 159-5 consists of two distinct ionized components separated by a low brightness zone, the eastern border of which has a sharp front. The overall shape of N 159-5 is reminiscent of a butterfly, or papillon in French (the term “butterfly” already designates several planetary nebulae in our Galaxy: M 76, M2-9, NGC 6302, NGC 2440, and PN G010.8+18.0; it has also recently been used to describe the K-L nebula, see Sect. 4). The centers of the two wings are $`\sim `$ 2<sup>′′</sup>.3 (0.6 pc) apart. The brightest part of the right wing appears as a “smoke ring” or a doughnut with a projected radius of $`\sim `$ 0<sup>′′</sup>.6 (0.14 pc). The left wing is characterized by a very bright “globule” of radius $`\sim `$ 0<sup>′′</sup>.4 (0.1 pc) to which are linked several bright stripes, all parallel and directed towards the central sharp front.
An obvious question is: where is (are) the ionizing star(s) of N 159-5? No conspicuous stars can be detected within the Papillon itself, although its overall high excitation and morphology require the source of ionization to be very close to the center of this structure. A faint star of $`y`$ = 17.9 mag can be seen between the two wings (Fig. 1, inset) and may well be the major source of ionization, heavily obscured by foreground dust. At least three more stars weaker than $`y`$ $`\sim `$ 20 mag are detected (not visible in Fig. 1), two of them lying in the brightest parts of the smoke ring and one towards the other wing, east of the front. No star is detected towards the globule.
### 3.2 Nebular reddening
The HST observations allow us to study the spatial variation of the extinction in the direction of the Papillon nebula. The H$`\alpha `$/H$`\beta `$ ratio is high in both wings (Fig. 2a), varying between 5 and 10, corresponding to a visual extinction $`A_V`$ between 1.5 and 3.5 mag if the LMC interstellar reddening law is used (Prévot et al. 1984). The extinction towards the zone separating the two wings also shows comparable ratios. The H$`\alpha `$/H$`\beta `$ map was used to de-redden the H$`\beta `$ flux on a pixel to pixel basis.
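The conversion from an observed Balmer decrement to a visual extinction can be sketched numerically. The snippet below is illustrative only: it assumes the standard intrinsic case-B ratio of 2.86 and generic extinction-curve coefficients $`k`$(H$`\beta `$) $`\approx `$ 3.61, $`k`$(H$`\alpha `$) $`\approx `$ 2.53 with $`R_V`$ = 3.1, not the exact LMC law of Prévot et al. used in the paper, so the resulting values only roughly bracket the quoted 1.5–3.5 mag range.

```python
import math

def a_v_from_balmer(ratio_obs, k_hbeta=3.61, k_halpha=2.53, r_v=3.1,
                    intrinsic=2.86):
    """Visual extinction from the observed Halpha/Hbeta flux ratio.

    k_hbeta and k_halpha are illustrative extinction-curve values
    (A_lambda / E(B-V)); the intrinsic case-B decrement is ~2.86.
    """
    ebv = 2.5 / (k_hbeta - k_halpha) * math.log10(ratio_obs / intrinsic)
    return r_v * ebv

# Observed ratios of 5 and 10, as quoted for the two wings
for r in (5.0, 10.0):
    print(r, round(a_v_from_balmer(r), 2))
```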
### 3.3 Ionized gas emission
The \[O iii\] $`\lambda `$ 5007/H$`\beta `$ map also displays the butterfly-like structure (Fig. 2b) with line ratios varying between 3.5 and 8. The band separating the wings has overall smaller values. A comparison of the \[O iii\] and H$`\alpha `$ images shows that the Papillon has the same size and morphology in both filters. This suggests a hard radiation field in which the high excitation O<sup>++</sup> ions occupy the same zone as the ionized hydrogen. A simple calculation (Paper II) shows that almost all oxygen atoms are doubly ionized.
We measure a total H$`\beta `$ flux F(H$`\beta `$) = 2.68 $`\times `$ 10<sup>-13</sup> erg cm<sup>-2</sup> s<sup>-1</sup> above the 3$`\sigma `$ level for both wings (accurate to $`\sim `$ 3%). Correcting for the extinction (Sect. 3.2) gives $`F_0`$ = 5.35 $`\times `$ 10<sup>-12</sup> erg cm<sup>-2</sup> s<sup>-1</sup>. The flux is not equally distributed between the two wings, since $`\sim `$ 60% is generated by the eastern wing (globule), whereas the western wing (smoke ring) contributes $`\sim `$ 40%.
A Lyman continuum flux of $`N_L`$ = 4.17 $`\times `$ 10<sup>48</sup> photons s<sup>-1</sup> can be derived for the whole of N 159-5 taking $`T_e`$ = 10 500 K (Heydari-Malayeri & Testor 1985), assuming a distance of 55 kpc, and considering that the H ii region is ionization-bounded. A single O8V star can account for this ionizing flux (Vacca et al. 1996, Schaerer & de Koter 1997). However, this should be viewed as a lower limit, since the region is probably not ionization-bounded.
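The step from the dereddened H$`\beta `$ flux to a Lyman-continuum photon rate can be sketched with standard case-B recombination coefficients at 10<sup>4</sup> K ($`\alpha _B`$ $`\approx `$ 2.59 $`\times `$ 10<sup>-13</sup> and $`\alpha _{H\beta }^{eff}`$ $`\approx `$ 3.03 $`\times `$ 10<sup>-14</sup> cm<sup>3</sup> s<sup>-1</sup>). These coefficients are assumptions from generic recombination theory, slightly different from those appropriate at the paper’s $`T_e`$ = 10 500 K, so the sketch lands within a few percent of the quoted 4.17 $`\times `$ 10<sup>48</sup> photons s<sup>-1</sup>.

```python
import math

PC_CM = 3.086e18                 # parsec in cm
H_ERG_S = 6.626e-27              # Planck constant, erg s
NU_HBETA = 2.9979e10 / 4861e-8   # Hbeta frequency, c / 4861 Angstrom

def lyman_continuum_rate(f_hbeta, d_kpc,
                         alpha_b=2.59e-13, alpha_hbeta=3.03e-14):
    """N_L (photons/s) from a dereddened Hbeta flux (erg/cm^2/s).

    Ionization balance: every case-B recombination is matched by one
    ionizing photon, and a fraction alpha_hbeta/alpha_b of
    recombinations produces an Hbeta photon.
    """
    d_cm = d_kpc * 1e3 * PC_CM
    l_hbeta = 4.0 * math.pi * d_cm**2 * f_hbeta    # Hbeta luminosity, erg/s
    n_hbeta = l_hbeta / (H_ERG_S * NU_HBETA)       # Hbeta photons/s
    return n_hbeta * alpha_b / alpha_hbeta

# F0 = 5.35e-12 erg/cm^2/s at d = 55 kpc, as in the text
print(f"{lyman_continuum_rate(5.35e-12, 55.0):.2e}")
```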
## 4 Discussion and concluding remarks
The three Magellanic Cloud blobs studied so far with HST (N 81, N 88A, and N 159-5) represent very young massive stars leaving their natal molecular clouds. While N 81 is a rather isolated starburst, N 88A and N 159-5 are formed in richer regions of gas where a preceding generation of massive stars has already formed. Being very young, these two regions are also similar in that they are heavily affected by dust. In spite of these similarities, N 159-5 has a bewildering morphology which is seen neither in the other blobs, nor, generally speaking, in any of the known Magellanic Cloud H ii regions. The global morphology of the Papillon and more especially the presence of peculiar fine structure features in the wings make this object a unique H ii region in the LMC. It is also the first bipolar nebula indicating young massive star formation in an external galaxy. In this respect the Papillon may even represent a new type of very young H ii region in the Magellanic Clouds overlooked so far because of insufficient spatial resolution.
Even though high resolution spectroscopy would be necessary for a conclusive picture of the Papillon’s nature, the parallel ray-like features of the globule as well as the smoke ring strongly suggest a dynamical origin. Several observational facts also advocate a common process for the formation of these two compact features: their proximity ($``$ 2<sup>′′</sup>.3), their location in a distinct, more diffuse nebular structure, and the uniqueness of the phenomenon in the large field of the N 159 complex. In order to explain the observed morphology of N 159-5, two different models can be envisaged.
1) We are looking at a bipolar region produced by the strong stellar wind of hot star(s) hidden behind the central absorption zone. It has been shown that in hot, rotating stars the mass loss rate is much larger at the poles than at the equator (Maeder 1999). This model can account for the high excitation of the two wings, as well as the global bipolar morphology. Although we cannot yet firmly advocate a bipolar phenomenon, we underline the morphological similarity between the Papillon and the high resolution image of the Kleinmann-Low nebula in Orion recently obtained with the Subaru 8.3 m telescope at 2.12 $`\mu `$m (Subaru team 1999). This image shows a butterfly-shaped exploding area produced by the wind of a young cluster of stars, among which is IRc2, a particularly active star estimated to have a mass over 30 $`M_{\odot }`$. However, this model explains neither the smoke ring nor the parallel stripes.
One may also compare N 159-5 with the well-studied Galactic H ii region Sh 106, which has a prominent bipolar shape, marvellously shown in an HST H$`\alpha `$ image (Bally et al. 1998 and references therein). Its exciting star is hidden in the absorption region between the two lobes, where the extinction amounts to 20 mag in the visual. The southern lobe is much brighter than the northern one because it is blueshifted, while the other is expanding away from the observer. Also, both lobes get gradually fainter in their external parts. However, we do not see such global trends in N 159-5 and, more importantly, no smoke ring or globule features are present in Sh 106.
2) Alternatively, N 159-5 may represent two close but distinct nebulae, each with its own massive star providing a high excitation for the ionized gas. In this scenario, the globule may be a bow shock created by an O star with a powerful stellar wind moving at a speed of $`\sim `$ 10 km s<sup>-1</sup> through the molecular cloud (models of Van Buren & Mac Low 1992 and references therein). The presence of the parallel rays and the sharp front makes this explanation attractive. Since no stars are detected towards the globule, we may speculate that the star is hidden behind the bow shock/globule that is heading towards the observer. As for the smoke ring, it may be due to the interaction of a stellar wind from a central star with the surrounding medium creating a bubble structure. If this is the case, the mass loss should be important, comparable to that observed in Luminous Blue Variables. However, LBVs are evolved massive stars and it is not clear how this phenomenon can occur in such a young region.
Another explanation for the smoke ring can be provided by the bow shock models mentioned above. They predict the formation of a stellar wind bubble when the star’s motion is subsonic. The smoke ring can therefore be due to such a slower moving star. If this picture is correct, we are witnessing a very turbulent star forming site where massive stars formed in a group are leaving their birthplace.
One should stress the noteworthy absence of prominent stars towards the Papillon. As in the case of the SMC N 88A (Paper II), this is certainly due to the very young age and the high dust content of this star formation region. N 159-5 is just hatching from its natal molecular cloud, and its exciting stars should therefore be buried inside dust/gas concentrations. In order for a star of type O8 with $`M_V`$ = –4.66 mag (Vacca et al. 1996) to remain undetected in our Strömgren $`y`$ image, we need extinctions larger than $`A_V`$ = 6 mag, which is quite possible given the above estimates. However, N 159-5 will gradually evolve into a more extended, less dense region exhibiting its exciting stars, like the SMC N 81 (Paper I).
###### Acknowledgements.
We are grateful to Dr. J. Lequeux for a critical reading of the manuscript. We would also like to thank an anonymous referee for suggestions that improved the manuscript, and we are indebted to Dr. L.B.E. Johansson for providing us with the CO map of the N 159 molecular cloud. VC acknowledges financial support from a Marie Curie fellowship (TMR grant ERBFMBICT960967).
# Shapiro steps in a superconducting film with an antidot lattice
## Abstract
Shapiro voltage steps at voltages $`V_n=nV_0`$ ($`n`$ integer) have been observed in the voltage-current characteristics of a superconducting film with a square lattice of perforating microholes (antidots) in the presence of radiofrequent radiation. These equidistant steps appear at the second matching field $`H_2`$ when the flow of the interstitial vortex lattice in the periodic potential, created by the antidots and the vortices trapped by them, is in phase with the applied rf frequency. The observation of Shapiro steps therefore clearly reveals the presence of mobile interstitial vortices in superconducting films with regular pinning arrays. The interstitial vortices, moved by the driving current, coexist with immobile vortices strongly pinned at the antidots.
Conventional Josephson junctions (JJ’s), irradiated with a radiofrequent signal, show Shapiro steps in the voltage current ($`V(I)`$) characteristics, which are equidistant in voltage and show a constant ratio $`V_1/\nu =h/2e`$, with $`V_1`$ the height of the first step and $`\nu `$ the irradiation frequency. In the framework of the resistively shunted junction (RSJ) model, the behavior in time of $`\mathrm{\Delta }\mathrm{\Phi }`$, the phase difference between the two electrodes of a JJ, can be described by the damped pendulum equation. This equation of motion is completely analogous to the one of a driven mass on a tilted washboard surface. A horizontal driving force on this mass, corresponding to the current through the JJ, is represented by the tilt of the washboard. From this analogy, the origin of the voltage steps can be interpreted as follows: the dc driving force on the mass determines the average tilt of the washboard while the ac force makes the tilt oscillate around this average position with the applied ac frequency. When the period (or a multiple of the period) of the hopping of the mass over the barriers of the washboard coincides with the period of the applied ac force, an interference effect occurs, resulting in steps in the $`V(I)`$-characteristic.
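The washboard picture above can be checked with a few lines of numerical integration of the overdamped RSJ equation in normalized units (currents in units of $`I_c`$, time in units of $`\mathrm{}/2eRI_c`$, voltage in units of $`I_cR`$). This is an illustrative sketch under those assumptions, not the analysis used in this paper; the simple Euler scheme and parameter values are our own choices.

```python
import math

def mean_voltage(i_dc, i_rf=0.0, omega=1.0, steps=100000, dt=0.01):
    """Time-averaged voltage (units of Ic*R) of an overdamped junction.

    Normalized RSJ equation: dphi/dt = i_dc + i_rf*sin(omega*t) - sin(phi).
    The first quarter of the run is discarded as a transient.
    """
    phi, t, acc = 0.0, 0.0, 0.0
    burn = steps // 4
    for n in range(steps):
        dphi = (i_dc + i_rf * math.sin(omega * t) - math.sin(phi)) * dt
        phi += dphi
        t += dt
        if n >= burn:
            acc += dphi
    return acc / ((steps - burn) * dt)

# Below the critical current the phase locks and <V> stays near zero;
# above it the phase runs and a finite dc voltage develops.
print(mean_voltage(0.5), mean_voltage(1.5))
```

With an rf term switched on (nonzero `i_rf`), plateaus appear in `mean_voltage` versus `i_dc` at multiples of `omega`, which is the normalized-units version of the Shapiro steps discussed in the text.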
This appearance of Shapiro steps is therefore not exclusively seen in the $`V(I)`$-characteristics of rf-irradiated JJ’s but is expected in every system where an object, moving in a periodic potential, is driven by a superimposed dc and ac force. An interesting example of such a system is the dc flux transformer.
Another system in which Shapiro steps can be observed, is a superconducting film with a laterally modulated thickness. Here the Abrikosov vortex lattice moves coherently in the one-dimensional periodic potential created by the thickness modulation. Martinoli et al. found pronounced equidistant steps in the $`V(I)`$-characteristics of these thickness-modulated films at several matching fields and rf frequencies.
In this paper, we focus on the observation of Shapiro steps in a superconducting film with a two-dimensional periodic potential created by a lattice of antidots, i.e. sub-micron holes (see Fig. 1a). Choosing the appropriate temperature and magnetic field, this system can be tuned such that every antidot is occupied by a single vortex and an interstitial vortex lattice is formed at the centres of the cells, caged by the surrounding occupied antidots (see Fig. 1b). This weakly pinned interstitial lattice is easily moved through the potential that is created by the strongly pinned vortices at the antidots. We present $`V(I)`$ results obtained at the second matching field $`\mu _0H_2`$ on a superconducting Pb film containing a square array of antidots. We show clear evidence for the existence of Shapiro steps in these films. Moreover, the height of the appearing voltage steps proves that only the interstitial vortex lattice is moving, in agreement with recent Lorentz-microscopy experiments.
The studied samples are 50 nm thick Pb films with a square antidot lattice (antidot size $`a^2`$ = 0.4 $`\times `$ 0.4 $`\mu `$m<sup>2</sup> and period $`d`$ = 2 $`\mu `$m). Fig. 1a shows an atomic force micrograph of the sample. The films are electron beam evaporated in an MBE apparatus onto liquid nitrogen cooled SiO<sub>2</sub> substrates with a predefined resist-dot pattern and are covered with a protective Ge layer of 20 nm. The films have a strip geometry of 0.3 $`\times `$ 3 mm<sup>2</sup> with two current contacts and three equally spaced voltage contacts on each side of the strip. The part of the strip which lies between the used voltage contacts is 2 mm long and contains 1000 rows of antidots. From the $`T_c(H)`$ phase boundary of a reference plain film, coevaporated with the antidot lattice, the superconducting coherence length was determined to be $`\xi `$(0) = 38 nm. The samples are measured in a <sup>3</sup>He cryostat using a DC current source (Keithley 238) and a nano-voltmeter (Keithley 182). The rf signal was generated by a 9 kHz-2 GHz signal generator (Rohde $`\&`$ Schwarz SMY 02). Fig. 2a shows a schematic drawing of the measuring setup. The rf signal was superimposed on the dc current through two 100 nF capacitors.
Due to the applied rf current and the associated Lorentz force, the pinning potential is tilted periodically, and when the resulting flow of the vortex lattice is in phase with the rf modulation of the pinning potential, steps occur in the $`V(I)`$ characteristic at well defined voltages. For a square lattice of moving vortices with $`k`$ moving vortices per unit cell of the antidot array, these voltages are given by :
$$V_n=Nk\frac{h}{2e}\frac{v}{d}=n(Nk\frac{h}{2e}\nu )=n(Nk\mathrm{\Phi }_0\nu )\equiv nV_0$$
(1)
where $`n`$ is an integer, $`N`$ is the number of antidot rows between the voltage contacts ($`N`$ = 1000 for this sample), $`v`$ is the average velocity of the coherent motion of the interstitial vortex lattice, $`\nu `$ is the frequency of the applied rf signal and $`d`$ is the period of the potential created by the singly occupied antidots. The second equality follows from the fact that, when the rf signal and the vortex flow are in phase, $`v/d`$ = $`n\nu `$. At $`n`$ = 1 the vortices propagate over one lattice period $`d`$ during one rf cycle, for $`n`$ = 2 they move over two periods during one rf cycle, etc. By comparing the observed step height $`V_0`$ with the one expected from Eq. (1), the number $`k`$ of moving vortices per unit cell can be determined. This technique can therefore be used to detect the presence and perhaps even the amount of interstitial vortices in superconducting films with an artificial periodic pinning array.
In Fig. 3 we show a $`V(I)`$ curve at $`T`$ = 7.151 K = 0.995 $`T_c`$ obtained for a sinusoidally modulated current: $`I`$ = $`I_{dc}+I_{rf}\mathrm{sin}(2\pi \nu t)`$, where the rf amplitude $`I_{rf}`$ is of the order of $`I_c`$, the critical current of the sample, and $`\nu `$ = 40 MHz. The field was fixed at the second matching field $`\mu _0H_2`$ = 1.03 mT, determined from previous critical current measurements. The $`V(I)`$ curve is nearly a straight line through the origin, with smooth periodic steps superimposed on it. By plotting the derivative $`\delta I/\delta V`$ versus $`V`$, the steps appear as peaks and their position is well defined. In the curve shown in Fig. 3 the steps have a voltage separation $`V_0`$ = $`Nk\nu \frac{h}{2e}`$ of 81.3 $`\mu `$V, which is within 2% of the value 82.8 $`\mu `$V calculated from Eq. (1) using $`k`$ = 1. This leads to an average flow velocity of the interstitial vortices of $`v`$ = 80 m/s and a traveling time between entrance and exit of $`\sim `$ 3.75 $`\mu `$s.
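The numbers quoted above follow directly from Eq. (1) and the sample geometry. In the sketch below, the only assumption beyond the text is that the transit distance equals the 0.3 mm strip width, i.e. that the vortices cross the full width of the strip.

```python
PHI0 = 2.068e-15        # flux quantum h/2e, in V*s

N, k, nu = 1000, 1, 40e6    # antidot rows, moving vortices per cell, rf Hz
d = 2e-6                    # antidot lattice period, m
width = 0.3e-3              # strip width, m (assumed transit distance)

v0 = N * k * PHI0 * nu      # expected Shapiro step height, V
v = nu * d                  # vortex drift velocity at n = 1, m/s
transit = width / v         # time to cross the strip, s

print(f"V0 = {v0*1e6:.1f} uV, v = {v:.0f} m/s, transit = {transit*1e6:.2f} us")
```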
We have repeated the same experiment for different frequencies down to 25 MHz while keeping the other parameters (temperature, magnetic field, amplitude $`I_{rf}`$ of the irradiation) fixed. According to Eq. (1), $`V_0`$ is a linear function of the frequency $`\nu `$ with a slope of $`k`$ $`\times N\mathrm{\Phi }_0`$, which is equal to $`k`$ $`\times `$ 2.07 $`\mu `$V/MHz for this sample. The measurements reveal a slope of 2 $`\mu `$V/MHz (see the upper-left inset of Fig. 3), indicating that there is only one moving vortex per unit cell of the antidot array. This means that indeed two vortices per unit cell are present, but that only the interstitial vortices are moving coherently, while the other half of the vortices is pinned to the antidots or creeping incoherently (see the schematic plot in Fig. 2b).
Indeed, the saturation number $`n_s\approx \frac{r}{2\xi (T)}=\frac{r}{2\xi (0)}\sqrt{1-\frac{T}{T_c}}`$ = 0.18 indicates that only one vortex can be pinned per antidot, forcing the excess vortices to occupy the much weaker pinning sites at interstices. Recent Lorentz-microscopy experiments and numerical simulations have shown that, for a range of driving forces, the motion of interstitial vortices is confined to 1D channels between the adjacent antidot rows. When all interstitial positions are filled, as in our experiments, it is to be expected that the repulsive interaction within the 1D vortex channel and the shear between neighboring rows will lead to a coherent motion of all interstitial vortices, as we observe in our experiments.
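The saturation-number estimate can be reproduced in a few lines. Two inputs are assumptions on our part: the effective antidot radius is taken as half the 0.4 $`\mu `$m side of the square hole, and $`T_c`$ $`\approx `$ 7.187 K is inferred from the statement $`T`$ = 7.151 K = 0.995 $`T_c`$; depending on rounding this gives $`n_s`$ $`\approx `$ 0.18–0.19, consistent with the quoted 0.18.

```python
import math

xi0 = 38e-9        # coherence length at T = 0, m (from the text)
r = 0.2e-6         # effective antidot radius, m (half the 0.4 um side)
T = 7.151          # measurement temperature, K
Tc = 7.151 / 0.995 # critical temperature inferred from T = 0.995*Tc

# Mkrtchyan-Schmidt-type saturation number n_s ~ r / (2*xi(T))
n_s = r / (2 * xi0) * math.sqrt(1 - T / Tc)
print(round(n_s, 2))
```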
In summary, we have shown that Shapiro steps at voltages $`V_n=nV_0`$ are present in the $`V(I)`$ curves of a superconducting film with a square antidot lattice when a rf current is superposed on the dc transport current. These voltage steps are equidistant and the ratio $`V_0/\nu `$ depends on the number of antidot rows enclosed between the voltage contacts and the number of moving vortices per antidot lattice unit cell. From the height of the steps, we were able to prove that only the interstitial vortices are moving, while the vortices occupying the antidots are strongly pinned or creeping incoherently. Therefore, the presence of Shapiro steps should be attributed to the coherent motion of interstitial vortices in superconducting films with a regular array of strong pinning sites.
###### Acknowledgements.
The authors would like to thank A. Gilabert for useful discussions. This work has been supported by the Belgian Interuniversity Attraction Poles (IUAP) and the Flemish Concerted Action (GOA), the TOURNESOL and ESF ’VORTEX’ programs. M. J. Van Bael is a Postdoctoral Research Fellow of the Fund for Scientific Research (FWO-Vlaanderen).
# Mass-detection of a matter concentration projected near the cluster Abell 1942: Dark clump or high-redshift cluster?

Based on observations with the Canada-France-Hawaii Telescope (CFHT) operated by the National Research Council of Canada (CNRC), the Institut des Sciences de l’Univers (INSU) of the Centre National de la Recherche Scientifique (CNRS) and the University of Hawaii (UH) and on data obtained through the NASA/GSFC HEASARC Online archive.
## 1 Introduction
The abundance of clusters of galaxies as a function of mass and redshift provides one of the most sensitive cosmological tests (e.g., Richstone et al. 1992; Bartelmann et al. 1993). In particular, in a high-density Universe, the abundance of massive clusters strongly decreases with redshift, so that the existence of a few massive high-redshift clusters can in principle rule out an $`\mathrm{\Omega }_0=1`$ model (e.g., Eke et al. 1996; Bahcall & Fan 1998).
The reliability of the test depends on the detection efficiency and selection effects in existing samples of clusters whose understanding may be critical. Currently, clusters are selected either by their optical appearance as overdensities of galaxies projected onto the sky and/or in color-magnitude diagrams, or by their X-ray emission. Both selection techniques may bias the resulting sample towards high-luminosity objects, i.e. they would under-represent clusters with high mass-to-(optical or X-ray) light ratio. Furthermore, the observed properties have to be related to their mass in order to compare the observed abundance to cosmological predictions. The usual procedures consist in assuming a dynamical and/or hydrostatic equilibrium state as well as the geometry of the mass distribution, which in general may be questionable and fairly poorly justified from a theoretical point of view.
Indeed, whereas cosmological theories have made great progress in their ability to predict the distribution of dark matter in the Universe, either analytically or numerically (e.g., Lacey & Cole 1993; Jenkins et al. 1998), the luminous properties of matter are much more difficult to model. For example, to relate the X-ray data of a cluster to its mass, a redshift-dependent luminosity-temperature relation needs to be employed (see Borgani et al. 1999 and references therein), in the absence of a detailed understanding of the physics in the intra-cluster gas. It would therefore be of considerable interest to be able to define a sample of ‘clusters’ – or more precisely, dark matter halos – which can be directly compared with the predictions coming from N-body simulations.
Weak gravitational lensing offers an attractive possibility to detect dark matter halos by their mass properties only. A mass concentration produces a tidal gravitational field which distorts the light bundles from background sources. Owing to their assumed random intrinsic orientation, this tidal field can be detected statistically as a coherent tangential alignment of galaxy images around the mass concentration. A method to quantify this tangential alignment was originally introduced by Kaiser et al. (1994) to obtain lower bounds on cluster masses, and later generalized and proposed as a tool for the search of dark matter halos (Schneider 1996). This so-called aperture mass method can be applied to blank field imaging surveys to detect peaks in the projected density field. Combining halo abundance predictions from Press & Schechter (1974) theory with the universal density profile found in N-body simulations (Navarro et al. 1997), Kruse & Schneider (1999) estimated the number density of dark matter halos detectable with this method (with a signal-to-noise threshold of 5) to be of order 10 deg<sup>-2</sup>, for a number density of 30 galaxies/arcmin<sup>2</sup>, and depending on the cosmological model. These predictions were confirmed (Reblinsky et al. 1999) in ray-tracing simulations (Jain et al. 1999) through numerically-generated cosmic density fields.
In this paper, we report the first detection of a dark matter halo not obviously associated with light, using the above-mentioned weak lensing technique. Using a $`14^{\prime }\times 14^{\prime }`$ deep $`V`$-band image, obtained with MOCAM at CFHT, we aimed to investigate the projected mass profile of the cluster Abell 1942 on which the image is centered. We found a highly significant peak in the reconstructed mass map, in addition to that corresponding to the cluster itself. This second peak, located about $`7^{\prime }`$ South of the cluster center, shows up in the alignment statistics of background galaxy images with a significance $`>99.99\%`$, as obtained from Monte-Carlo simulations which randomized the orientation of these background galaxies. An additional deep $`I`$-band image, taken with the UH8K at CFHT, confirms the presence of the mass peak. No obvious large overdensity of galaxies is seen at this location, implying either a mass concentration with a low light-to-mass ratio, or a halo at substantially higher redshift than A1942 itself. Finally, an analysis of an archival ROSAT/HRI image of A1942 shows, in addition to the emission from the cluster, a 3.2-$`\sigma `$ detection of a source with position close to the peak in the projected mass maps; though this weak detection would be of no significance by itself, the positional coincidence with the ‘dark’ clump suggests that it corresponds to the same halo, and that it may be due to a high-redshift ($`z\gtrsim 0.5`$) cluster.
The outline of the paper is as follows: in Sect. 2 we describe the observations and data reduction techniques, as well as the measurement of galaxy ellipticities which we employed. The aperture mass statistic is briefly described in Sect. 3.1 and applied to the optical data sets, together with a determination of the peak detection significance. Properties of the mass concentration as derived from the optical data sets and the X-ray data are discussed in Sects. 3.2 and 3.3, respectively, and a discussion of our findings is provided in Sect. 4. We shall concentrate in this paper mainly on the ‘dark’ clump; an analysis of the mass profile of the cluster A1942 and the reliability of mass reconstruction will be published elsewhere (van Waerbeke et al., in preparation).
## 2 Summary of optical observations and image processing
The $`V`$\- and $`I`$-band observations were obtained at the prime focus of CFHT with the MOCAM and the UH8K cameras, respectively. Both observing procedures were similar, with elementary exposure times of 1800 seconds in $`V`$ and 1200 seconds in $`I`$. A small shift of 10 arc-seconds between pointings was applied in order to remove cosmic rays and to prepare a super-flatfield.
The $`V`$-band images were obtained during an observing run in dark time of June 1995 with the $`4K\times 4K`$ mosaic camera MOCAM (Cuillandre et al. 1997). Each individual chip is a $`2K\times 2K`$ LORAL CCD, with $`0.^{\prime \prime }206`$ per pixel, so the total field-of-view is $`14^{\prime }\times 14^{\prime }`$. Nine images have been re-centered and co-added, to produce a final frame with a total exposure time of 4h30min. The seeing of the coadded image is $`0.^{\prime \prime }74`$.
The $`I`$-band images were obtained with the $`8K\times 8K`$ mosaic camera UH8K (Luppino, Bredthauer & Geary 1994). Each individual chip is a $`2K\times 4K`$ LORAL CCD, also with $`0.^{\prime \prime }206`$ per pixel, giving a field-of-view of $`28^{\prime }\times 28^{\prime }`$. The final centered coadded image resulting from 9 sub-images has a total exposure time of 3h and a seeing of $`0.^{\prime \prime }67`$. The $`V`$\- and $`I`$-band images have been processed in a similar manner, using standard IRAF procedures and some more specific ones developed at CFHT and at the TERAPIX (http://terapix.iap.fr) data center for large-field CCD cameras. None of these procedures had innovative algorithms, so there is basically no difference in the pre-processing and processing of the MOCAM and UH8K images. For the present paper, we use only Chip 3 of the UH8K $`I`$-band image, which contains the cluster A1942 and the additional mass concentration discussed further below. Fig. 1 shows the CCD images from both fields and their relative geometry.
A first object detection and the photometry have been performed with SExtractor 2.0.17 (Bertin & Arnouts 1996). The MOCAM field has been calibrated using the photometric standard stars of the Landolt field SA110 (Landolt 1992), and the UH8K field was calibrated using the Landolt fields SA104 and SA110. The completeness limits are $`V=26`$ and $`I=24.5`$.
The lensing analysis was done with the imcat software, based on the method for analysing weak shear data by Kaiser, Squires & Broadhurst (1995), with modifications described in Luppino & Kaiser (1997) and Hoekstra et al. (1998; hereafter HFKS98). This method is based on calculations of weighted moments of the light distribution. Imcat is specifically designed for the measurement of ellipticities of faint and small galaxy images, and their correction for the smearing of images by a PSF, and for any anisotropy of the PSF which could mimic a shear signal. These corrections are applied through the relation
$$\chi =\chi ^0+P^\gamma \gamma +P^{\mathrm{sm}}p,$$
$`(1)`$
where $`\chi `$ is the observed image ellipticity (defined as in, e.g., Schneider & Seitz 1995), $`\chi ^0`$ is the ellipticity of the unlensed source smeared by the isotropic part of the PSF, $`P^\gamma `$ is the response tensor of the image ellipticity to a shear, and $`P^{\mathrm{sm}}`$ is the response tensor to an anisotropic part of the PSF, characterized by $`p`$. These tensors are calculated for each galaxy image individually. Since the expectation value of $`\chi ^0`$ in (1) is zero, one obtains an unbiased estimate of the shear through
$$\widehat{\gamma }=(P^\gamma )^{-1}\left[\chi -P^{\mathrm{sm}}p\right].$$
$`(2)`$
($`\widehat{\gamma }`$ is in reality an estimate for the reduced shear $`\gamma /(1-\kappa )`$ which reduces to the shear if $`\kappa \ll 1`$.) The PSF anisotropy in our images is fairly small and regular over the field. We selected bright, unsaturated stars from a size vs. magnitude plot (see Fig. 2) and determined their ellipticities. As Fig. 3 shows, the stellar ellipticity changes very smoothly over the fields so that its behaviour can be easily fit with a second-order polynomial (see also Fig. 4). With these polynomials we performed the anisotropy correction in (1). We follow the prescription of HFKS98 for the calculation of $`P^\gamma `$, and used the full tensors, not just their trace-part, in (2).
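The per-galaxy correction of Eqs. (1)–(2) can be sketched in a few lines. This is a minimal illustration, not the actual imcat implementation; the numerical tensor values below are made-up placeholders for a single galaxy:

```python
import numpy as np

def shear_estimate(chi, P_gamma, P_sm, p):
    """Shear estimate per Eq. (2): gamma_hat = (P^gamma)^(-1) [chi - P^sm p].

    chi, p        : 2-component observed ellipticity and PSF-anisotropy vectors
    P_gamma, P_sm : 2x2 shear and smear response tensors for this galaxy
    """
    return np.linalg.solve(P_gamma, chi - P_sm @ p)

# Made-up example values for one galaxy:
chi = np.array([0.20, 0.00])   # observed image ellipticity
p = np.array([0.10, 0.00])     # local PSF anisotropy (from the polynomial fit)
P_sm = np.eye(2)               # smear response tensor (placeholder)
P_gamma = 2.0 * np.eye(2)      # shear response tensor (placeholder)

gamma_hat = shear_estimate(chi, P_gamma, P_sm, p)  # -> [0.05, 0.0]
```

Using `np.linalg.solve` rather than explicitly inverting $`P^\gamma `$ is the standard numerically stable way to apply the inverse response tensor.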
The current version of imcat does not give information about the quality of objects; for this we produced a SExtractor (version 2.0.20) catalog containing all objects that had at least six connected pixels 1-$`\sigma `$ above the local sky background. From this catalog we sorted out all objects with potential problems for shape estimation (like being deblended with another object or having a close neighbour). This included all objects with FLAGS $`\ge 2`$ (internal SExtractor flag). The remaining catalog was matched with the corresponding imcat catalog, using a maximum positional difference of three pixels, and keeping only those objects for which the detection signal-to-noise of imcat was $`\ge 7`$.
This procedure left us with 4190 objects ($`V>22.0`$) for the MOCAM and 1708 objects ($`I>21.0`$) for the $`I`$-band chip3. With these final catalogs all subsequent analysis was done. We note that we did not cross-correlate the MOCAM and UH8K catalogs; hence, the galaxies taken from both catalogs will be different even in the region of overlap. Due to the different waveband used for object selection, the redshift distribution of the background galaxies selected on the MOCAM and the UH8K-chip3 frame can be different.
## 3 Analysis of the ‘dark’ clump
### 3.1 Weak lensing analysis
From the image ellipticities of ‘background’ galaxies, we have first reconstructed the two-dimensional mass map of the cluster field from the MOCAM data, using the maximum-likelihood method described in Bartelmann et al. (1996) and independently, the method described in Seitz & Schneider (1998). The resulting mass maps are very similar, and we show the former of these only.
In the left panel of Fig. 5, we show the resulting mass map with the (mass-sheet degeneracy) transformation parameter $`\lambda `$ chosen such that $`\left\langle \kappa \right\rangle =0`$ (see Schneider & Seitz 1995), together with contours of the smoothed number density of bright galaxies. In general, this number density correlates quite well with the reconstructed surface mass density. As can be seen, a prominent mass peak shows up centered right on the brightest cluster galaxy.
In addition to this mass peak, several other peaks are present in the mass map. Such peaks may partly be due to noise coming from the intrinsic image ellipticities and, to a lesser degree, to errors in the determination of image ellipticities. In order to test the statistical significance of the mass peaks, we used the aperture mass method (Schneider 1996).
Let $`U(\vartheta )`$ be a filter function which vanishes for $`\vartheta >\theta `$, and which has zero mean, $`\int _0^\theta \mathrm{d}\vartheta \,\vartheta \,U(\vartheta )=0`$. Then we define the aperture mass $`M_{\mathrm{ap}}(\vartheta )`$ at position $`\vartheta `$ as
$$M_{\mathrm{ap}}(\vartheta )=\int _{|\vartheta ^{\prime }|\le \theta }\mathrm{d}^2\vartheta ^{\prime }\,\kappa (\vartheta +\vartheta ^{\prime })\,U(|\vartheta ^{\prime }|).$$
$`(3)`$
Hence, $`M_{\mathrm{ap}}(\vartheta )`$ is a filtered version of the density field $`\kappa `$; it is invariant with respect to adding a homogeneous mass sheet or a linear density field, and is positive if centered on a mass peak with size comparable to the filter scale $`\theta `$. The nice feature about this aperture mass is that it can be expressed directly in terms of the shear, as
$$M_{\mathrm{ap}}(\vartheta )=\int _{|\vartheta ^{\prime }|\le \theta }\mathrm{d}^2\vartheta ^{\prime }\,\gamma _\mathrm{t}(\vartheta ^{\prime };\vartheta )\,Q(|\vartheta ^{\prime }|)$$
$`(4)`$
(Kaiser et al. 1994; Schneider 1996), where the filter function $`Q(\vartheta )=\frac{2}{\vartheta ^2}\int _0^\vartheta \mathrm{d}\vartheta ^{\prime }\,\vartheta ^{\prime }\,U(\vartheta ^{\prime })-U(\vartheta )`$ is determined in terms of $`U(\vartheta )`$, and vanishes for $`\vartheta >\theta `$. The tangential shear $`\gamma _\mathrm{t}(\vartheta ^{\prime };\vartheta )`$ at relative position $`\vartheta ^{\prime }`$ with respect to $`\vartheta `$ is defined as
$$\gamma _\mathrm{t}(\vartheta ^{\prime };\vartheta )=-\mathrm{Re}\left[\gamma (\vartheta +\vartheta ^{\prime })\,\mathrm{e}^{-2\mathrm{i}\phi ^{\prime }}\right],$$
$`(5)`$
where $`\phi ^{\prime }`$ is the polar angle of the vector $`\vartheta ^{\prime }`$. In the case of weak lensing ($`\kappa \ll 1`$), the observed image ellipticities $`\widehat{\gamma }`$ from (2) are an unbiased estimator of the local shear, and so the aperture mass can be obtained by summing over image ellipticities as
$$M_{\mathrm{ap}}^{\prime }(\vartheta )=\frac{\pi \theta ^2}{N}\underset{i}{\sum }\widehat{\gamma }_{\mathrm{t}i}(\vartheta )\,Q(|\theta _i-\vartheta |),$$
$`(6)`$
where the sum extends over all $`N`$ galaxy images with positions $`\theta _i`$ which are located within $`\theta `$ of $`\vartheta `$, and the tangential component $`\widehat{\gamma }_{\mathrm{t}i}(\vartheta )`$ of the image ellipticity relative to the position $`\vartheta `$ is defined in analogy to $`\gamma _\mathrm{t}`$. In general, $`M_{\mathrm{ap}}^{\prime }(\vartheta )`$ is not an unbiased estimator of $`M_{\mathrm{ap}}(\vartheta )`$ since the expectation value of $`\widehat{\gamma }`$ is the reduced shear, not the shear itself. However, unless the aperture includes a strong mass clump where $`\kappa `$ is not small compared to unity, $`M_{\mathrm{ap}}^{\prime }`$ will approximate $`M_{\mathrm{ap}}`$ closely. But even if the weak-lensing approximation breaks down for part of the aperture, one can consider the quantity $`M_{\mathrm{ap}}^{\prime }(\vartheta )`$ in its own right, representing the tangential alignment of galaxy images with respect to the point $`\vartheta `$. This interpretation also remains valid if the aperture is centered on a position which is less than $`\theta `$ away from the boundary of the data field, so that part of the aperture is located outside the data field, in which case $`M_{\mathrm{ap}}^{\prime }(\vartheta )`$ will not be a reliable estimator of $`M_{\mathrm{ap}}(\vartheta )`$.
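The discrete estimator of Eq. (6) is straightforward to implement. The sketch below uses one common compensated filter from the polynomial family of Schneider (1996), $`Q(x)=\frac{6}{\pi \theta ^2}x^2(1-x^2)`$ with $`x=\vartheta /\theta `$, normalized so that $`\int Q\,\mathrm{d}A=1`$; the paper does not state which $`U`$ was actually used, so this choice is an assumption:

```python
import numpy as np

def aperture_mass(x, y, e1, e2, x0, y0, theta):
    """Discrete aperture-mass estimator M_ap'(x0, y0), Eq. (6).

    x, y   : galaxy image positions (same angular units as theta)
    e1, e2 : two components of the per-galaxy shear estimates
    theta  : aperture radius
    """
    dx, dy = x - x0, y - y0
    r = np.hypot(dx, dy)
    inside = (r < theta) & (r > 0)
    phi = np.arctan2(dy[inside], dx[inside])
    # tangential component relative to (x0, y0): e_t = -(e1 cos 2phi + e2 sin 2phi)
    et = -(e1[inside] * np.cos(2 * phi) + e2[inside] * np.sin(2 * phi))
    s = (r[inside] / theta) ** 2
    Q = 6.0 / (np.pi * theta**2) * s * (1.0 - s)   # compensated filter
    N = inside.sum()
    return np.pi * theta**2 / N * np.sum(et * Q)
```

With this normalization, a field of perfectly tangentially aligned images with $`e_\mathrm{t}=g`$ yields $`M_{\mathrm{ap}}^{\prime }\approx g`$, which makes the statistic easy to sanity-check on synthetic catalogs.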
In order to determine the significance of the peaks in the mass map shown in Fig. 5, we have calculated $`M_{\mathrm{ap}}^{\prime }`$ on a grid of points $`\vartheta `$ over the data field, for four values of the filter scale $`\theta `$. Then, we have randomized the position angles of all galaxy images, and calculated $`M_{\mathrm{ap}}^{\prime }`$ on the same grid for these randomized realizations. This has been repeated $`N_{\mathrm{rand}}`$ times. Finally, at each grid point the fraction $`\nu `$ of randomizations where $`M_{\mathrm{ap}}^{\prime }`$ is larger than the measured value from the actual data has been obtained; this fraction (which we shall call ‘error level’ in the following) is the probability of finding a value of $`M_{\mathrm{ap}}^{\prime }`$ at that gridpoint for randomly oriented galaxy images, but with the same positions and ellipticities as the observed galaxies.
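The randomization test just described can be sketched as a self-contained toy version. Here `tangential_stat` is a simplified stand-in (the unweighted mean tangential ellipticity inside the aperture) rather than the full filtered aperture-mass statistic; only the position angles are randomized, while the positions and ellipticity moduli are kept:

```python
import numpy as np

def tangential_stat(x, y, e1, e2, x0, y0, theta):
    """Mean tangential ellipticity inside the aperture (simplified statistic)."""
    dx, dy = x - x0, y - y0
    inside = np.hypot(dx, dy) < theta
    phi = np.arctan2(dy[inside], dx[inside])
    et = -(e1[inside] * np.cos(2 * phi) + e2[inside] * np.sin(2 * phi))
    return et.mean()

def error_level(x, y, e1, e2, x0, y0, theta, n_rand=1000, seed=0):
    """Fraction nu of randomized-orientation catalogs whose statistic
    exceeds the measured one (the 'error level' of the text)."""
    rng = np.random.default_rng(seed)
    obs = tangential_stat(x, y, e1, e2, x0, y0, theta)
    mod = np.hypot(e1, e2)          # keep |e|, randomize the position angle
    count = 0
    for _ in range(n_rand):
        ang = rng.uniform(0, 2 * np.pi, size=mod.size)
        if tangential_stat(x, y, mod * np.cos(ang), mod * np.sin(ang),
                           x0, y0, theta) > obs:
            count += 1
    return count / n_rand
```

A strongly tangentially aligned catalog should return an error level near zero, while pure noise returns values distributed roughly uniformly between 0 and 1.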
Fig. 6 displays the contours of constant $`\nu `$, for different filter radii, varying from $`80^{\prime \prime }`$ to $`200^{\prime \prime }`$. As can be seen, the cluster center shows up prominently in the $`\nu `$-map on all scales. In addition, two highly significant peaks show up, one at the upper right corner, the other $`7^{}`$ South of the cluster center, close to the edge of the MOCAM field. We have verified the robustness of this Southern peak by using SExtractor ellipticities instead of those from imcat, and found both the cluster components and the Southern peak also with that catalog (although it should be much less suited for weak lensing techniques).
After these findings, we obtained the UH8K $`I`$-band image, on which both the cluster and the Southern mass peak are located on Chip 3. The mass reconstruction from galaxy images on Chip 3 are shown in the right panel of Fig. 5, from which we see that the cluster and this Southern mass peak also show up. Repeating the aperture mass statistics for Chip 3, we obtain the error levels as shown in Fig. 7; again, this Southern peak shows up at very high significance. Whereas the third peak in the significance maps (considering the two larger filter scales) from Chip 3, about halfway between cluster and the Southern component and slightly to the West, is also quite significant and is also seen in the corresponding MOCAM map (and most likely also corresponds to a mass peak, though a highly elongated one for which the aperture mass is less sensitive), we shall concentrate on the Southern peak, which we call, for lack of a better name, the ‘dark clump’.
In fact, as can be seen from Figures 1 and 5, this mass peak does not seem to be associated with any concentration of brighter galaxies. This could mean two things: either, the mass concentration is in fact associated with little light, or is at much higher redshift than A1942 itself.
Concentrating on the location of the dark clump, we determined the probability distribution $`p_0(M_{\mathrm{ap}}^{\prime })`$ for the value of $`M_{\mathrm{ap}}^{\prime }`$, obtained from $`2\times 10^6`$ randomizations of the galaxy orientations within $`160^{\prime \prime }`$ of the dark clump. This probability distribution is shown as the solid (from MOCAM) and dashed (from Chip 3) curve on the left of Fig. 8. These two distributions are very well approximated by a Gaussian, as expected from the central limit theorem. The value of $`M_{\mathrm{ap}}^{\prime }`$ at the dark clump is $`0.0395`$ for MOCAM, and $`0.0283`$ for Chip 3. The fact that these two values are different is not problematic, since for Chip 3, the whole aperture fits inside the data field, whereas it is partially outside for MOCAM; hence, the two values of $`M_{\mathrm{ap}}^{\prime }`$ measure a different tangential alignment. Also, since the two data sets use galaxies selected in a different waveband, their redshift distribution can be different, yielding different values of the resulting lens strength. The probability that a randomization of image orientations yields a value of $`M_{\mathrm{ap}}^{\prime }`$ larger than the observed one is $`10^{-6}`$ for the MOCAM field, and $`4.2\times 10^{-4}`$ for Chip 3.
Next we investigate whether the highly significant value of $`M_{\mathrm{ap}}^{\prime }`$ at the dark clump comes from a few galaxy images only. For this, the sample of galaxy images inside the aperture was bootstrap resampled, to obtain the probability $`p_{\mathrm{boot}}(M_{\mathrm{ap}}^{\prime })`$ that this resampling yields a particular value of $`M_{\mathrm{ap}}^{\prime }`$. This probability is also shown in Fig. 8. The probability that the bootstrapped value of $`M_{\mathrm{ap}}^{\prime }`$ is negative is $`3.8\times 10^{-4}`$ for Chip 3, and $`<10^{-6}`$ for the MOCAM peak.
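The bootstrap test can be sketched as follows. Here `contrib` is a hypothetical array holding each galaxy's individual term in the sum of Eq. (6), i.e. $`\frac{\pi \theta ^2}{N}\widehat{\gamma }_{\mathrm{t}i}Q_i`$; resampling these with replacement and recording how often the total goes negative reproduces the quoted probabilities:

```python
import numpy as np

def bootstrap_negative_fraction(contrib, n_boot=5000, seed=0):
    """Bootstrap-resample per-galaxy contributions to M_ap' and return
    the fraction of resamples whose total is negative."""
    rng = np.random.default_rng(seed)
    n = contrib.size
    idx = rng.integers(0, n, size=(n_boot, n))   # resample with replacement
    totals = contrib[idx].sum(axis=1)
    return np.mean(totals < 0.0)
```

If the signal were carried by a handful of galaxies, resamples omitting them would frequently turn the total negative; the very small fractions quoted in the text indicate the opposite.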
The radial dependence of the tangential image ellipticity is considered next. Fig. 9 shows the mean tangential image ellipticity in annuli of width $`20^{\prime \prime }`$, both for the MOCAM and the UH8K data centered on the dark clump. The error bars show the 80% probability interval obtained again from bootstrapping. It is reassuring that the radial behaviour of $`\widehat{\gamma }_\mathrm{t}`$ is very similar on the two data sets. In fact, owing to the different wavebands of the two data fields and the fact that the aperture does not fit inside the MOCAM field, this agreement is better than one might expect. The mean tangential ellipticity is positive over a large angular range; except for one of the inner bins (for which the error bar is fairly large), $`\widehat{\gamma }_\mathrm{t}`$ is positive in all bins for $`\theta \lesssim 150^{\prime \prime }`$. This figure thus shows that the large and significant value of $`M_{\mathrm{ap}}^{\prime }`$ at the dark clump is not dominated by galaxy images at a particular angular separation.
### 3.2 Properties of the dark clump
We now investigate some physical properties of our dark clump candidate. We first argue that it is very unlikely for our object to lie at a redshift higher than 1. For our magnitude limit of $`24.5`$ in the $`I`$ band we expect approximately 30 galaxies/$`(1^{\prime })^2`$. We used approximately half of them (see Sec. 2) as putative background galaxies for our analysis. The median of simulated redshift distributions that extend the CFRS data (Lilly et al. 1995) to fainter magnitude limits (Baugh, Cole & Frenk 1996) is at about $`z\simeq 0.7`$–$`0.8`$. If we assume that all our galaxies lie in the extreme tail of these distributions, then $`z=1.0`$ represents a good upper limit for the redshift of our clump. However, the lensing analysis of the high-redshift cluster MS1054$`-`$03 (Luppino & Kaiser 1997) may provide an indication for a somewhat larger mean source redshift.
Next we use Fig. 9 to obtain a crude estimate of the mass of this object. Although the tangential shear appears to be fairly small close to the center position of the clump, there is a region between $`50^{\prime \prime }`$ and $`150^{\prime \prime }`$ where the tangential shear is clearly positive and decreases smoothly with radius. If we describe the mass profile by an isothermal sphere, its velocity dispersion $`\sigma _v`$ would be given by
$$\left(\frac{\sigma _v}{c}\right)^2=\frac{1}{2\pi }(\gamma _\mathrm{t}\theta )\left\langle \frac{D_{\mathrm{ds}}}{D_\mathrm{s}}\right\rangle ^{-1},$$
$`(7)`$
where the product $`\gamma _\mathrm{t}\theta `$ would be independent of $`\theta `$ for an isothermal sphere model, and the final term is the ratio lens-source to observer-source distance, averaged over the background galaxy population. Introducing fiducial parameters, this becomes
$$\sigma _v=1135\sqrt{\frac{\gamma _{100}}{0.06}}\sqrt{\frac{1}{3\left\langle D_{\mathrm{ds}}/D_\mathrm{s}\right\rangle }}\,\mathrm{km}/\mathrm{s},$$
$`(8)`$
where $`\gamma _{100}`$ is the tangential shear $`100^{\prime \prime }`$ from the mass center. Alternatively, we can express this result in terms of the mass within a sphere of radius $`R`$, $`M(<R)=2\sigma _v^2R/G`$; for example, within $`R=0.5h^{-1}\mathrm{Mpc}`$, we find
$$M(<0.5h^{-1}\mathrm{Mpc})=2.9\times 10^{14}h^{-1}M_{\odot }\,\frac{\gamma _{100}}{0.06}\,\frac{1}{3\left\langle D_{\mathrm{ds}}/D_\mathrm{s}\right\rangle }.$$
$`(9)`$
Whereas this model is quite crude, the largest uncertainty in quantitative mass estimates comes from the unknown redshift of the dark clump and the unknown redshift distribution of the background galaxy population. The mass is a monotonically increasing function of the lens redshift, and depends very strongly on the assumed mean source redshift, in particular for values of $`z_\mathrm{d}\gtrsim 0.5`$.
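The numbers in Eqs. (7)–(9) can be reproduced with a few lines. This is a sketch of the isothermal-sphere estimate only; the constant `G` is expressed in Mpc (km/s)² / M⊙, and the mass comes out in $`h^{-1}M_{\odot }`$ when $`R`$ is given in $`h^{-1}`$ Mpc:

```python
import math

C_KM_S = 2.998e5                  # speed of light [km/s]
G = 4.301e-9                      # gravitational constant [Mpc (km/s)^2 / M_sun]
ARCSEC = math.pi / (180 * 3600)   # radians per arcsecond

def sis_velocity_dispersion(gamma_t, theta_arcsec, dds_over_ds):
    """Eq. (7): singular-isothermal-sphere velocity dispersion [km/s]
    from the tangential shear gamma_t at angular radius theta."""
    theta = theta_arcsec * ARCSEC
    return C_KM_S * math.sqrt(gamma_t * theta / (2 * math.pi) / dds_over_ds)

def sis_mass(sigma_v, R_mpc):
    """M(<R) = 2 sigma_v^2 R / G for the isothermal sphere [M_sun]."""
    return 2 * sigma_v**2 * R_mpc / G
```

With the fiducial values $`\gamma _{100}=0.06`$ and $`\left\langle D_{\mathrm{ds}}/D_\mathrm{s}\right\rangle =1/3`$ this gives $`\sigma _v\approx 1100`$ km/s and $`M(<0.5h^{-1}\,\mathrm{Mpc})\approx 2.9\times 10^{14}h^{-1}M_{\odot }`$, consistent with Eqs. (8) and (9).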
With the $`I`$ band data we now estimate the light coming from the dark clump. For this we created a SExtractor catalog counting every connected area with at least 3 pixels 0.5-$`\sigma `$ above the sky background as a potential object. The flux of all these objects (except for obvious stars) in a circle of $`100^{\prime \prime }`$ radius around the clump center was summed up. We did the same in 32 control circles around ‘empty’ regions in the other UH8K chips. It turned out that the flux within the clump region is compatible with the mean flux of the control circles, i.e., there is no overdensity of light at the position of the dark clump. So we took the 1-$`\sigma `$ fluctuation of the fluxes in the control circles as a reasonable upper limit for the light coming from the dark clump. For converting the flux into a total $`I`$ band magnitude we assumed that the light is dominated by elliptical galaxies, using $`K`$ corrections for this galaxy type calculated with the latest version of the Bruzual & Charlot stellar population synthesis models for the spectrophotometric evolution of galaxies (Bruzual & Charlot 1993). From the total $`I`$ band magnitude we derived a bolometric magnitude and a bolometric luminosity using standard approximations. With a lower limit for the mass and an upper limit for the luminosity we can give lower limits for the mass-to-light ratio of our object. This is shown in Fig. 10 for different source redshift distributions and two cosmologies. We see that the EdS universe gives fairly high $`M/L`$ estimates in comparison to a $`\mathrm{\Omega }=0.3`$, $`\mathrm{\Lambda }=0.7`$ model. When we assume a redshift of $`z\simeq 0.8`$ for our clump we obtain a lower limit of $`M/L\gtrsim 300`$ in the $`\mathrm{\Lambda }`$ cosmology. This is a conservative lower limit which could be lowered significantly only if one assumes that the redshift distribution of the faint galaxies extends to substantially higher redshift.
As the dark clump has a mass characteristic of massive clusters it is of interest to search for X-ray emission associated with it.
### 3.3 The X-Ray data analysis
A1942 was observed by the ROSAT HRI in August 1995. The total integration time was 44,515 s. We retrieved the X-ray images from the public archive and reduced them using ESAS, Snowden’s code especially developed for the analysis of extended sources in ROSAT data (Snowden et al. 1994; Snowden & Kuntz 1998).
The region showing a significant peak in the weak lensing reconstructed mass map is within the field of view of the HRI image of A1942. We have searched for X-ray emission in this area. First of all, we have refined the astrometry in the X-ray image by matching X-ray point sources to objects in our deep optical images. The astrometric offset from the original instrument coordinates is $`3.5^{\prime \prime }`$. There is a significant X-ray emission peak centered at 14<sup>h</sup> 38<sup>m</sup> 22.8<sup>s</sup>, $`3^{\circ }`$ $`33^{\prime }`$ $`11^{\prime \prime }`$ (J2000.0). This position is $`60^{\prime \prime }`$ away from the weak lensing mass peak. The X-ray source is detected at the 3.2-$`\sigma `$ level using an aperture of $`30^{\prime \prime }`$ radius. Although the number of counts detected is low, its distribution is inconsistent with a point-like source, showing a profile elongated along the NW-SE direction that is broader than the instrumental PSF.
We have measured the source count-rate using concentric circular apertures centered on the X-ray emission peak. We obtain a count-rate of $`(7.4\pm 2.5)\times 10^{-4}\,\mathrm{s}^{-1}`$ within a circular aperture of $`45^{\prime \prime }`$ radius. The counts still increase somewhat at larger radii but the measurement is much noisier given the uncertainty in the sky determination. The total flux is thus approximately 10-30% larger than the above value. We convert the count-rate into a flux assuming an incident spectrum of $`T=3`$ keV and a local hydrogen column density of $`N_H=2.61\times 10^{21}`$ cm<sup>-2</sup>. The resulting unabsorbed flux is $`(3.4\pm 1.2)\times 10^{-14}`$ erg cm<sup>-2</sup> s<sup>-1</sup> in the 0.1-2.4 keV band. We have also fitted a standard beta profile (Cavaliere & Fusco-Femiano 1978) to the azimuthally averaged radial profile. We obtain best-fit values for the core radius and beta parameter (the slope of the decline at large radii) of $`15^{\prime \prime }`$ and 0.80, respectively, although these values are quite uncertain given the low total number of counts.
The X-ray luminosity depends on the redshift of the source. Assuming an incident spectrum at the detector of $`T=3`$ keV \[$`T=3(1+z)\mathrm{keV}`$ at the source\], the rest-frame X-ray luminosity in the 0.1-2.4 keV band would range from $`(1.9\pm 2.5)\times 10^{42}h^{-2}`$ erg s<sup>-1</sup> if the redshift is the same as that of A1942 ($`z=0.223`$) to $`(3.5\pm 0.5)\times 10^{43}h^{-2}`$ erg s<sup>-1</sup> if $`z=1.0`$ ($`q_o=0.5`$).
We have also made a crude estimate of the mass of the system. On the one hand, if we assume an X-ray luminosity–temperature relation (e.g., Reichart et al. 1999, Arnaud & Evrard 1999) and a temperature–mass relation (e.g., Mohr et al. 1999), we can get mass estimates at a $`0.5h^{-1}`$ Mpc radius from $`1.5\times 10^{13}h^{-1}M_{\odot }`$ at $`z=0.223`$ to $`1.6\times 10^{14}h^{-1}M_{\odot }`$ at $`z=1`$ ($`q_o=0.5`$). We can also assume a beta profile, fixing the core radius and the beta parameter, and compute the normalization necessary to obtain the observed flux at the measured radius. Then we can integrate the profile to obtain the gas mass. If we further assume a gas fraction, we can also obtain a total mass estimate. If we take the values obtained from our previous fit of the X-ray surface brightness profile, we get total masses at a radius of $`0.5h^{-1}`$ Mpc, of $`9.2\times 10^{12}h^{-1}M_{\odot }`$ at $`z=0.223`$ and $`2.3\times 10^{13}h^{-1}M_{\odot }`$ at $`z=1`$ ($`q_o=0.5`$). Note the difference of a factor of 1.5 and 7 compared to the previous estimates. This gives an indication of the errors involved. If instead we were to use typical values of the core radius and beta parameter of most clusters of galaxies (e.g., $`r_c=0.125h^{-1}`$ Mpc and $`\beta =2/3`$) the mass estimates would be approximately a factor 3 larger and closer to the estimates using standard correlations.
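The gas-mass integration step can be sketched as follows. This is only an illustration of the beta-model integral; the normalization `n0_cm3` is a hypothetical input that, in the analysis above, would be fixed by matching the observed X-ray flux, and `mu_e` is an assumed mean mass per electron in proton masses:

```python
import numpy as np

M_P = 1.673e-27    # proton mass [kg]
MPC_M = 3.086e22   # meters per Mpc
MSUN = 1.989e30    # solar mass [kg]

def beta_model_gas_mass(n0_cm3, r_c_mpc, beta, R_mpc, mu_e=1.17):
    """Gas mass within radius R [M_sun] for a beta-model electron density
    n_e(r) = n0 * (1 + (r/r_c)^2)^(-3*beta/2)."""
    r = np.linspace(1e-4, R_mpc, 4000)                           # [Mpc]
    n_e = n0_cm3 * (1.0 + (r / r_c_mpc) ** 2) ** (-1.5 * beta)   # [cm^-3]
    rho = mu_e * M_P * n_e * 1e6                                 # [kg m^-3]
    integrand = 4.0 * np.pi * (r * MPC_M) ** 2 * rho             # [kg/m]
    dr = (r[1] - r[0]) * MPC_M
    return integrand.sum() * dr / MSUN
```

Dividing the resulting gas mass by an assumed gas fraction then gives the total-mass estimate quoted in the text; the mass scales linearly with the flux-derived normalization, which is why the choice of core radius and beta dominates the systematic uncertainty.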
Although we have presented quantitative values for the mass of the system based on the X-ray data, these should be taken only as indicative, given the assumptions and errors involved. Our main point in presenting these estimates is to show that this system has the X-ray properties of a galaxy group if it is at the same redshift as A1942. The lensing shear signal measured would then be too large for such a group, unless it had a remarkably high mass-to-X-ray-luminosity ratio. It seems more plausible that the system is a more massive cluster of galaxies at a higher redshift, if the X-ray and lensing signals do indeed come from the same source, although the X-ray derived mass is still lower than the one obtained from the shear signal. The small angular-scale X-ray core radius (a larger physical scale if at larger redshift) and the lack of bright galaxies also point towards the same conclusion.
As an alternative, the X-ray emission may be unrelated to the dark clump, but associated with the small galaxy number overdensity projected near it, as seen from the black contours in the right-hand panel of Fig. 5. In that case, both the local enhancement of the galaxy density and the X-ray emission may be compatible with a group of galaxies, rather than a massive cluster, as indicated by the weak lensing analysis.
## 4 Discussion and conclusions
Using weak lensing analysis on a deep high-quality wide-field $`V`$-band image centered on the cluster Abell 1942, we have detected a mass concentration some $`7^{\prime }`$ South of the cluster. This detection was confirmed by a deep $`I`$-band image. No clear overdensity of bright galaxies spatially associated with this mass concentration is seen; therefore, we termed it the ‘dark clump’. A slight overdensity of galaxies is seen $`1^{\prime }`$ away from the mass center of the dark clump, but it is unclear at present whether it is physically associated with the mass concentration. Archival X-ray data allowed us to detect a 3.2-$`\sigma `$ X-ray source near the dark clump, separated by 60 arcseconds from its peak; it appears to be extended. The X-ray source is spatially coincident with the slight galaxy overdensity.
We have estimated the significance of the detection of this mass peak using several methods. For the $`V`$-band image, the probability that this mass peak is caused by random noise of the intrinsic galaxy ellipticities is $`10^{-6}`$; a similar estimate from the $`I`$-band image yields a probability of $`4\times 10^{-4}`$. Thus, the mass peak is detected with extremely high statistical significance. A bootstrapping analysis has shown that the tangential image alignment is not dominated by a few galaxy images, as also confirmed by the smooth dependence of the tangential shear on the angular separation from its center. Whereas these statistical tests cannot exclude systematic effects during observations, data reduction, and ellipticity determination, the fact that this dark clump is seen in two independent images, taken in different filters and with different cameras, makes such systematics as the cause of the strong alignment highly unlikely. Although we have accounted for the slight anisotropy of the PSF, the uncorrected image ellipticities yield approximately the same result.
A simple mass estimate of the dark clump shows it to be truly massive, with the exact value depending strongly on its redshift and the redshift distribution of the faint background galaxies. The mass inside a sphere of radius $`0.5h^{-1}`$ Mpc is $`\gtrsim 10^{14}h^{-1}M_{\odot }`$ if an isothermal sphere model is assumed; if the lens redshift is larger, this lower mass limit increases, by about a factor of 2 for $`z\sim 0.5`$ and a factor of about 10 for $`z\sim 1`$. In any case, this mass estimate appears to be incompatible with the X-ray flux if the dark clump corresponds to a ‘normal cluster’ at any redshift. We therefore conclude that the mass concentration, though of a mass that is characteristic of a massive cluster, is not a typical cluster. This conclusion is independent of whether the X-ray emission is physically associated with the dark clump or not.
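To make the scaling behind this number concrete, the projected mass of a singular isothermal sphere within a cylinder of radius $`R`$ is $`M_{2\mathrm{D}}(<R)=\pi \sigma _v^2R/G`$. A minimal sketch (the velocity dispersion used here is an illustrative assumption chosen to reproduce the quoted lower limit, not a value from the text):

```python
import math

G = 4.302e-9  # gravitational constant in Mpc * (km/s)^2 / M_sun

def sis_projected_mass(sigma_v, R):
    """Projected mass of a singular isothermal sphere within cylinder radius R.

    sigma_v: line-of-sight velocity dispersion in km/s
    R: radius in Mpc
    returns: mass in solar masses
    """
    return math.pi * sigma_v**2 * R / G

# A dispersion of ~520 km/s reproduces ~1e14 M_sun within 0.5 Mpc (h = 1),
# i.e. the lensing limit already corresponds to a rich-cluster dispersion.
M = sis_projected_mass(520.0, 0.5)
```

Inverting the same relation shows why the X-ray data are in tension with the shear: a group-like dispersion of a few hundred km/s would fall an order of magnitude short of the quoted mass.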
The lack of an obvious concentration of galaxies near the mass peak has been transformed into an upper limit on the luminosity associated with the mass concentration, and therefore into a lower limit on the mass-to-light ratio. This $`M/L`$ limit again depends strongly on the redshift distribution of the faint galaxies, as well as on the assumed clump redshift. Whereas values for $`M/L`$ as low as $`200`$ (in solar units) are theoretically possible if the clump has a redshift in excess of unity, the corresponding mass becomes excessively and unrealistically large; for more reasonable redshifts $`z_\mathrm{d}\lesssim 0.8`$, $`M/L\gtrsim 450`$ for an Einstein-de Sitter Universe, and $`M/L\gtrsim 300`$ for a low-density flat Universe. We would like to point out, though, that estimates of the $`M/L`$ ratio quoted in the literature practically never assume a $`\mathrm{\Lambda }`$-dominated cosmology, so that the $`M/L`$ ratio quoted above for the low-density Universe cannot be directly compared to literature values.
We can only speculate about the nature of this dark clump. As argued above, a normal cluster seems to be ruled out, owing to the lack of bright X-ray emission. Whereas the estimated X-ray luminosity can be increased by shifting the putative cluster to higher redshifts, the corresponding lens mass also increases with $`z_\mathrm{d}`$, in a way which depends on the redshift distribution of the source galaxies. The spatial coincidence of the slight galaxy overdensity and the X-ray emission, both $`1^{\prime }`$ away from the mass center of the dark clump, may best be interpreted as a galaxy group or weak cluster at relatively low redshift and not associated with the dark clump.
The dark clump itself may then be a mass concentration with either low baryon density or low temperature, or both. For example, it may correspond to a cluster in the process of formation where the gas has not yet been heated to the virial temperature so that the X-ray luminosity is much lower than expected for a relaxed cluster. The fact that the tangential shear decreases towards the center of the mass clump may indeed be an indication of a non-relaxed halo.
Further observations may elucidate the nature of this mass concentration. Deep infrared images of this region will allow us to check whether an overdensity of IR-selected galaxies can be detected, as would be expected for a high-redshift cluster, together with an early-type sequence in the color-magnitude diagram. A deep image with the Hubble Space Telescope would yield a higher-resolution mass map of the dark clump, owing to the large number density of galaxies for which a shape can be measured, and thus determine its radial profile with better accuracy. Images in additional (optical and IR) wavebands can be used to estimate photometric redshifts for the background galaxies. In conjunction with an HST image, one might obtain ‘tomographic’ information, i.e., measuring the lens strength as a function of background source redshift; this would then yield an estimate of the lens redshift. The upcoming X-ray missions will be considerably more sensitive than the ROSAT HRI and will therefore be able to study the nature of the X-ray source in much more detail. And finally, one could seek a Sunyaev-Zel’dovich signature towards the dark clump; its redshift-independence may be ideal to verify the nature of a high-redshift mass concentration.
But whatever the interpretation at this point, one must bear in mind that weak lensing opens up a new channel for the detection of massive halos in the Universe, so that one should perhaps not be surprised to find a new class of objects, or members of a class of objects with unusual properties. The potential consequences of the existence of such highly underluminous objects may be far reaching: if, besides the known optical and X-ray luminous clusters, a population of far less luminous dark matter halos exist, the normalization of the power spectrum may need to be revised, and the estimate of the mean mass density of the Universe from its luminosity density and an average mass-to-light ratio may change. We also remind the reader that already for one cluster, MS1224, an apparently very high mass-to-light ratio has been inferred by two completely independent studies (Fahlman et al. 1994; Fischer 1999).
* We thank Emmanuel Bertin, Stephane Charlot, Nick Kaiser, Lindsay King and Simon White for useful discussions and suggestions. We are grateful to Stephane Charlot for providing the $`K`$-corrections of elliptical galaxies in the $`I`$ band. This work was supported by the TMR Network “Gravitational Lensing: New Constraints on Cosmology and the Distribution of Dark Matter” of the EC under contract No. ERBFMRX-CT97-0172, the “Sonderforschungsbereich 375-95 für Astro–Teilchenphysik” der Deutschen Forschungsgemeinschaft, and a PROCOPE grant No. 9723878 by the DAAD and the A.P.A.P.E.
# Missing $`2k_F`$ Response for Composite Fermions in Phonon Drag
## Abstract

The response of composite Fermions to large wavevector scattering has been studied through phonon drag measurements. While the response retains qualitative features of the electron system at zero magnetic field, notable discrepancies develop as the system is varied from a half-filled Landau level by changing density or field. These deviations, which appear to be inconsistent with the current picture of composite Fermions, are absent if half-filling is maintained while changing density. There remains, however, a clear deviation from the temperature dependence anticipated for $`2k_F`$ scattering.
Composite Fermions (CF), new quasiparticles initially described as the combination of an electron with an even number of magnetic flux quanta, provide a simplifying physical picture of the fractional quantum Hall effect (FQHE). The particles have also been argued to possess many of the properties of electrons at zero magnetic field, experiencing an effective field which is zero for a half-filled Landau level even though they exist in the presence of extreme magnetic fields. Numerous experimental investigations, including studies of surface acoustic waves (SAW), cyclotron resonance in antidot lattices, activation energies and magnetic focusing, have confirmed the existence of these particles and reveal behavior similar to zero-field electrons. The experiments, in addition, clearly support the existence of a Fermi surface for the particles. A common element of these investigations, however, is that they have generally been limited to small wavevector scattering. A key question for the particles, how they respond to large wavevector scattering, especially across the re-emergent Fermi surface, has not been systematically investigated in experiment. It is this response that the experiments presented here were designed to address.
Access to the large wavevectors required for scattering a CF across the Fermi surface is provided here through phonons. The use of phonons permits the scattering wavevector (q) to be effectively tunable by changing the temperature, T. At low temperatures, only small wavevector acoustic phonons are thermally excited, limiting scattering of CF to small q processes. As the temperature is increased, access to larger wavevector phonons permits larger wavevector scattering. For electrons, the transition to large angle scattering is readily evident due to a sharp cutoff for scattering with q greater than twice the Fermi wavevector (2k<sub>F</sub>). The cutoff results directly from the existence of a Fermi surface and the combined restrictions of momentum and energy conservation. A clear change in temperature dependence, the Bloch-Grüneisen transition, has been directly observed in resistivity measurements and in phonon drag.
A similar cutoff should exist for composite Fermions. Two key elements are required. The first is the presence of a Fermi surface for the particles. The second is that CF must be able to withstand large wavevector scattering, sending a CF across its Fermi surface. While the first condition is well established, the second has not been theoretically investigated in detail, with studies generally limited to small wavevectors and low temperatures. As for electrons, the details of the phonon interaction with CF should not affect the existence of this cutoff, which depends only on the magnitude of the phonon wavevector compared to 2k<sub>F</sub>.
The isolation of phonon scattering in this work is attained through electron drag measurements between remotely spaced parallel two-dimensional electron gas (2DEG) layers. In electron drag, when a current is driven through one of two electrically isolated 2DEGs, interlayer electron-electron (e-e) interactions transfer momentum to the second layer, inducing a voltage in that layer. The drag transresistivity $`\rho _D`$, the ratio of this voltage to the applied current per square, is a direct measure of the interlayer scattering rate. While Coulomb scattering dominates $`\rho _D`$ for closely spaced layers, its strong layer spacing dependence permits interlayer phonon exchange to completely dominate interactions of remote layers.
The samples used in this work, GaAs/AlGaAs double quantum well structures, consist of two 200Å wide quantum wells which are remotely spaced. The bulk of the CF measurements were performed on a sample with a 5000 Å barrier thickness. Each layer has an electron density, n, near $`1.5\times 10^{11}/cm^2`$ as grown, with mobilities approaching $`2\times 10^6cm^2/Vs`$. Individual layer densities were varied through the application of a voltage to an overall top gate or by applying an interlayer bias, with the densities in each layer made equal for all measurements. The large two-terminal resistances present in the samples at high fields demanded particular care, requiring measurement frequencies as low as 0.5 Hz and currents as low as 20 nA. Established tests such as interchanging current and voltage leads, testing current linearity, and ensuring the absence of interlayer leakage and other spurious signals through ground configuration changes all confirm the validity of the measurements. The lack of change in the drag signal upon reversal of the magnetic field indicates that Hall voltages play no role in these measurements. Comparable results were obtained for a second 5000 Å barrier sample and a 2400 Å barrier sample.
The effect of the 2k<sub>F</sub> cutoff for phonon scattering is shown in Fig. 1a for a zero-field phonon drag measurement on a 2400 Å barrier sample in which Coulomb scattering is negligible. Data are plotted as $`\rho _D/T^2`$, revealing a distinct change in temperature dependence and a peak near 2 K. The peak position is known not to change with layer spacing; $`\rho _D`$ for this sample is shown due to significantly reduced signals for the 5000 Å barrier sample at zero field. The peak position, which varies with the size of the Fermi surface (i.e., $`\sqrt{n}`$), quantifies the transition from a strong temperature dependence with q $`\lesssim `$ 2k<sub>F</sub>, to the weaker dependence when q is limited to 2k<sub>F</sub> scattering. The inset plots the relative net momentum carried by phonons of a given in-plane wavevector for both deformation potential and piezoelectric coupling at 3 K. This single-layer calculation, based closely on earlier work, clearly shows the cutoff at 2k<sub>F</sub> is independent of details of the electron-phonon interaction. The temperature of the peak in $`\rho _D/T^2`$ is thus directly related to the phonon wavevector which matches 2k<sub>F</sub>.
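The position of this peak can be estimated directly from the density. For a spin-degenerate 2DEG, $`k_F=\sqrt{2\pi n}`$, and the acoustic phonon energy at $`q=2k_F`$ sets the temperature scale of the crossover. A rough sketch (the sound velocity and the order-unity factor relating the dominant thermal phonon wavevector to $`k_BT`$ are assumptions, not values from the text):

```python
import math

hbar, kB = 1.0546e-34, 1.3807e-23   # J s, J/K
v_s = 5.1e3                          # m/s, assumed GaAs LA sound velocity
n = 1.5e15                           # m^-2, i.e. 1.5e11 cm^-2 as in the samples

kF = math.sqrt(2.0 * math.pi * n)    # Fermi wavevector of a spin-degenerate 2DEG
T0 = hbar * v_s * 2.0 * kF / kB      # kelvin scale of a phonon with q = 2 k_F

# Dominant thermal phonons carry q of order (3-4) kB T / (hbar v_s), so the
# peak in rho_D/T^2 sits a few times below T0, near the ~2 K seen in Fig. 1a.
T_peak_estimate = T0 / 3.8
```

Because both $`k_F`$ and the thermal phonon wavevector enter linearly, this estimate immediately gives the $`T_P\propto \sqrt{n}`$ scaling used throughout the paper.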
Before examining the temperature dependence of phonon drag for composite Fermions, it is necessary to re-establish, at high fields, that phonon scattering dominates $`\rho _D`$. This is explored through measurements made below 1 K (Fig. 1b, inset) on a 5000 Å barrier sample at 13 T, corresponding to a half-filled lowest Landau level ($`\nu =1/2`$). These data are well characterized by a power law, with a best fit of $`\rho _D\propto T^{3.7}`$ (solid line). This exponent is substantially higher than the sub-quadratic dependence established for Coulomb drag of CF and is more consistent with expectations of phonon scattering from thermopower measurements and theoretical calculations. The behavior of $`\rho _D`$ at low temperatures firmly establishes a negligible role for Coulomb scattering in this sample.
Measurements of $`\rho _D/T^2`$ for CF at higher temperatures, shown in Fig. 1b, reveal a behavior remarkably similar to that for zero field electrons at the same density. The transition from a strong to a weak temperature dependence mimics the low field data, with a peak position near but slightly lower than that in Fig. 1a. The behavior indicates a distinct wavevector cutoff in the phonon scattering process. A notable difference is the magnitude of $`\rho _D`$, being significantly larger for CF. This increase is similar to the enhanced scattering of CF generally observed.
While the data confirm the existence of a wavevector cutoff, the temperature of the peak in $`\rho _D/T^2`$, $`T_P`$, is substantially lower than expected. Spin polarization of the CFs results in a larger Fermi surface than at zero field, yielding a peak in $`\rho _D/T^2`$ at a higher temperature for the same phonon system. The expected $`\sqrt{2}`$ increase in the size of the Fermi surface has been established in other measurements and would result in a peak position closer to 3 K as indicated by the arrow in the figure.
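The expected shift follows from Fermi-surface size alone: fully spin-polarized composite Fermions have $`k_F^{CF}=\sqrt{4\pi n}`$ instead of the electron value $`\sqrt{2\pi n}`$, and the peak temperature tracks $`k_F`$. A sketch of this scaling (the 2 K input is the observed zero-field peak of Fig. 1a):

```python
import math

T_peak_electrons = 2.0   # K, observed zero-field peak of rho_D/T^2 (Fig. 1a)

# Fermi wavevectors: spin-degenerate electrons vs fully polarized CFs
scale = math.sqrt(4.0 * math.pi) / math.sqrt(2.0 * math.pi)   # = sqrt(2)

# Since T_P tracks the phonon wavevector matching 2 k_F, one expects ~2.8 K,
# i.e. "closer to 3 K", whereas 1.9 K is measured for the CFs.
T_peak_cf_expected = T_peak_electrons * scale
```

The roughly one-third discrepancy between this expectation and the measured 1.9 K is the central puzzle examined in the remainder of the paper.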
The substantial difference between the measured and anticipated peak position for CF raises the possibility that the q cutoff may not result from the CF Fermi surface. Questions of CF stability, for example, must be considered. Theoretical predictions of the CF binding energy are $`\sim 4`$ K for these densities. The observed peak position, 1.9 K, however, is below this binding energy and well within the range for which CF effects are observable in SAW measurements. The lack of strong FQHE states at these temperatures does not indicate an invalid regime for CFs, but merely the absence of an energy gap. This distinction is evident in recent magnetization measurements.
Another possibility is that the maximum in $`\rho _D/T^2`$ is due to single-particle effects of the electron system in a high magnetic field. For example, the scattering wavevector may have a cutoff determined by the width of the Landau level or the magnetic length. These origins of a cutoff have been argued to be responsible for features observed in earlier thermopower measurements at $`\nu =1/2`$ and ballistic phonon absorption at high magnetic fields, respectively. Both of these mechanisms would result in an increase of the peak position as the field is increased. However, examination of $`\nu =1/4`$ (not shown), another CF state, shows a temperature dependence similar to $`\nu =1/2`$ for a given density, but with a $`10\%`$ lower peak position. This small decrease in $`T_P`$, for a factor of 2 increase in field, clearly contradicts scattering limitations due to the Landau level width or the magnetic length. In addition, the similarity between the peak position for $`\nu =1/2`$ and $`\nu =1/4`$ supports the assertion that composite Fermions are observed.
To explore the origin of the discrepancy in the peak position, $`\rho _D`$ was measured in the presence of an effective magnetic field. Figure 2a shows the effect of varying the system away from $`\nu =1/2`$ by changing the magnetic field with a constant density. A striking element of these measurements is the change in the magnitude of $`\rho _D/T^2`$, which increases by roughly threefold. Another is the variation of the peak position with field. The value of $`T_P`$ has been quantified through a fit in the vicinity of the maximum, with the resultant peak values, shown in the inset, generally insensitive to the functional form of the fit. At fields near and above half filling, $`T_P`$ is proportional to $`\sqrt{B}`$ (solid line), while $`T_P`$ falls below this dependence at lower fields.
A complementary method for the application of an effective magnetic field is explored in Fig. 2b, where the external field is held constant and the density is varied. Significant changes in magnitude continue to be present as the density is varied with the magnetic field fixed at 12.8 T. Compared to the field dependence of Fig. 2a, however, there is substantially less variation in the position of $`T_P`$ (inset), with a weak maximum at half filling. This suggests that half filling, and thus CF, are important in determining the cutoff.
The changes in magnitude and peak position with the application of an effective field are inconsistent with general expectations for CF away from half filling. For example, field variations have been observed to induce cyclotron motion of the composite particles, which experience an effective field equal to the difference of the applied field from that at half filling. Properties related to the Fermi surface of CF should persist for low effective fields, as they do for bare electrons, until the period of cyclotron motion is less than the scattering time. From this perspective, a peak position determined by the size of the Fermi surface should not change over the range of effective fields explored in Fig. 2 and the magnitude should remain relatively constant. It is thus difficult to reconcile the changes in the measured behavior within a simple CF picture.
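In this picture the effective field is $`B^{}=B-B_{1/2}`$, where $`B_{1/2}=2nh/e`$ attaches two flux quanta to each electron. A sketch (the density is an illustrative value, chosen so that $`B_{1/2}`$ reproduces the ~12.8-13 T fields quoted above):

```python
h, e = 6.62607e-34, 1.60218e-19   # Planck constant (J s), electron charge (C)
phi0 = h / e                      # flux quantum h/e, ~4.14e-15 Wb

n = 1.55e15                       # m^-2, illustrative (~1.55e11 cm^-2)
B_half = 2.0 * n * phi0           # field at nu = 1/2: two flux quanta per electron

def effective_field(B):
    """Effective magnetic field experienced by two-flux-quanta composite Fermions."""
    return B - B_half
```

Varying either $`B`$ at fixed $`n`$ (Fig. 2a) or $`n`$ at fixed $`B`$ (Fig. 2b) moves $`B^{}`$ away from zero; the conventional expectation is that Fermi-surface properties survive until the cyclotron period in $`B^{}`$ falls below the scattering time.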
The complex behavior observed motivates consideration of spin effects, though the expected spin-splitting energies are large enough that such effects appear unlikely. Measurements of $`\rho _D`$ with the sample tilted by 22°, matching the perpendicular fields and electron density of Fig. 2a, were indistinguishable from that data in both magnitude and peak position. This rules out a role for spin in the interlayer phonon scattering process.
A common element of the measurements of Fig. 2 is that significant deviations from $`\nu =1/2`$ were made. The complexity of those measurements is greatly reduced if half filling is retained while the density is changed, as shown in Fig. 3, eliminating the effective field. A change in the peak position with density is still evident; however, the large variations in magnitude seen in the measurements of Fig. 2 are now absent, with all densities approaching a common $`\rho _D/T^2`$ at higher temperatures. The peak positions, shown in the inset, are reasonably described by $`T_P\propto \sqrt{n}`$ (solid line), consistent with changes of the size of the CF Fermi surface. This behavior does not result from a simple combination of the individual dependences on field and density observed in Fig. 2.
Comparison of the density dependence of the magnitude of $`\rho _D`$ in Fig. 3 with that of electrons at zero field provides additional support that this data results from a Fermi surface related cutoff of CF scattering. The electronic response, shown in the inset of Fig. 3 for the 2400 Å barrier sample, reflects the general behavior of the CF system. In addition to $`T_P`$ varying with the size of the Fermi surface, both show little density dependence in $`\rho _D/T^2`$ at high temperatures despite the density of the electron measurements spanning a wider range than for CF. The striking similarity of the zero field data with the CF measurements, when restricted to $`\nu =1/2`$, suggests a simpler response in which the CF system mimics that of electrons.
These data raise a number of puzzling questions. The first regards behavior as $`\nu `$ is varied from half-filling. The generally accepted picture of an effective field which has little impact until the CF cyclotron period is less than the scattering time is inconsistent with the considerable changes observed in the density and field dependence of $`\rho _D`$. The origin of these inconsistencies and whether they are related to the large q scattering probed in this work remains an open question.
Another clearly important question involves the position of $`T_P`$ observed at half filling; it is one-third lower in temperature than anticipated from extrapolation of the zero field measurements. Various reasons for this discrepancy may be considered. One possible cause lies in the significant difference in sound velocity between longitudinal and transverse phonons in GaAs layers. The shift of $`T_P`$ observed, however, would require that zero-field electrons interact exclusively with longitudinal phonons, but CF predominantly with transverse phonons. Such behavior is inconsistent with both theoretical investigations of phonon drag and the measured position of $`T_P`$ in the electron system. A second consideration is that the relative contribution of 2k<sub>F</sub> scattering, as compared to smaller q’s, may be substantially weaker for CF than in the electron system. Reducing this contribution could move $`T_P`$ to lower temperatures. This would contradict preliminary numerical calculations done for low energies. Another possibility is that the internal structure of the particles themselves is probed in these large wavevector scattering events. Resolution of these and other questions raised in this work will likely require additional investigation.
In summary, large wavevector scattering of composite Fermions has been investigated through measurements of interlayer phonon drag. The temperature dependence of these measurements implies the existence of a wavevector cutoff, in agreement with qualitative properties of the electron system at zero field. As the CF system is varied from $`\nu =1/2`$, clear changes in magnitude and temperature dependence develop which are inconsistent with current expectations of CF’s. Varying the density but remaining at half filling shows behavior substantially more consistent with the zero field electron system. A clear deviation remains, however, from the temperature dependence anticipated for a wavevector cutoff corresponding to $`2k_F`$ scattering.
# Equilibrium and non-equilibrium effects in relativistic heavy ion collisions.
## Abstract
The hypothesis of local equilibrium (LE) in relativistic heavy ion collisions at energies from AGS to RHIC is checked within a microscopic transport model. We find that kinetic, thermal, and chemical equilibration of the expanding hadronic matter is nearly reached in central collisions at AGS energy for $`t\gtrsim 10`$ fm/$`c`$ in a central cell. At these times the equation of state may be approximated by a simple dependence $`P\simeq (0.12-0.15)\epsilon `$. Increasing deviations of the yields and the energy spectra of hadrons from statistical model values are observed for increasing bombarding energies. The origin of these deviations is traced to the irreversible multiparticle decays of strings and many-body $`(N\ge 3)`$ decays of resonances. The violations of LE indicate that the matter in the cell reaches a steady state instead of idealized equilibrium. The entropy density in the cell is only about 6% smaller than that of the equilibrium state.
The assumption of the creation of a locally equilibrated (LE) hadronic state in ultrarelativistic heavy ion collisions has been the subject of theoretical and experimental efforts during the last decades. Despite this long history, the question remains open. The present analysis employs the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) model to examine the approach to local equilibrium of the hot and dense nuclear matter produced in central heavy ion collisions at energies from AGS to SPS and RHIC.
First, the kinetic equilibration of the system is examined. In order to diminish the number of distorting factors, we choose a cubic cell of volume $`V=125`$ fm<sup>3</sup> centered around the origin of the CM system of the colliding nuclei. Due to the absence of a preferential direction of the collective motion, the collective velocity of the cell is essentially zero. The longitudinal flow in the cell reaches its maximum value at times from $`t=2`$ fm/$`c`$ (RHIC) to $`t=6`$ fm/$`c`$ (AGS). Then it drops and converges to the transverse flow. Disappearance of the flow implies: (i) isotropy of the velocity distributions, which leads to (ii) isotropy of the diagonal elements of the pressure tensor, calculated from the virial theorem,
$$P_{\{x,y,z\}}=\underset{i=h}{\sum }p_{i\{x,y,z\}}^2/3V(m_i^2+p_i^2)^{1/2},$$
(1)
containing the volume of the cell $`V`$ and the mass and the momentum of the $`i`$-th hadron, $`m_i`$ and $`p_i`$, respectively. The time evolution of the pressure in the longitudinal and transverse directions shows (Fig. 1) that kinetic equilibration in the central zone of the reaction takes place at $`t\simeq 10`$ fm/$`c`$ (AGS), 8 fm/$`c`$ (SPS), and 4 fm/$`c`$ (RHIC). Note that the pressure given by the statistical model (SM) \[see Eq. (5)\] is in good agreement with the microscopic results.
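Equation (1) is evaluated directly from the particle content of the cell; a minimal sketch in natural units ($`c=1`$), with hypothetical arrays of hadron masses and momenta as input:

```python
import numpy as np

def pressure_components(m, p, V):
    """Diagonal pressure-tensor elements of Eq. (1):
    P_a = sum_i p_{ia}^2 / (3 V sqrt(m_i^2 + p_i^2)),  a = x, y, z.

    m: shape (N,) hadron masses; p: shape (N, 3) momenta; V: cell volume.
    """
    E = np.sqrt(m**2 + np.sum(p**2, axis=1))          # relativistic energies
    return np.sum(p**2 / (3.0 * V * E[:, None]), axis=0)
```

Kinetic equilibration then shows up as the three components converging to a common value, which is the criterion tracked in Fig. 1.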
The standard procedure is to compare the snapshot of particle yields and spectra in the cell at a given time with those predicted by the statistical thermal model of a hadron gas. Three parameters, namely the energy density $`\epsilon `$, the baryon density $`\rho _\mathrm{B}`$, and the strangeness density $`\rho _\mathrm{S}`$, extracted from the analysis of the cell, are inserted into the equations for an equilibrated ideal gas of hadrons. Then all characteristics of the system in equilibrium, including the yields of different hadronic species, their temperature $`T`$, and chemical potentials $`\mu _\mathrm{B}`$ and $`\mu _\mathrm{S}`$, can be calculated. If the yields and the energy spectra of the hadrons in the cell are sufficiently close to those of the SM, one can take this as an indication of the creation of equilibrated hadronic matter in the central reaction zone.
The particle yields, $`N_i^{\mathrm{SM}}`$, and total energy, $`E_i^{\mathrm{SM}}`$, of the hadron species $`i`$ read
$`N_i^{\mathrm{SM}}`$ $`=`$ $`{\displaystyle \frac{Vg_i}{2\pi ^2\mathrm{}^3}}{\displaystyle \int _0^{\mathrm{}}}p^2f(p,m_i)dp,`$ (3)
$`E_i^{\mathrm{SM}}`$ $`=`$ $`{\displaystyle \frac{Vg_i}{2\pi ^2\mathrm{}^3}}{\displaystyle \int _0^{\mathrm{}}}p^2\sqrt{p^2+m_i^2}f(p,m_i)dp.`$ (4)
Here $`g_i`$ is the degeneracy factor, and the distribution function $`f(p,m_i)`$ is given by Eq. (2).
The hadron pressure and the entropy density are calculated within the SM as
$`P^{\mathrm{SM}}`$ $`=`$ $`{\displaystyle \underset{i}{\sum }}{\displaystyle \frac{g_i}{2\pi ^2\mathrm{}^3}}{\displaystyle \int _0^{\mathrm{}}}p^2{\displaystyle \frac{p^2}{3(p^2+m_i^2)^{1/2}}}f(p,m_i)dp,`$ (5)
$`s^{\mathrm{SM}}`$ $`=`$ $`-{\displaystyle \underset{i}{\sum }}{\displaystyle \frac{g_i}{2\pi ^2\mathrm{}^3}}{\displaystyle \int _0^{\mathrm{}}}f(p,m_i)\left[\mathrm{ln}f(p,m_i)-1\right]p^2dp.`$ (6)
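The integrals (3)-(6) are one-dimensional quadratures once $`f(p,m_i)`$ is specified. Since Eq. (2) is not reproduced in this excerpt, the sketch below assumes the Boltzmann limit at zero chemical potential; this is an approximation for illustration, not the distribution actually used in the SM fit:

```python
import numpy as np

hbarc = 197.327   # MeV fm

def boltzmann_density(m, T, g=1.0, mu=0.0, pmax=3000.0, npts=200001):
    """n = g/(2 pi^2 hbar^3) * int_0^inf p^2 f(p) dp, with f a Boltzmann factor.

    m, T, mu in MeV; returns the density in fm^-3.
    """
    p = np.linspace(0.0, pmax, npts)
    E = np.sqrt(p**2 + m**2)
    f = np.exp(-(E - mu) / T)                      # Boltzmann limit of Eq. (2)
    integral = np.sum(p**2 * f) * (p[1] - p[0])    # simple Riemann sum, MeV^3
    return g / (2.0 * np.pi**2) * integral / hbarc**3

# One pion species at T = 145 MeV gives a density of a few 10^-2 fm^-3.
n_pi = boltzmann_density(m=138.0, T=145.0)
```

Multiplying such densities by the cell volume $`V`$ gives the SM yields compared with the UrQMD cell content in Fig. 3.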
Figure 3 shows the energy spectra of hadronic species obtained from the microscopic calculations, together with the predictions of the SM. At AGS energy the difference between the UrQMD and SM results for baryons lies within 10%. As the initial energy rises from AGS to SPS, the agreement between the models becomes worse. Moreover, even at 10.7 AGeV the deviations of the pion spectra in UrQMD from those of the SM are significant.
The Boltzmann fit to the pion and nucleon energy spectra from the central cell at 160 AGeV shows that the nucleon (pion) “temperature” is about 30 (50) MeV below $`T^{\mathrm{SM}}`$. The subtraction of pions does not decrease the temperature in the SM fit, but leads to an increase of the chemical potential of strange particles.
The yields of nucleons and pions in the central cell are shown in Fig. 3. The agreement between the SM and UrQMD nucleon yields is reasonably good for $`t\gtrsim 10`$ fm/$`c`$. Compared to UrQMD, the statistical model significantly underestimates the number of pions, especially at 160 AGeV. The conditions (iii) and (iv) are not satisfied. Despite the occurrence of a state in which hadrons are in kinetic equilibrium and collective flow is rather small, the hadronic matter is neither in thermal nor in chemical equilibrium. However, the hadron multiplicities in the cell are in good agreement with those of equilibrated infinite matter, simulated within the UrQMD (Fig. 3, circles).
Detailed balance in relativistic heavy ion collisions is broken because of the irreversibility of multiparticle processes and the non-zero lifetimes of resonances. Thus, the matter in the cell is in a steady state rather than in idealized equilibrium. The entropy per baryon ratio stays remarkably constant during the expansion at the quasi-equilibrium stage. This fact supports the applicability of hydrodynamics, which assumes an isentropic expansion of the relativistic hadron liquid. The partial entropy densities, carried separately by the hadron species in the cell (Fig. 5), are close to those in the SM. The total entropy density is only about 6% smaller than the SM total entropy density.
The evolution of the central cell in the $`T`$-$`\mu _\mathrm{B}`$ plane (Fig. 5) indicates that the extraction of a temperature by performing the SM fit to hadron yields and energy spectra is a very delicate procedure. Although the temperatures of hadrons in the steady state are limited to $`T_{lim}\simeq 145`$ MeV, the “apparent” temperature obtained from the fit may turn out to be high enough to reach the region of the quark-hadron phase transition or even the pure QGP phase. To study heavy ion collisions at high energies one has to apply the non-equilibrium thermodynamics of irreversible processes, not equilibrium thermodynamics!
Conclusions. The results of the present study may be summarized as follows.
1. There is a kinetic equilibrium stage of hadron-string matter in the central $`V=125`$ fm<sup>3</sup> cell of relativistic heavy ion collisions at $`t\simeq 8`$ fm/$`c`$.
2. The entropy per baryon ratio remains constant during the time interval $`8\le t\le 18`$ fm/$`c`$. This result supports the application of relativistic hydrodynamics.
3. The differences between the UrQMD and SM results increase with rising bombarding energy, i.e., thermal and chemical equilibrium is not reached. However, the hadron spectra and yields in the cell are consistent with the UrQMD infinite matter calculations.
4. We call this quasi-equilibrium state a steady state. Its origin is traced to the irreversible multiparticle processes and many-body decays of resonances.
# Coulomb distortion of 𝜋⁺/𝜋⁻ as a tool to determine the fireball radius in central high energy heavy ion collisions

Support received in part by CONACyT México under grant I27212-E, by the U.S. National Science Foundation under grant NSF PHY94-21309 and by the U.S. Department of Energy under grants DE-FG02-87ER40328, DE-AC0376SF00098 and DE-FG03-93ER40792.
## Abstract
We compute the Coulomb distortion produced by an expanding and highly charged fireball on the spectra of low transverse momenta and mid rapidity pions produced in central high energy heavy ion collisions. We compare to data on Au+Au at $`11.6A`$ GeV from E866 at the BNL AGS and of Pb+Pb at $`158A`$ GeV from NA44 at the CERN SPS. We match the fireball expansion velocity with the average transverse momentum of protons and find a best fit to the charged pion ratio when the fireball radius is about 10 fm at freeze-out. This value is common to both AGS and SPS data.
An important feature to account for in the analysis of the spectra of secondaries produced in the collision of heavy systems is the presence of a large amount of electric charge. Due to the long-range nature of the electromagnetic interaction, the spectrum of charged particles will be distorted even after freeze-out. For central collisions, this Coulomb effect can be more significant when there is strong stopping and the participant charge in the central rapidity region is an important fraction of the initial charge.
Another feature to consider is that the field-producing charge distribution is in general not static but rather participates in the dynamics responsible for matter expansion after the collision. The combined role played by Coulomb distortions and expansion in the description of charged particle spectra has been the subject of some recent work. Koch and Barz, Bondorf, Gaardhøje and Heiselberg have developed approximate models to describe the situation in which expansion takes place predominantly along the collision axis. In this work, we focus on the description of Coulomb effects on pion spectra from a spherically symmetric expanding source. We compare our calculation to mid-rapidity pions produced in central Au+Au reactions at $`11.6A`$ GeV from E866 at the BNL AGS and in central Pb+Pb reactions at $`158A`$ GeV from NA44 at the CERN SPS. A detailed analysis can be found in Refs. .
Before proceeding on to the model calculation, let us say a few words about the assumption of spherical symmetry. It has been known for some time that the momentum distributions of secondaries are somewhat forward-backward peaked, especially at the SPS, even for central collisions, and this observation can cast some doubt on the validity of a model that assumes a spherically symmetric fireball. However, spherical geometry is not essential to the basic physics and can be relaxed at the expense of additional computing time. Nevertheless, let us provide the following arguments in favor of its use. First, we will be comparing with the transverse momentum distributions at mid-rapidity, where the impact of spherical asymmetry should be less important than near the fragmentation regions. Second, pion interferometry of central Au+Au collisions at the AGS and of central Pb+Pb collisions at the SPS both yield comparable values for the transverse and longitudinal radii at the time of pion freeze-out. These radii are about twice the radius of a cold gold or lead nucleus. Third, as we will later see, the transverse surface of the fireball needs to expand with a speed more than 90% that of light in order to reproduce the average proton transverse momentum. Since the longitudinal surface of the fireball cannot travel faster than the speed of light, this means that in velocity space the fireball is nearly symmetric. These phenomena, although not yet measured at that time, were already known to Landau. The essential insight from his model is not the degree of stopping but rather the point that significant transverse expansion sets in after the longitudinal and transverse radii become comparable in magnitude. Thereafter, from the point of view of a distant observer, the expansion is not as asymmetric as one might originally think. This model was later developed by others, including Cooper, Frye and Schonberg.
A uniformly charged sphere which has a total charge $`Ze`$ and whose radius $`R`$ increases linearly with time $`t`$ from a value $`R_0`$ at time $`t_0`$ at a constant surface speed $`v_s`$ produces an electric potential
$`V(r,t)=\{\begin{array}{cc}Ze/4\pi r,\hfill & r\geq R=v_st\hfill \\ Ze(3R^2-r^2)/8\pi R^3,\hfill & r\leq R=v_st\hfill \end{array}.`$ (3)
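The potential of Eq. (3) is continuous at the moving surface r = R = v_s t. A minimal numerical sketch (Python, Heaviside-Lorentz units with e = 1; the values of Z and v_s anticipate the fit discussed below, and the fm and fm/c scales are purely illustrative):

```python
import math

def V(r, t, Z=120, v_s=0.916):
    """Potential of Eq. (3) in Heaviside-Lorentz units with e = 1:
    a uniformly charged sphere of charge Z expanding as R = v_s * t."""
    R = v_s * t
    if r >= R:
        return Z / (4 * math.pi * r)
    return Z * (3 * R**2 - r**2) / (8 * math.pi * R**3)

# The inside and outside branches agree at the surface r = R:
t = 10.0
R = 0.916 * t
assert abs(V(R, t) - V(0.999999 * R, t)) < 1e-5
```

The potential is largest at the center and falls off as 1/r outside the sphere, as expected for a uniform charge distribution.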
In the center-of-mass frame of the fireball the charge moves radially outwards; hence there is no preferred direction, and consequently the magnetic field produced by this moving charge configuration vanishes. The fireball parameters are related by $`R_0=v_st_0`$. If $`f^\pm (𝐫,𝐩,t)`$ represents the phase space distribution of test particles with charge $`\pm e`$, then, ignoring particle collisions after decoupling, its dynamics is governed by Vlasov’s equation.
$`\left[{\displaystyle \frac{\partial }{\partial t}}+{\displaystyle \frac{𝐩}{E_p}}\cdot \nabla _r\pm e𝐄(𝐫,t)\cdot \nabla _p\right]f^\pm (𝐫,𝐩,t)=0,`$ (4)
where $`E_p=\sqrt{p^2+m^2}`$, $`m`$ is the meson’s mass and $`𝐄(𝐫,t)=-\nabla _rV(r,t)`$ is the time-dependent electric field corresponding to the potential $`V(r,t)`$.
The solution is found by the method of characteristics. This involves solving the classical equations of motion and using the solutions to evolve the initial distribution, taken to be thermal
$`f^\pm (𝐫,𝐩,t_0)=\mathrm{exp}\left\{-\left(E_p\pm eV(r,t_0)\right)/T\right\},`$ (5)
forward in time. The pion’s asymptotic momentum is computed numerically by a sixth order Runge-Kutta method with adaptive step sizes from a set of initial phase-space positions. The final momentum distribution is the result of computing the trajectories for many initial phase space points. The initial radial position was incremented in N<sub>r</sub>=50 steps with spacing $`\mathrm{\Delta }r/R_0`$ = 0.02. The initial momentum was incremented in N<sub>p</sub>=600 steps with spacing $`\mathrm{\Delta }p`$ = 1 MeV. The cosine of the angle between the initial position and momentum vectors was incremented in N<sub>z</sub>=100 steps of size $`\mathrm{\Delta }\mathrm{cos}\theta =0.02`$. Hence the total number of trajectories computed was $`3\times 10^6`$ for each set of initial conditions.
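The classical equations of motion behind the characteristics are dr/dt = p/E_p and dp/dt = ±eE(r,t). A much-simplified sketch of one such trajectory computation, restricted to purely radial 1D motion with a fixed-step RK4 integrator (the paper uses a sixth-order adaptive scheme over the full 3D phase space; the starting radius, momentum, and times below are illustrative, not the paper's grid):

```python
import math

HBARC = 197.327      # MeV fm
ALPHA = 1 / 137.036  # fine-structure constant
M_PI = 139.57        # pion mass, MeV

def eE(r, t, Z=120, v_s=0.916):
    """Radial force e*E_r (MeV/fm) on a unit charge from the uniformly
    charged sphere expanding as R = v_s * t (r in fm, t in fm/c)."""
    R = v_s * t
    if r >= R:
        return Z * ALPHA * HBARC / r**2
    return Z * ALPHA * HBARC * r / R**3

def trajectory(r0, p0, q, t0=5.0, t1=200.0, dt=0.01):
    """Fixed-step RK4 for dr/dt = p/E_p, dp/dt = q*eE(r,t); returns (r, p)."""
    def deriv(t, r, p):
        return p / math.sqrt(p * p + M_PI * M_PI), q * eE(r, t)
    r, p, t = r0, p0, t0
    while t < t1:
        k1r, k1p = deriv(t, r, p)
        k2r, k2p = deriv(t + dt/2, r + dt/2 * k1r, p + dt/2 * k1p)
        k3r, k3p = deriv(t + dt/2, r + dt/2 * k2r, p + dt/2 * k2p)
        k4r, k4p = deriv(t + dt, r + dt * k3r, p + dt * k3p)
        r += dt/6 * (k1r + 2*k2r + 2*k3r + k4r)
        p += dt/6 * (k1p + 2*k2p + 2*k3p + k4p)
        t += dt
    return r, p

# A pi+ launched outward gains momentum from the repulsive field,
# while a pi- launched identically ends with less momentum:
_, p_plus = trajectory(5.0, 100.0, q=+1)
_, p_minus = trajectory(5.0, 100.0, q=-1)
assert p_plus > 100.0 > p_minus
```

This opposite push on the two charge states is precisely the distortion that suppresses the π⁺/π⁻ ratio at low transverse momentum.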
To determine the value of the surface expansion velocity, we match the average transverse momentum, as inferred from our assumption of a uniformly expanding sphere, with the measured one. The result is
$`\langle p_T\rangle =\pi {\displaystyle \frac{2-(2+v_s^2)\sqrt{1-v_s^2}}{4v_s^3}}m_P.`$ (6)
Note that $`v_s`$ should not be interpreted as a hydrodynamic flow velocity; rather, it embodies the combined effects of hydrodynamic flow and thermal motion of the net charge carriers, mainly protons.
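Eq. (6) increases monotonically with the surface speed, so the measured average proton transverse momentum fixes v_s by simple bisection. A sketch (the 825 MeV target is the Pb+Pb proton value quoted in the text; the small-v_s limit ⟨p_T⟩ → (3π/16) m_P v_s serves as a consistency check):

```python
import math

M_P = 938.27  # proton mass, MeV

def mean_pt(v_s):
    """Eq. (6): average transverse momentum (MeV) for a uniformly
    expanding sphere with surface speed v_s (in units of c)."""
    return math.pi * (2 - (2 + v_s**2) * math.sqrt(1 - v_s**2)) / (4 * v_s**3) * M_P

def solve_vs(pt_target, lo=1e-3, hi=0.999):
    """Bisection; mean_pt increases monotonically with v_s."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mean_pt(mid) < pt_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# <p_T> = 825 MeV (the Pb+Pb proton value) gives v_s close to 0.916:
assert abs(solve_vs(825.0) - 0.916) < 0.002
# Small-speed limit: <p_T> -> (3*pi/16) * m_P * v_s
assert abs(mean_pt(0.01) - 3 * math.pi / 16 * M_P * 0.01) < 0.01
```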
For central Pb+Pb collisions at 158$`A`$ GeV, the number of protons per unit rapidity in the central region is on the order of 30. If we consider that the rapidity region spanned by the fireball is between 1 and 5 units, then we take as the effective fireball’s charge $`Z=120`$. To model the primordial distribution, we use an exponential parametrization of $`dN/p^2dp`$ from which we extract an effective temperature $`T_{eff}=110`$ MeV up to $`m_T-m=500`$ MeV. Figure 1 shows this representation in comparison to the transverse mass distributions of positive and negative pions. Both NA44 and NA49 have reported the proton transverse mass distribution in central Pb+Pb collisions at mid-rapidity to be $`dN/p_Tdp_T\propto \mathrm{exp}(-m_T/T_P)`$ with $`T_P`$ = 290 MeV. This corresponds to an average transverse momentum of 825 MeV. The value $`v_s`$ = 0.916 matches the model to the proton spectra according to Eq. (6). The best fit to the $`\pi ^+/\pi ^{-}`$ ratio is obtained for a freeze-out radius for pions of 10 fm, with $`\chi _{p.d.o.f.}^2=1.67`$.
For central Au+Au collisions at 11.6$`A`$ GeV, the number of participating protons may be estimated from the data to be $`Z=116`$. The average proton transverse momentum is 820 MeV, yielding the surface speed $`v_s=0.914`$. The slopes of the $`\pi ^+`$ and $`\pi ^{-}`$ distributions up to $`m_T-m=400`$ MeV are the same to within several MeV; their average is $`111`$ MeV. All of these quantities are remarkably similar to those in central Pb+Pb collisions at the much higher energy of the SPS. The best fit to the $`\pi ^{-}/\pi ^+`$ ratio is obtained for a freeze-out radius for pions of 10 fm, with $`\chi _{p.d.o.f.}^2=0.53`$. The computed ratio is compared to the data in Fig. 2 and the agreement is quite satisfactory.
In conclusion, we have shown that the suppression of the ratio $`\pi ^+/\pi ^{-}`$ in central Pb+Pb collisions at the SPS, as observed by NA44, and in central Au+Au collisions at the AGS, as observed by E866, can be quantitatively understood as a Coulomb effect generated by the electric field of an expanding and highly charged fireball. This ratio provides a good measure of the size of the fireball at decoupling. In principle, a different parametrization of the primordial distribution, such as a two-temperature fit, might lead to an even better representation of the data.
# An Upper Limit on the Reflected Light from the Planet Orbiting the Star 𝜏 Bootis
## 1. INTRODUCTION
Radial velocity surveys of nearby F, G, K and M dwarf stars have revealed eight planets (Mayor & Queloz 1995; Butler et al. 1997; Butler et al. 1998; Fischer et al. 1999; Mayor et al. 1999; Queloz et al. 1999) which orbit their parent stars with a separation of $`a\lesssim 0.1\mathrm{AU}`$. These close-in extrasolar giant planets (CEGPs) may be directly detectable by their reflected light, due to the proximity of the planet to the illuminating star. In this Letter, we present the results of a spectroscopic search for the reflected light component from the planet orbiting the star $`\tau `$ Boo. The motivation to attempt such a detection for a CEGP is strong: It would constitute the first direct detection of a planet orbiting another star. It would yield the orbital inclination, and hence the planetary mass, and would also measure a combination of the planetary radius and albedo, from which a minimum radius can be deduced. Furthermore, it would open the way to direct investigation of the spectrum of the planet itself. Conversely, a low enough upper limit would provide useful constraints on the radius and albedo of the CEGP.
## 2. REFLECTED LIGHT
### 2.1. Photometric Variations
In order to calculate the predicted flux ratio of the planet relative to the star, let $`R_p`$ denote the planetary radius, $`R_s`$ the stellar radius, $`a`$ the physical separation, and $`\alpha `$ the angle between the star and the observer as seen from the planet. The observationally useful quantity is the geometric albedo $`p`$, which is the flux from the planet at $`\alpha =0`$ divided by the flux that would be measured from a Lambert law (i.e., perfectly diffusing; see, for example, Sobolev 1975) disk of the same diameter, located at the distance of the planet. In the case that $`R_p\ll R_s\ll a`$, the ratio $`ϵ`$ of the observed flux from the planet at $`\alpha =0`$ to that of the star is
$$ϵ=p\left(\frac{R_p}{a}\right)^2.$$
(1)
The value of $`p`$ depends on the amplitude and angular dependence of the various sources of scattering in the planetary atmosphere, integrated over the surface of the sphere. For a Lambert law sphere, $`p=2/3`$, whereas for a semi-infinite purely Rayleigh scattering atmosphere, $`p=3/4`$. The geometric albedos at 480 nm of Jupiter, Saturn, Uranus and Neptune are 0.46, 0.39, 0.60, and 0.58, respectively (Karkoschka 1994).
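For scale, Eq. (1) with the $`\tau `$ Boo b parameters adopted later in this Letter gives a contrast of order $`10^4`$; a quick check (Python; the Lambert-sphere value p = 2/3 is an illustrative upper bound, not a measured albedo):

```python
R_JUP = 7.1492e7   # Jupiter equatorial radius, m
AU = 1.496e11      # astronomical unit, m

def flux_ratio(p, R_p, a):
    """Eq. (1): planet-to-star flux ratio at zero phase angle."""
    return p * (R_p / a) ** 2

# tau Boo b with R_p = 1.2 R_Jup, a = 0.0462 AU, and a Lambert-sphere
# geometric albedo p = 2/3:
eps = flux_ratio(2 / 3, 1.2 * R_JUP, 0.0462 * AU)
assert 0.9e-4 < eps < 1.2e-4
```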
We treat the orbit as circular, since the observed orbit of $`\tau `$ Boo has an eccentricity less than 0.02 (Butler et al. 1997). We neglect occultations, since a transit would produce a $`\sim 0.01\mathrm{mag}`$ photometric dip, and is ruled out by Baliunas et al. (1997). We take the orbital phase $`\mathrm{\Phi }\in [0,1]`$ to be 0 at the time of maximum radial velocity of the star. The phase angle $`\alpha \in [0,\pi ]`$ is then defined by
$$\mathrm{cos}\alpha =\mathrm{sin}i\mathrm{sin}2\pi \mathrm{\Phi }$$
(2)
where $`i\in [0,\pi /2]`$ is the orbital inclination. The flux from the planet at a phase angle $`\alpha `$ relative to that at $`\alpha =0`$ is denoted by the phase function $`\varphi (\alpha )`$. In the case of a Lambert law sphere, the phase-dependent flux ratio $`f(\mathrm{\Phi },i)`$ is given by (Sobolev 1975)
$$f(\mathrm{\Phi },i)=ϵ\varphi (\alpha )=p\left(\frac{R_p}{a}\right)^2\left[\frac{\mathrm{sin}\alpha +(\pi \alpha )\mathrm{cos}\alpha }{\pi }\right].$$
(3)
For this analysis, we assume the phase variation of the reflected light is described by equation 3. The phase functions of the gas giants of our solar system are well approximated as Lambert spheres (see, for example, Pollack et al. 1986).
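Equations (2) and (3) are straightforward to evaluate; a short sketch with the natural sanity checks (unity at full phase, zero for the unlit hemisphere):

```python
import math

def phase_angle(Phi, i):
    """Eq. (2): phase angle alpha from orbital phase Phi and inclination i."""
    return math.acos(math.sin(i) * math.sin(2 * math.pi * Phi))

def lambert_phase(alpha):
    """Bracketed factor of Eq. (3): flux relative to full phase (alpha = 0)."""
    return (math.sin(alpha) + (math.pi - alpha) * math.cos(alpha)) / math.pi

# Unity at full phase, zero when only the dark side faces the observer,
# and full phase at Phi = 0.25 for an edge-on (i = pi/2) orbit:
assert abs(lambert_phase(0.0) - 1.0) < 1e-12
assert abs(lambert_phase(math.pi)) < 1e-12
assert phase_angle(0.25, math.pi / 2) < 1e-6
```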
In the case of $`\tau `$ Boo, Baliunas et al. (1997) can exclude a sinusoidal photometric variation at the planetary orbital period with a peak-to-peak amplitude of 0.4 millimag or greater. The predicted variation due to a highly reflective companion of Jupiter size is $`\sim `$ 0.1 millimag. If proposed photometric satellite missions (Matthews 1997; Rouan et al. 1997) can achieve a precision of $`\sim 10\mu \mathrm{mag}`$ with stability over timescales of a few days, they could measure this photometric modulation, as discussed by Charbonneau (1999a).
### 2.2. Spectroscopic Variations
We assume that $`\tau `$ Boo has a stellar mass of $`M_s=1.2M_{\odot }`$, based on its spectral classification as an F7 V star (Perrin et al. 1977). It has a ($`B-V`$) color of 0.48, consistent with the spectral classification. The radial velocity observations (Butler et al. 1997; Marcy 1997) provide the orbital period ($`P=3.3125\mathrm{d}`$), phase ($`T_{\mathrm{\Phi }=0}=2450526.916\mathrm{JD}`$), eccentricity ($`e=0`$) and amplitude ($`K_s=468\mathrm{m}\mathrm{s}^{-1}`$), from which a semi-major axis of $`a=0.0462\mathrm{AU}`$ and a planetary mass of $`M_p=3.89M_{\mathrm{Jup}}/\mathrm{sin}i`$ are calculated. The radial velocity of the planet relative to that of the star is
$$v_p(\mathrm{\Phi },i)=K_s\frac{M_s+M_p}{M_p}\mathrm{cos}2\pi \mathrm{\Phi }.$$
(4)
This has a maximum amplitude $`|v_p(\mathrm{\Phi },i)|\leq 152\mathrm{km}\mathrm{s}^{-1}`$.
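As a consistency check on these numbers, Kepler's third law with the quoted period and stellar mass reproduces the semi-major axis, and Eq. (4) evaluated at sin i = 1 reproduces the maximum relative velocity (a Python sketch using standard values for the physical constants):

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
M_JUP = 1.898e27    # kg
AU = 1.496e11       # m

M_s = 1.2 * M_SUN
P = 3.3125 * 86400.0   # orbital period, s
K_s = 468.0            # stellar velocity amplitude, m/s

# Kepler's third law (the planetary mass is negligible here):
a = (G * M_s * P**2 / (4 * math.pi**2)) ** (1 / 3)
assert abs(a / AU - 0.0462) < 0.0005

# Eq. (4) amplitude at sin(i) = 1, i.e. M_p = 3.89 M_Jup:
M_p = 3.89 * M_JUP
v_p_max = K_s * (M_s + M_p) / M_p
assert abs(v_p_max - 152e3) < 2e3
```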
Thus the spectrum of the system could contain a secondary component which varies in amplitude according to equation 3 and in Doppler-shift according to equation 4. Charbonneau, Jha & Noyes (1998) demonstrate that the effect of the reflected light component on the line profile bisector is not far from current observational limits for a CEGP of high reflectivity. This is an alternate technique which may be used to directly detect or limit the reflected light from a CEGP.
### 2.3. Tidal Locking Effects
Baliunas et al. (1997) use the activity-rotation relation of Noyes et al. (1984) and the mean Ca II flux of $`\tau `$ Boo to predict a stellar rotation period of 5.1 days. Analysis of the observations of the Ca II H & K lines by Baliunas et al. (1997) yields a weakly detected period of 3.3 $`\pm `$ 0.5 d, consistent with the observed orbital period of the planet, implying that the star and planet form a tidally locked system. Marcy et al. (1997) demonstrate that, in the case of $`\tau `$ Boo, a convective envelope of mass $`M_{CE}\sim 0.01M_{\odot }`$ could become tidally locked in less than the age of the system. If so, then there is no relative motion of any point on the surface of the planet relative to any point on the surface of the star. In this case, the planet reflects a non-rotationally-broadened stellar spectrum, with a typical line width dominated by the stellar photospheric convective motions ($`\sim 4\mathrm{km}\mathrm{s}^{-1}`$; Baliunas et al. 1997). Thus, one might expect relatively narrow planetary lines superimposed on much broader stellar lines.
## 3. TARGET SELECTION AND OBSERVATIONS
Several considerations entered into the choice of $`\tau `$ Boo (HR 5185, HD 120136) as the optimal candidate for this experiment. Firstly, the semi-major axis of $`\tau `$ Boo was smaller than that of the other three CEGPs (51 Peg, $`\upsilon `$ And, & $`\rho ^1`$ Cnc) known at the time, which is desirable since the relative amplitude of the reflected light decreases with the square of the planet-star distance. Secondly, the visual brightness of $`\tau `$ Boo is greater than either 51 Peg or $`\rho ^1`$ Cnc. The photon noise of the star is the dominant source of noise in the experiment, and a brighter star allows for a more precise determination of the stellar flux in a given amount of observing time. Thirdly, as discussed above, it may be that the star is rotating with the planetary orbital period. If so, the planetary spectral features would be much sharper and deeper than those of the primary, which might facilitate their separation.
We observed $`\tau `$ Boo for three nights (1997 March 20 to March 22) using the HIRES echelle spectrograph (Vogt et al. 1994) mounted on the Keck-1 Ten-Meter Telescope at the W. M. Keck Observatory located atop Mauna Kea in Hawaii. These nights were carefully selected based on the phase of the companion’s orbit. The spectral range used in this analysis was 465.8 nm to 498.7 nm, and the observations were made at a resolution $`R\equiv \lambda /\delta \lambda `$ of either 60 000 (March 20) or 45 000 (March 21 & 22).
Since the apparent magnitude of $`\tau `$ Boo is 4.5 mag, the high flux from the star would saturate the detector pixels for an exposure time less than the readout time of the CCD. To avoid this readout-time-limited scenario, the cross-disperser was slowly trailed during each observation so as to spread the photons over roughly 30 pixels. This allowed for typical exposure times of 300 seconds, which resulted in a duty cycle of $`\sim 70\%`$. In all, 154 spectra of $`\tau `$ Boo were obtained, with a nightly breakdown of 38 for March 20, 32 for March 21 and 84 for March 22. Cloudy weather degraded the number and quality of the spectra on March 21.
## 4. DATA ANALYSIS
### 4.1. Extraction of the One-Dimensional Spectra
Since the extraction of the one-dimensional spectra from the two-dimensional exposures must be accomplished without introducing systematic errors above the level of $`1\times 10^{-4}`$ per dispersion element, it was necessary to create an entirely new and independent set of extraction codes specific to this experiment. By so doing, we were able to treat the sources of systematic noise particular to the Keck HIRES and these observations, and to maintain the necessary control for identifying sources of contamination as our understanding of the data improved.
To extract the one-dimensional spectra from an individual frame, the following algorithm was applied: The bias was subtracted and the non-linear gain was corrected. A two-dimensional scattered light model was derived by fitting the inter-order scattered light, and subtracted. The two-dimensional flat-field correction was applied, and the orders were extracted by summing along the cross-dispersion direction, making use of windows which we had produced to identify the location of both the spectral orders and the cosmetic defects from internal reflections and a felt-tip pen mark. The one-dimensional spectra were then corrected for cosmic rays by cubic spline interpolation across contaminated pixels. A low amplitude source of high frequency noise in the extracted spectra was corrected for by applying a narrow notch filter in Fourier space. The typical signal-to-noise ratio (SNR) was $`\sim 1500`$ per dispersion element. The wavelength solution was derived from extracted Th & Ar emission line spectra taken throughout the observing run.
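As an illustration of one step of such a pipeline, cosmic-ray-hit pixels can be replaced by interpolating across their good neighbors (a minimal numpy sketch; the pipeline above uses cubic-spline interpolation, linear is shown for brevity, and the spectrum here is synthetic):

```python
import numpy as np

def clean_cosmic_rays(spectrum, bad):
    """Replace flagged pixels by interpolating across good neighbors
    (linear here; the actual pipeline uses a cubic spline)."""
    out = np.asarray(spectrum, dtype=float).copy()
    idx = np.arange(out.size)
    out[bad] = np.interp(idx[bad], idx[~bad], out[~bad])
    return out

spec = np.array([1.00, 1.10, 9.00, 1.30, 1.40])  # pixel 2: cosmic-ray hit
bad = np.array([False, False, True, False, False])
cleaned = clean_cosmic_rays(spec, bad)
assert abs(cleaned[2] - 1.20) < 1e-9
```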
### 4.2. The Model
The model is that the data contain a secondary spectrum, identical to that of the primary, but Doppler-shifted due to the orbital motion of the planet and varying in amplitude with the angle subtended between the star, the planet and the observer. The key to the method is to first produce a stellar template spectrum, and then make use of the orbital parameters from the radial velocity observations to calculate a model for a given observation taken at a particular orbital phase. The methods we briefly describe here will be presented in detail in an upcoming paper (Charbonneau 1999b).
The high SNR stellar template spectrum was produced for each of the two instrumental resolutions by combining all of the extracted spectra. Initially, a high SNR spectrum was chosen and an optimized model was found which corrected each observation to this reference (allowing for variations in the wavelength solution and instrumental profile, and low-frequency spatial variations of the continuum). A summed stellar template spectrum was produced, and this process was iterated twice, beyond which point the errors were no longer significantly reduced by further iteration. The errors in the summed stellar template were 1.2 times the expectation from photon noise, indicating a precision of $`1\times 10^{-4}`$ per dispersion element. We note that this may well constitute the most precise visible stellar spectrum of any star other than the Sun.
For each observed spectrum, we first modify the stellar template spectrum in order to correct for the aforementioned variations in the wavelength solution and instrumental profile, and low-frequency spatial variations of the continuum. Note that we wish to interpolate the stellar template spectrum, and not perform the reverse procedure and interpolate the observed spectra, since the stellar template spectrum is at a much higher SNR. Then, if we denote by $`S`$ this modified stellar template spectrum and by $`\lambda `$ the wavelength solution, the model at a given pixel $`j`$ of an observed spectrum taken at an orbital phase $`\mathrm{\Phi }`$ is described by
$$M_j=\frac{S\left(\lambda _j\right)+ϵ\varphi (\mathrm{\Phi },i)S\left(\lambda _j\left[1+\frac{v_p(\mathrm{\Phi },i)}{c}\right]\right)}{1+ϵ\varphi (\mathrm{\Phi },i)}$$
(5)
The two unknown parameters are $`\{ϵ,i\}`$. The situation in which there is no reflected light from the planet is equivalent to $`ϵ=0`$. In this case, the observed spectra are best fit as replicated stellar spectra.
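A minimal numerical version of Eq. (5), assuming the template and observation share a wavelength grid (the Gaussian absorption line is synthetic, and the real analysis additionally corrects for instrumental-profile and continuum variations):

```python
import numpy as np

C_KMS = 2.998e5  # speed of light, km/s

def model_spectrum(wave, template, eps_phi, v_p):
    """Eq. (5): template plus a Doppler-shifted copy of itself,
    weighted by the phase-dependent flux ratio eps*phi (v_p in km/s)."""
    shifted = np.interp(wave * (1 + v_p / C_KMS), wave, template)
    return (template + eps_phi * shifted) / (1 + eps_phi)

wave = np.linspace(465.8, 498.7, 2048)  # nm, the observed range
template = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 480.0) / 0.02) ** 2)

# With eps = 0 the model reduces to the bare template:
assert np.allclose(model_spectrum(wave, template, 0.0, 100.0), template)
```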
As noted earlier, the stellar rotation period may be the orbital period of the planet, and hence the reflected spectrum may be composed of non-rotationally-broadened lines. The instrumental resolution will smear all spectral lines to a width of $`\sim 7\mathrm{km}\mathrm{s}^{-1}`$. Several exposures of the sharp-lined F8 V star 36 UMa (HR 4112, HD 90839, $`V=4.84`$, $`B-V=0.52`$) were combined and corrected to the Doppler-shift of $`\tau `$ Boo to produce a stellar template spectrum, $`S^{\prime }`$. The spectral differences between an F7 V and an F8 V star are insignificant for the purposes of this analysis. The spectrum of 36 UMa serves as an excellent mock-up for the non-rotationally-broadened spectrum of $`\tau `$ Boo and includes the instrumental effects. Thus we also investigated the model
$$M_j^{\prime }=\frac{S\left(\lambda _j\right)+ϵ\varphi (\mathrm{\Phi },i)\gamma S^{\prime }\left(\lambda _j\left[1+\frac{v_p(\mathrm{\Phi },i)}{c}\right]\right)}{1+ϵ\varphi (\mathrm{\Phi },i)}$$
(6)
where $`\gamma `$ is a normalization factor.
The model was evaluated by calculating the $`\chi ^2`$ parameter as a function of $`\{ϵ,i\}`$. The minimum $`\chi _{\mathrm{min}}^2`$ is subtracted off to define $`\mathrm{\Delta }\chi ^2=\chi ^2-\chi _{\mathrm{min}}^2`$. The confidence levels in the allowed values of the parameters are described by drawing contours of fixed $`\mathrm{\Delta }\chi ^2`$ at a desired set of significance levels. The confidence levels were tested for a given choice of $`\{ϵ,i\}`$ by directly injecting a reflected light secondary at the correct amplitude and Doppler-shift into each observed spectrum. At $`ϵ\sim 10^{-3}`$ and high inclination, the secondary can be detected at the 99% confidence level with only one spectrum. At $`ϵ\sim 10^{-4}`$, the planet is recovered only by considering all of the spectra, and the uncertainty in the parameters is significantly greater. Tests showed that the planet could be recovered for $`i\gtrsim 10^{\circ }`$.
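The Δχ² construction itself is simple; a toy sketch over a small family of models (the data and models here are synthetic, stand-ins for the observed and model spectra):

```python
import numpy as np

def delta_chi2(data, sigma, models):
    """Chi-squared over a grid of models, with the minimum subtracted off."""
    chi2 = np.array([np.sum(((data - m) / sigma) ** 2) for m in models])
    return chi2 - chi2.min()

# Toy example: noisy flat data; the true model sits at Delta-chi2 = 0
# while models offset by 3 sigma per pixel are strongly disfavored:
rng = np.random.default_rng(0)
truth = np.ones(100)
data = truth + 0.01 * rng.standard_normal(100)
d = delta_chi2(data, 0.01, [truth - 0.03, truth, truth + 0.03])
assert d[1] == 0.0 and d[0] > 0 and d[2] > 0
```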
A second test was provided by the detection of solar contamination employing a model similar to the one described in equation 6, but with the modification that the secondary spectrum is at a constant (but unknown) amplitude and Doppler-shift. Solar contamination was detected at the Doppler-shift between the Sun and $`\tau `$ Boo, and at a relative amplitude of $`\sim 10^{-3}`$, in the spectra taken on March 21. The source of this contamination was reflection of the solar spectrum off the Moon and subsequently off the clouds which were present throughout the night. The exclusion of the contaminated spectra from the reflected light analysis did not greatly reduce the statistical significance, as these spectra contained only 10% of the photons of the entire data set.
## 5. RESULTS AND DISCUSSION
We find no evidence for a highly reflective planet orbiting $`\tau `$ Boo. For $`i\gtrsim 10^{\circ }`$, we can constrain the reflected flux ratio to $`ϵ<8\times 10^{-5}`$ at the 99% confidence level, under the assumption that the reflected light spectrum is a copy of the stellar spectrum. For $`i\gtrsim 70^{\circ }`$, this improves to $`ϵ<5\times 10^{-5}`$. Assuming a planetary radius of 1.2 $`R_{\mathrm{Jup}}`$ (Guillot et al. 1996), this limits the geometric albedo to $`p<0.3`$. Figure 1 shows the precise limit of the reflected light amplitude as a function of orbital inclination. Under the assumption that the secondary reflects a non-rotationally-broadened version of the stellar spectrum, this limit becomes stronger for high inclinations. The particular shape of a given confidence level in Figure 1 results from the interplay of the orbital phases and statistical weights of the set of spectra. The upper limit imposed is set primarily by the last night of observing (March 22), when the planet was near a phase of $`\mathrm{\Phi }=0`$. The dip down to stronger constraints on the flux ratio at an inclination $`i\sim 15^{\circ }`$ results from the first night of observing (March 20), when the planet was near inferior conjunction: only if the planet is at low inclinations would it be expected to contribute a reasonable reflected light signal and hence allow us to significantly differentiate between models.
At very low inclinations ($`i\lesssim 10^{\circ }`$), this experiment is not able to exclude even very bright companions, due to both the lack of a significant Doppler-shift between the primary and the secondary, and the lack of a phase variation in the light from the secondary. However, these low inclination orbits may be excluded under a further consideration: if the axis defined by the stellar rotation is the same as that of the orbit of the planet, then the observed $`v\mathrm{sin}i\sim 15\mathrm{km}\mathrm{s}^{-1}`$ for the star would imply a true rotational velocity of greater than 50 $`\mathrm{km}\mathrm{s}^{-1}`$ for $`i\lesssim 17^{\circ }`$. Such high rotational velocities are not observed (Gray 1982) for main-sequence F7 stars. High inclination orbits can be excluded by the lack of eclipses in photometric monitoring. Baliunas et al. (1997) exclude $`i\gtrsim 83^{\circ }`$. This is consistent with our experiment, as we find no evidence for a companion at these high inclinations.
We reiterate that the derivation of an upper limit for the geometric albedo requires the assumption of a value for the planetary radius (1.2 $`R_{\mathrm{Jup}}`$) and a functional form for the phase variation (a Lambert law sphere). If the actual values are significantly different from these, then the upper limit on the geometric albedo is modified as well. For example, assuming a smaller planetary radius would permit a larger albedo (see equation 1).
Published predictions of the albedo of CEGPs vary by many orders of magnitude, and are highly sensitive to the presence of condensates in the planetary atmosphere. Burrows & Sharp (1999) consider cloud formation and depletion by rainout, and demonstrate that $`\mathrm{MgSiO}_3`$ will be an abundant condensate at the effective temperature of $`\tau `$ Boo b ($`\sim 1500\mathrm{K}`$). Marley et al. (1999) calculate both cloud-free and silicate cloud atmospheres and predict $`0.35\lesssim p(480\mathrm{nm})\lesssim 0.55`$ for an EGP with a temperature of 1000 K, which is greater than our upper limit of $`p(480\mathrm{nm})=0.3`$. They neglect the effects of stellar insolation on the model atmosphere. Seager & Sasselov (1998) explicitly include the stellar flux, solve the equation of radiative transfer through a model atmosphere of $`\tau `$ Boo b, and predict $`p(480\mathrm{nm})\sim 0.0002`$. The low albedo is due in part to the absorption of photons by TiO in the atmosphere. However, it may be that the TiO forms and rains out, and thus is not an important factor. Including the presence of $`\mathrm{MgSiO}_3`$ clouds, Seager & Sasselov predict a larger (but still very dark) albedo of $`p(480\mathrm{nm})\sim 0.003`$. The reflectivity of the $`\mathrm{MgSiO}_3`$ grains at a given wavelength is highly dependent on the grain size relative to the wavelength of light. Burrows & Sharp (1999) also predict that other condensates (such as Fe) may be present at these temperatures. If iron droplets are a significant condensate, the resulting albedo would be very dark due to the high absorption at optical wavelengths. Given the current uncertainty in the models, there are many reasonable model planetary atmospheres which are consistent with our upper limit.
We have achieved the current upper limit using only a limited spectral range, and data obtained when the planet was far from opposition. It is restricted by the photon noise of the data set, not by systematic errors. By expanding the spectral range and observing on several nights when the planet is near opposition, it would be possible to significantly reduce this upper limit. It may be advantageous to conduct this experiment at shorter wavelengths, since Seager & Sasselov (1998) predict a dramatic rise in the albedo shortwards of 420 nm.
We gratefully acknowledge the NASA/Keck Time Assignment Committee for the observing time allocation. This work was supported in part by NASA Grant NAG5-75005.
# Interpretation of ∼35 Hz QPO in the Atoll Source 4U 1702-42 as a Low Branch of the Keplerian Oscillations Under the Influence of the Coriolis Force
## 1 Introduction
The discovery of kilohertz quasiperiodic oscillations (QPOs) in low mass X-ray neutron star binaries (Strohmayer et al. 1996; van der Klis et al. 1996) was followed by similar results for nineteen sources (van der Klis et al. 1998; Strohmayer, Swank and Zhang 1998). For most of them, the Rossi X-Ray Timing Explorer (RXTE) mission showed the persistence of twin peaks in the spectrum ($`400-1200`$ Hz). In the lower part of the spectrum of Sco X-1, van der Klis et al. (1997) found two branches at $`\sim 45`$ and $`\sim 90`$ Hz which correlate with the frequencies of the above-mentioned twin peaks. The nature of the twin peaks has been discussed intensively. The apparent constancy of the difference $`\mathrm{\Delta }\nu `$ for the twin peaks in some sources led to the beat-frequency interpretation (a concept originated by Alpar and Shaham 1985). Below, we describe difficulties of this model following the recent assessment by Mendez and van der Klis (1999). Within the beat-frequency model, the higher peak in the kHz range is identified with the Keplerian frequency (van der Klis et al. 1996). The lower twin peak in this model occurs as a result of beating between the high peak and the spin frequency of the neutron star $`\nu _{spin}`$, which is believed to be observed during type 1 X-ray bursts as $`\nu _{burst}`$ (or half that) (Strohmayer et al. 1996; Miller, Lamb & Psaltis 1998). Within the original beat-frequency model, $`\mathrm{\Delta }\nu `$ and $`\nu _{spin}`$ should remain constant. The observed $`20-30\%`$ variation of $`\mathrm{\Delta }\nu `$ (van der Klis et al. 1997; Mendez et al. 1998; Ford et al. 1998; Mendez & van der Klis 1999) undermined the beat-frequency interpretation. Convincingly, Mendez and van der Klis (1999) showed that for the atoll source 4U 1728-34, $`\mathrm{\Delta }\nu `$ is not equal to $`\nu _{burst}`$ even for the lowest inferred mass accretion rate.
In their words, this seems to rule out the simple beat-frequency interpretation of the kHz QPOs in LMXBs and some of the modifications introduced to explain the results of a variable $`\mathrm{\Delta }\nu `$.
A completely different paradigm has been proposed by Osherovich and Titarchuk (1999) and by Titarchuk and Osherovich (1999) (hereafter OT99 and TO99, respectively) to explain QPOs in neutron stars. The new model is based on the idea that the twin peaks occur as a result of the Coriolis force imposed on a one-dimensional Keplerian oscillator. The second section of our Letter contains the assumptions and predictions of the model. Verification of the predictions concerning the low frequency branch for the source 4U 1702-42 is presented in section 3. Discussion and summary follow in the last section.
## 2 Oscillations in the Disk and the Magnetosphere Surrounding the Neutron Star
The following assumptions constitute our model (OT99 and TO99):
a) The lower frequency of the twin kHz QPO is the Keplerian frequency
$$\nu _K=\frac{1}{2\pi }\left(\frac{GM}{R^3}\right)^{1/2}$$
(1)
where G is the gravitational constant, M is the mass of the neutron star and R is the radius of the corresponding Keplerian orbit.
b) There is a boundary layer: transition region between the NS surface and the first Keplerian orbit. Within this layer, the radial viscous oscillations $`\nu _\mathrm{v}`$ are maintained. There is also a diffusive propagation process of the perturbations generated in this layer and characterized by a break frequency $`\nu _b`$ related to $`\nu _\mathrm{v}`$. Outside of this layer, the disk has Keplerian radial oscillations with $`\nu _K`$ (see also Titarchuk, Lapidus & Muslimov 1998).
c) The magnetosphere which surrounds the neutron star has a rotational frequency $`𝛀`$ which is not perpendicular to the disk.
d) Inhomogeneities (hot blobs) thrown into the vicinity of the disk participate in the radial Keplerian oscillations but simultaneously are subjected to the Coriolis force associated with the differential rotation of the magnetosphere.
The consequences of the model are straightforward. The problem of a linear oscillator in the frame of reference rotating with rotational frequency $`𝛀`$ is known to have an exact solution describing two branches of oscillations
$$\nu _h^2=\nu _K^2+\left(\frac{\mathrm{\Omega }}{\pi }\right)^2$$
(2)
$$\nu _L=(\mathrm{\Omega }/\pi )(\nu _K/\nu _h)\mathrm{sin}\delta $$
(3)
where $`\delta `$ is the angle between $`𝛀`$ and the plane of the Keplerian oscillations. Formulas (2) and (3) are derived under the assumption of small $`\delta `$ (OT99 and references therein). Thus, the high frequency of the observed twin kHz peaks has been interpreted as the upper hybrid frequency $`\nu _h`$, or the frequency of the high branch of the Keplerian oscillator under the influence of the Coriolis force associated with magnetospheric rotation. The existence of the lower branch ($`\nu _L`$) is a prediction of the model. Oscillations at $`\sim 45`$ Hz and $`\sim 90`$ Hz in Sco X-1 reported by van der Klis et al. (1997) are found to fit formula (3) for $`\delta =5.5\pm 0.5^o`$ when interpreted as the 1st and 2nd harmonics of $`\nu _L`$ (OT99, TO99). In the low part of the QPO spectra, according to TO99, there are two more frequencies (namely the break frequency $`\nu _b`$ and the frequency of viscous oscillations $`\nu _\mathrm{v}`$) related to each other by
$$\nu _b=0.041\nu _\mathrm{v}^{1.61}.$$
(4)
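Equations (2)–(4) are simple enough to tabulate directly. The following Python sketch is our own illustration (the function names and the example values are not from the paper); it evaluates the upper hybrid, low-branch, and break frequencies for a given Keplerian frequency:

```python
import math

def nu_h(nu_k, omega_over_2pi):
    """Upper hybrid frequency, Eq. (2): nu_h^2 = nu_K^2 + (Omega/pi)^2."""
    return math.hypot(nu_k, 2.0 * omega_over_2pi)   # Omega/pi = 2*(Omega/2pi)

def nu_l(nu_k, omega_over_2pi, delta_deg):
    """Low-branch frequency, Eq. (3): nu_L = (Omega/pi)(nu_K/nu_h) sin(delta)."""
    return (2.0 * omega_over_2pi * (nu_k / nu_h(nu_k, omega_over_2pi))
            * math.sin(math.radians(delta_deg)))

def nu_b(nu_v):
    """Break frequency, Eq. (4): nu_b = 0.041 nu_v^1.61 (frequencies in Hz)."""
    return 0.041 * nu_v ** 1.61

# Illustrative inputs: nu_K = 770 Hz, Omega/2pi = 380 Hz, delta = 4 deg
print(nu_h(770.0, 380.0))       # ~1082 Hz
print(nu_l(770.0, 380.0, 4.0))  # a few tens of Hz
```

With these inputs the low branch comes out near 38 Hz, the same range as the oscillations discussed for 4U 1702-42 below.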
We now consider the application of our model to source 4U 1702-42 studied in detail by Markwardt, Strohmayer & Swank (1999).
## 3 Determination of $`\delta `$ for the Source 4U 1702-42
For Sco X-1, we have shown that the solid body rotation $`\mathrm{\Omega }=\mathrm{\Omega }_0=const`$ is a reasonable first order approximation (OT99). Theoretically, $`\mathrm{\Omega }`$ depends on the magnetic structure of the neutron star’s magnetosphere. Within the second order approximation, the $`\mathrm{\Omega }`$ profile has a slow variation as a function of $`\nu _K`$, which we modeled for Sco X-1 and for the source 4U 1608-52 in OT99. For the source 4U 1702-42, there are not enough simultaneous measurements of $`\nu _h`$ and $`\nu _K`$ to reconstruct the $`\mathrm{\Omega }`$ profile. Thus, we restrict ourselves to the first order approximation and adopt $`\mathrm{\Omega }/2\pi =\mathrm{\Omega }_0/2\pi =380\pm 7`$ Hz, found in OT99. The observed $`\nu _K`$ and $`\nu _h`$ used in OT99 for the calculation of $`\mathrm{\Omega }_0/2\pi `$ according to formula (2) are presented in Table 1 which contains the data kindly provided by C. Markwardt. Then from formulas (2) and (3) we have the expression for $`\delta `$, namely
$$\delta =\mathrm{arcsin}\left[(\pi /\mathrm{\Omega }_0)(\nu _K^2+\mathrm{\Omega }_0^2/\pi ^2)^{1/2}(\nu _L/\nu _K)\right]$$
(5)
where $`\nu _K`$ is the observed Keplerian frequency (the second from the top kHz QPO) and $`\nu _L`$ is the alleged frequency of the low Keplerian branch. Within the first order approximation, the angle $`\delta `$ should stay the same for all $`\nu _K`$, since $`\delta `$ is effectively the angle between the equatorial plane of the disk and the plane of the magnetic equator. The first two columns of Table 2 present frequencies observed by Markwardt et al. (1999) for the source 4U 1702-42 with the corresponding date of the measurements. These two columns repeat the first two columns of the table in Markwardt et al. (1999) with the addition of data for July 30, 1997 taken from the text of the same paper. In the third column of our Table 2, we give our theoretical interpretation for the observed frequency peaks: K stands for the Keplerian frequency, L for the frequency of the lower branch of the Keplerian frequency, b for the break frequency. According to TO99, the relation between the frequency of viscous oscillations $`\nu _\mathrm{v}`$ (symbol v in our table) and the break frequency $`\nu _b`$ (Eq. 4), together with equation (5), will serve as a tool for the identification of $`\nu _b`$ and $`\nu _\mathrm{v}`$. The last column of Table 2 contains $`\nu _h`$ calculated according to formula (2) with $`\mathrm{\Omega }_0/2\pi =380`$ Hz and also the angle $`\delta `$ (calculated according to formula (5) for those cases when $`\nu _K`$ and $`\nu _L`$ are measured simultaneously). The resulting values of $`\delta `$ are shown in Figure 1. Indeed, the angle $`\delta `$ shows little variation with $`\nu _K`$ and, as expected, is rather small
$$\delta =3.9^o\pm 0.2^o$$
(6)
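As an arithmetic check of Eq. (5), the short Python function below (our own illustration, not the authors' code) reproduces the angle quoted later for the July 30 entries of Table 2:

```python
import math

def delta_deg(nu_k, nu_l, omega0_over_2pi=380.0):
    """Eq. (5): delta = arcsin[(pi/Omega_0)(nu_K^2 + Omega_0^2/pi^2)^{1/2}(nu_L/nu_K)].
    All frequencies in Hz; returns delta in degrees."""
    op = 2.0 * omega0_over_2pi          # Omega_0/pi in Hz
    nu_h = math.hypot(nu_k, op)         # Eq. (2)
    return math.degrees(math.asin(nu_h * nu_l / (op * nu_k)))

# July 30 entries from Table 2: nu_K = 769 Hz, nu_L = 40.1 Hz
print(delta_deg(769.0, 40.1))  # ~4.25 deg, i.e. the quoted 4.3 +/- 0.1 deg
```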
With knowledge of $`\delta `$ for the source 4U 1702-42, we attempted to interpret the remaining observed frequencies in Table 2. For July 26.60-26.93, $`\nu _K`$ and $`\nu _h`$ are not present. Three frequencies 10.8, 32.5 and 80.1 Hz were observed on this day. If 32.5 Hz were $`\nu _L`$, then the second harmonic would be $`2\nu _L=65`$ Hz. But instead we have 80.1 Hz, which is significantly higher than the expected $`2\nu _L`$. On the other hand, from the classification in Figure 4 of TO99, we know that in the vicinity of $`\nu _K=800`$–$`900`$ Hz, the two frequencies $`\nu _L`$ and $`\nu _\mathrm{v}`$ may come close to one another. Thus assuming that $`\nu _\mathrm{v}=32.5`$ Hz, we find that $`\nu _b=10.8`$ Hz satisfies the theoretical relation (4) within the observational errors. We identify $`2\nu _L=80.1`$ Hz and also $`2\nu _L=85.5`$ Hz for July 21 and $`\nu _b=12`$ Hz for the same day. From formula (3) we derive $`\nu _K`$ for both cases assuming the fixed $`\delta `$ and $`\mathrm{\Omega }/2\pi =380`$ Hz. This identification holds best for $`\delta =4.3^o\pm 0.1^o`$, which is the angle derived for $`\nu _K=769`$ Hz and $`\nu _L=40.1`$ Hz on July 30 (see the last line of Table 2). The consistency of our identification is shown in Figure 2 where $`\nu _h`$, $`\nu _L`$, $`2\nu _L`$ and $`\nu _b`$ are plotted as functions of $`\nu _K`$ (solid lines are theoretical). Figure 2 shows that, as expected, $`\mathrm{\Delta }\nu =\nu _h-\nu _K`$ decreases as $`\nu _K`$ increases. With the small number of observational points, the theoretical curves in Figure 2 are, in fact, a prediction of our model which can be viewed as a challenge for those observers who choose to extend the observational base for the source 4U 1702-42 to verify our model. It is possible that the difference between $`\delta =3.8^o`$ and the observed $`\delta =4.3^o`$ can be attributed to the $`\mathrm{\Omega }=const`$ assumption.
Further measurements of $`\nu _h`$ and $`\nu _K`$ should allow reconstruction of the $`\mathrm{\Omega }`$ profile and improve our predictive capability.
## 4 Discussion and Conclusions
Our model (OT99 and TO99) reveals the physical nature of the $`\sim 35`$ Hz oscillations in the source 4U 1702-42. As a low Keplerian branch of oscillations in the rotating frame of reference, this phenomenon should have an observational invariant, namely the angle $`\delta `$. Within the first order approximation this angle can be viewed as a global parameter describing the inclination of the magnetospheric equator to the equatorial plane of the disk. Measured locally for different radial distances (therefore different $`\nu _K`$), $`\delta `$ may vary considerably unless indeed the observed oscillations correspond to the predicted low Keplerian branch. The constancy of $`\delta `$ shown in Figure 1 allows us to interpret the $`\sim 35`$ Hz oscillations as the low branch described by equation (3). Knowledge of the angle $`\delta `$ is essential for the classification of the QPO resonances and the understanding of the physical nature of this phenomenon. The parameter $`\delta `$ is critical in evaluating the differences between the spectra of different sources. For Sco X-1, oscillations with frequency $`\sim 45`$ Hz are identified as belonging to the $`\nu _L`$ branch (OT99). The higher observed frequencies of the $`\nu _L`$ for Sco X-1 are mainly a result of a larger angle $`\delta =5.5^o\pm 0.5^o`$.
We believe that the angle $`\delta `$ as a fundamental geometric parameter of the neutron star magnetosphere will be found eventually for other sources with kHz QPO. The source 4U 1702-42 is the second source for which $`\delta `$ has been inferred.
The authors are grateful to J. Fainberg and R.G. Stone for discussions and suggestions. The comments of the referee which led to an essential improvement of the paper are appreciated.
# Local Time-Reversal Symmetry Breaking in 𝑑_{𝑥²-𝑦²} Superconductors
## Abstract
We show that an isolated impurity in a spin singlet $`d_{x^2y^2}`$ superconductor generates a $`d_{xy}`$ order parameter with locally broken time-reversal symmetry. The origin of this effect is a coupling between the $`d_{x^2y^2}`$ and the $`d_{xy}`$ order parameter induced by spin-orbit scattering off the impurity. The signature of locally broken time-reversal symmetry is an induced orbital charge current near the impurity, which generates a localized magnetic field in the vicinity of the impurity. We present a microscopic theory for the impurity induced $`d_{xy}`$ component, discuss its spatial structure as well as the pattern of induced current and local magnetic field near the localized impurity spin.
There is now strong evidence to support the identification of the superconducting state of many of the high T<sub>c</sub> cuprates with a spin-singlet pairing amplitude having “d-wave” orbital symmetry, or more precisely $`d_{x^2y^2}`$ symmetry. This phase preserves time-reversal ($`𝒯`$) symmetry, but changes sign under reflection along the \[110\] and \[$`\overline{1}`$10\] mirror planes, as well as under $`\pi /2`$-rotations in a tetragonal crystal. As a consequence, $`d_{x^2y^2}`$ pairing correlations are particularly sensitive to scattering of quasiparticles on the Fermi surface. In this article we show that an isolated impurity in a spin singlet $`d_{x^2y^2}`$ superconductor generates a complex $`d_{xy}`$ order parameter (OP) with locally broken $`𝒯`$ symmetry; the signature of this effect is an induced orbital charge current near the impurity and a localized magnetic field in the vicinity of the impurity.
Atomic scale impurities, or defects, scatter conduction electrons which leads to local suppression of the superconducting OP (pair-breaking) near the impurity. The mechanism responsible for pair-breaking is the formation of quasiparticle states near the Fermi level which are bound to the impurity by Andreev scattering. The corresponding reduction in the spectral weight of the pair condensate is responsible for pair-breaking. The existence of quasiparticle states near the Fermi level can also lead to local Fermi-surface instabilities and mixing of order parameters with different symmetry. Low temperature phase transitions associated with a secondary OP may provide new information on the mechanism(s) for pairing in unconventional superconductors, while impurity-induced mixing of the $`d_{x^2y^2}`$ OP with an OP of different symmetry can provide direct information on the atomic structure of the impurity.
Recent transport experiments report evidence for low temperature phases associated with a secondary OP at surfaces ($`T_s\approx 8\text{K}`$). This was interpreted in terms of a surface phase transition to a $`d_{x^2y^2}+is`$ state with spontaneously broken $`𝒯`$-symmetry. Bulk transport measurements on $`\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_{8+\delta }`$ (Bi-2212) show a pronounced drop in the thermal conductivity at $`T^{}\approx 150\mathrm{mK}\ll T_c\approx 80\mathrm{K}`$ in Ni doped Bi-2212. This anomaly was interpreted as the signature of a second superconducting phase with a fully gapped quasiparticle spectrum and a mixed symmetry OP of the form $`d_{x^2y^2}+id_{xy}`$. This phase was proposed to arise from a coupling of the orbital momentum of the conduction electron with the spin of the magnetic impurity, $`\mathcal{H}_{\text{so}}=\int 𝑑𝐫\psi _\alpha ^{}(𝐫)v(r)𝐋_{\text{orbit}}\cdot 𝐒_{\text{imp}}\psi _\alpha (𝐫)`$. Measurements of the spin-orbit coupling energy for Ni<sup>2+</sup> ions indicate that it is a few percent of the nonmagnetic and exchange channels. In this model the $`id_{xy}`$ OP is induced at a temperature above the second phase transition, $`T^{}<T<T_c`$; the low temperature transition is argued to be ordering of the impurity-induced “patches” of the local $`d_{x^2y^2}\pm id_{xy}`$ order. Above $`T^{}`$ patches with randomly fluctuating internal phase destroy the long range order, $`\langle d_{xy}\rangle =0`$, but preserve $`\langle |d_{xy}|^2\rangle \ne 0`$. Thus, the local structure associated with $`d_{x^2y^2}\pm id_{xy}`$ symmetry near a magnetic impurity should be observable at temperatures well above $`T^{}`$. The electronic and magnetic structure near an impurity located near the surface of a superconductor can now be studied with atomic resolution at low temperature by scanning tunneling microscopy (STM), opening a new window for local probes.
In this article we investigate theoretically the local structure of the OP and the current distribution in the neighborhood of an atomic impurity within the $`d_{x^2y^2}`$ model for the high T<sub>c</sub> cuprates. We present new analytical and numerical results for the local $`d_{x^2y^2}\pm id_{xy}`$ OP that develops near a magnetic impurity, a problem which has attracted new theoretical interest. Our approach follows closely the theory developed in the late 70’s for ions in superfluid <sup>3</sup>He, and later adapted to study the properties of impurities in heavy fermion superconductors. The theory of impurity scattering in superconductors can be formulated to quasiclassical accuracy as an expansion in $`\sigma /\xi _0`$, where $`\sigma `$ is the (linear in 2D) cross-section of the impurity for scattering of normal-state quasiparticles at the Fermi surface, and $`\xi _0=\hbar v_f/\pi \mathrm{\Delta }_0`$ is the superconducting coherence length. This ratio is typically very small in low T<sub>c</sub> superconductors and in superfluid <sup>3</sup>He, but may be as large as $`0.2`$ for strong scatterers in high T<sub>c</sub> superconductors.
We start from Eilenberger’s transport equation, for the matrix propagator in particle-hole/spin space, $`\widehat{g}(𝐩_f,𝐫;ϵ_n)`$. The diagonal element of the propagator determines the local density of states and local equilibrium current distribution, and the off-diagonal elements are the components of the local pair amplitude. Quasiparticle scattering off an isolated impurity is included through a source term on the r.h.s of the Eilenberger equation. The transport equation can be linearized to leading order in $`\sigma /\xi _0`$ for distances $`r\sigma `$ from the impurity. In this limit the Fourier transform of the linearized transport equation reduces to,
$`[iϵ_n\widehat{\tau }_3-\widehat{\mathrm{\Delta }}_b-\widehat{\sigma }_{\text{imp}},\delta \widehat{g}]+𝐪\cdot 𝐯_f\delta \widehat{g}=[\widehat{t}+\delta \widehat{\mathrm{\Delta }}+\delta \widehat{\sigma }_{\text{imp}},\widehat{g}_b].`$ (1)
The bulk propagator, $`\widehat{g}_b(𝐩_f;ϵ_n)=\pi [i\stackrel{~}{ϵ}_n\widehat{\tau }_3-\widehat{\mathrm{\Delta }}_b(𝐩_f)]/E`$, order parameter, $`\widehat{\mathrm{\Delta }}_b(𝐩_f)=\mathrm{\Delta }_b(𝐩_f)\widehat{\tau }_1i\sigma _2`$, impurity scattering self-energy, $`\widehat{\sigma }_{\text{imp}}`$, and in-plane Fermi velocity, $`𝐯_f`$, are inputs to the linear response equations. The energy denominator is given by $`E=(|\mathrm{\Delta }_b(𝐩_f)|^2+\stackrel{~}{ϵ}_n^2)^{1/2}`$, where $`\stackrel{~}{ϵ}_n=ϵ_n+\frac{i}{4}\mathrm{Tr}\widehat{\tau }_3\widehat{\sigma }_{\text{imp}}(ϵ_n)`$ is the renormalized Matsubara frequency. The $`\widehat{t}`$ matrix for the isolated impurity, as well as the induced OP, $`\delta \widehat{\mathrm{\Delta }}=[\delta \mathrm{\Delta }_1\widehat{\tau }_1+\delta \mathrm{\Delta }_2\widehat{\tau }_2]i\sigma _2`$, and self-energy, $`\delta \widehat{\sigma }_{\text{imp}}`$, enter the r.h.s. of Eq. (1) as source terms. Here $`\widehat{\tau }_i`$ and $`\sigma _i`$ are Pauli matrices in particle-hole and spin space, respectively. The $`\widehat{t}`$ matrix for the isolated impurity is given by
$`\widehat{t}(𝐩_f,𝐩_f^{};ϵ_n)`$ $`=`$ $`\widehat{v}(𝐩_f,𝐩_f^{})+N_f{\displaystyle \int d^2𝐩_f^{\prime \prime }\widehat{v}(𝐩_f,𝐩_f^{\prime \prime })}`$ (3)
$`\times \widehat{g}_b(𝐩_f^{\prime \prime };ϵ_n)\widehat{t}(𝐩_f^{\prime \prime },𝐩_f^{};ϵ_n),`$
where $`N_f`$ is the 2D density of states at the Fermi energy per spin, and $`\widehat{v}(𝐩_f,𝐩_f^{})`$ is the impurity potential, which is evaluated in the forward scattering limit in Eq. (1); $`𝐩_f^{}=𝐩_f`$. We separate $`\widehat{v}`$ into channels for nonmagnetic ($`u`$), spin-spin exchange ($`𝐦=J𝐒_{\text{imp}}`$), and spin-orbit scattering ($`𝐮_{\mathrm{so}}`$) between the orbital momentum of the quasiparticle and the impurity spin, $`𝐒_{\text{imp}}`$, as well as the self-coupling ($`𝐰_{\mathrm{so}}`$) to the quasiparticle spin, $`\widehat{𝐒}`$,
$`\widehat{v}(𝐩_f,𝐩_f^{})`$ $`=`$ $`u(𝐩_f,𝐩_f^{})+𝐦(𝐩_f,𝐩_f^{})\cdot \widehat{𝐒}+`$ (5)
$`i[𝐮_{\mathrm{so}}(𝐩_f,𝐩_f^{})\cdot 𝐒_{\text{imp}}+𝐰_{\mathrm{so}}(𝐩_f,𝐩_f^{})\cdot \widehat{𝐒}]\widehat{\tau }_3.`$
The induced OP is self-consistently determined from the gap equation,
$$\delta \widehat{\mathrm{\Delta }}(𝐩_f,𝐪)=N_fT\underset{ϵ_n}{\sum }\int d^2𝐩_f^{}V(𝐩_f,𝐩_f^{})\widehat{f}(𝐩_f^{},𝐪;ϵ_n),$$
(6)
where $`V(𝐩_f,𝐩_f^{})`$ is the pairing interaction and $`\widehat{f}=[\delta f_1\widehat{\tau }_1+\delta f_2\widehat{\tau }_2]i\sigma _2`$ is the induced off-diagonal pair amplitude. We resolve the pairing interaction into the dominant attractive $`d_{x^2y^2}`$ channel, and a secondary pairing channel with $`d_{xy}`$ symmetry, $`V(𝐩_f,𝐩_f^{})=V_1\eta _1(\varphi )\eta _1(\varphi ^{})+V_2\eta _2(\varphi )\eta _2(\varphi ^{})`$, where the eigenfunctions are $`\eta _1(\varphi )=\mathrm{cos}2\varphi `$ and $`\eta _2(\varphi )=\mathrm{sin}2\varphi `$ for the two channels, respectively. The dominant, attractive interaction is $`V_1\equiv V_{x^2y^2}`$, and the subdominant interaction, $`V_2\equiv V_{xy}`$, may be either attractive or repulsive. We neglect the $`s`$-wave pairing channel in order to simplify the analysis, and we restrict our discussion to the regime in which the subdominant interaction, $`V_{xy}`$, is repulsive or too weak to nucleate a bulk $`d_{xy}`$ OP.
Numerical calculations of the OP and current distribution were carried out for an isolated impurity at $`𝐫=0`$, with the $`\widehat{t}`$ matrix source term of the form $`\widehat{t}(𝐩_f,𝐩_f;ϵ_n)\delta (𝐫)`$ in real space. We modeled the position of the impurity to quasiclassical accuracy by replacing the delta function by a smooth function, $`\delta _{r_0}(𝐫)=\frac{1}{\pi r_0^2}\mathrm{exp}(-r^2/r_0^2)`$, of atomic width $`r_0=0.1\xi _0`$. This model guarantees a smooth cutoff in $`𝐪`$-space and faster convergence of the Fourier integrals. For the computation reported here we also chose a subdominant pairing interaction corresponding to a bare subdominant transition temperature of $`T_{c2}/T_{c1}=0.1`$, which is well below the threshold for bulk nucleation of a $`d_{xy}`$ order parameter.
The physical solution to the linearized transport equation (1) is,
$$\delta \widehat{g}(𝐩_f,𝐪;ϵ_n)=\frac{E\widehat{g}_b+\pi Q}{2\pi (E^2+Q^2)}[\widehat{t}+\delta \widehat{\mathrm{\Delta }}+\delta \widehat{\sigma }_{\text{imp}},\widehat{g}_b],$$
(7)
where $`Q=\frac{1}{2}𝐪𝐯_f`$. The induced charge current is also determined by the $`\widehat{t}`$ matrix and induced OP,
$$\delta 𝐣(𝐪)=N_fT\underset{ϵ_n}{\sum }\int d^2𝐩_f𝐯_f\frac{2i\pi eQ\mathrm{\Delta }_1}{E(E^2+Q^2)}\left(t_2+\delta \mathrm{\Delta }_2\right),$$
(8)
where $`t_2`$ is the $`\widehat{\tau }_2`$ component of $`\widehat{t}`$.
We evaluate the $`\widehat{t}`$ matrix in second-order Born approximation, which is adequate for weak scattering. More importantly, the Born approximation generates the relevant coupling between the $`d_{x^2y^2}`$ and $`d_{xy}`$ order parameters. We also assume that the impurity potential is short-ranged, so we retain only the scattering amplitudes for the $`s`$-wave and $`p`$-wave channels, i.e., $`u(𝐩_f,𝐩_f^{})\simeq u_0+u_1𝐩_f\cdot 𝐩_f^{}`$, and $`𝐦(𝐩_f,𝐩_f^{})\simeq (J_0+J_1𝐩_f\cdot 𝐩_f^{})𝐒_{\text{imp}}`$. For the spin-orbit terms we approximate $`𝐮_{\mathrm{so}}(𝐩_f,𝐩_f^{})\simeq (\lambda _0+\lambda _1𝐩_f\cdot 𝐩_f^{})𝐩_f\times 𝐩_f^{}`$, and $`𝐰_{\mathrm{so}}(𝐩_f,𝐩_f^{})\simeq (w_0+w_1𝐩_f\cdot 𝐩_f^{})𝐩_f\times 𝐩_f^{}`$. The $`\widehat{t}`$ matrix then has the general form $`\widehat{t}=\left[t_1\eta _1\widehat{\tau }_1+t_2\eta _2\widehat{\tau }_2\right]i\sigma _2+\left[t_3+𝐭\cdot \widehat{𝐒}\right]\widehat{\tau }_3`$. The important interference term is given by
$$t_2(ϵ_n)=\pi S_zN_f\stackrel{~}{\lambda }^2\int \frac{d\varphi }{2\pi }\frac{\mathrm{\Delta }_1\eta _1^2(\varphi )}{\sqrt{|\mathrm{\Delta }_1\eta _1(\varphi )|^2+\stackrel{~}{ϵ}_n^2}},$$
(9)
with the spin-orbit parameter $`\stackrel{~}{\lambda }^2=u_0\lambda _1+u_1\lambda _0+J_0w_1+J_1w_0`$, and the impurity spin $`S_z`$. The $`t_2`$ term generates a correction to the off-diagonal propagator with $`d_{x^2y^2}`$ (B$`_{\text{1g}}`$) symmetry and induces the $`id_{xy}`$ (B<sub>2g</sub>) OP near the impurity.
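The angular average in Eq. (9) is easy to evaluate numerically. The sketch below is our own illustration (not the authors' code): with $`\mathrm{\Delta }_1`$ taken as the energy unit, it computes the dimensionless factor that multiplies $`\pi S_zN_f\stackrel{~}{\lambda }^2\mathrm{\Delta }_1`$ in $`t_2`$.

```python
import numpy as np

def angular_factor(eps_tilde, n=100000):
    """Dimensionless angular average in Eq. (9), with Delta_1 = 1:
    I(eps) = int_0^{2pi} (dphi/2pi) cos^2(2phi) / sqrt(cos^2(2phi) + eps^2),
    evaluated by a midpoint rule on a uniform periodic grid."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    c2 = np.cos(2.0 * phi) ** 2
    # c2 never hits exactly zero on this grid, so the ratio is well defined
    return float(np.mean(c2 / np.sqrt(c2 + eps_tilde**2)))

print(angular_factor(0.0))  # -> 2/pi ~ 0.6366
print(angular_factor(1.0))  # smaller: the factor falls off with |eps|
```

At $`\stackrel{~}{ϵ}_n=0`$ the factor equals $`2/\pi `$ and it decreases with $`|\stackrel{~}{ϵ}_n|`$, so the low Matsubara frequencies dominate the induced coupling.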
The corrections to the bulk OP, $`\delta \mathrm{\Delta }_i`$, with $`i=1,2`$, do not belong to a single representation, i.e., $`\delta \mathrm{\Delta }_i(\varphi ,𝐪)\sim ̸\eta _i(\varphi )`$. In particular, $`\delta \mathrm{\Delta }_2(\varphi ,𝐪)`$ has mixed $`d_{xy}`$ and $`d_{x^2y^2}`$ symmetry, $`\delta \mathrm{\Delta }_2(\varphi ,𝐪)=\delta \mathrm{\Delta }_{21}(𝐪)\eta _1(\varphi )+\delta \mathrm{\Delta }_{22}(𝐪)\eta _2(\varphi )`$. These amplitudes also determine the current distribution (8), and satisfy the coupled equations
$`\left[1/V_1𝒦_{11}(𝐪)\right]\delta \mathrm{\Delta }_{21}(𝐪)𝒦_{12}\delta \mathrm{\Delta }_{22}(𝐪)`$ $`=`$ $`𝒜_1(𝐪),`$ (10)
$`\left[1/V_2𝒦_{22}(𝐪)\right]\delta \mathrm{\Delta }_{22}(𝐪)𝒦_{12}\delta \mathrm{\Delta }_{21}(𝐪)`$ $`=`$ $`𝒜_2(𝐪),`$ (11)
where $`𝒦_{ij}(𝐪)=\pi T\underset{ϵ_n}{\sum }\int \frac{d\varphi }{2\pi }\eta _i(\varphi )\eta _j(\varphi )E/[E^2+Q^2]`$, and $`𝒜_i(𝐪)=\pi T\underset{ϵ_n}{\sum }t_2(ϵ_n)\int \frac{d\varphi }{2\pi }\eta _i(\varphi )\eta _2(\varphi )E/[E^2+Q^2]`$. The solutions to these equations have, in addition to inversion symmetry, the mirror reflections $`\delta \mathrm{\Delta }_{2i}(q_1,q_2)=(-)^i\delta \mathrm{\Delta }_{2i}(q_1,-q_2)=(-)^i\delta \mathrm{\Delta }_{2i}(q_2,q_1)`$. The induced $`\delta \mathrm{\Delta }_{22}`$ OP component with B$`_{\text{2g}}`$ symmetry is finite for $`𝐪=0`$, while the induced $`\delta \mathrm{\Delta }_{21}`$ component with B$`_{\text{1g}}`$ symmetry vanishes for $`𝐪=0`$ and along the diagonals and principal axes. The Fourier transformation of $`\delta \mathrm{\Delta }_{21}(𝐪)`$ also vanishes, i.e., $`\delta \mathrm{\Delta }_{21}(𝐫)=0`$; thus only the induced component with B$`_{\text{2g}}`$ symmetry survives. The contour plot of $`\delta \mathrm{\Delta }_2(𝐫)`$ in Fig. 1 shows a four-fold pattern characteristic of the $`d_{xy}`$ amplitude with maxima located at approximately $`0.3\xi _0`$ along the nodal directions of the $`d_{x^2y^2}`$ OP.
Bulk impurity scattering is pair-breaking for any unconventional OP including the induced $`d_{xy}`$ amplitude. Figure 2 shows both the temperature dependence of the induced $`d_{xy}`$ OP and the pair-breaking suppression of $`\delta \mathrm{\Delta }_{22}(𝐪=0)`$ by bulk impurity scattering. Note that the $`d_{xy}`$ OP develops below $`T_c`$ and that it is suppressed by bulk scattering on the same scale as the bulk $`d_{x^2y^2}`$ OP. Fig. 2(b) shows the increase in the induced $`d_{xy}`$ amplitude with increasing (attractive) pairing interaction in the $`B_{2g}`$ channel ($`T_{c2}/T_{c1}`$); the divergence at $`T_{c2}/T_{c1}>0.37`$ corresponds to a bulk instability for $`d_{x^2y^2}\pm id_{xy}`$ pairing. For a repulsive pairing interaction $`V_2<0`$ we find a cutoff-dependent result for the induced OP, $`-0.44\mathcal{E}\xi _0^2/\mathrm{log}(\omega _c/\mathrm{\Delta }_1)<\delta \mathrm{\Delta }_{22}(q=0)<0`$, at $`T=0`$. We parameterized the interference term in the scattering amplitude by the coupling energy $`\mathcal{E}=\pi S_zN_f\stackrel{~}{\lambda }^2/\xi _0^2`$.
The existence of a $`\pm id_{xy}`$ OP implies that the equilibrium superconducting state breaks $`𝒯`$ symmetry locally near the impurity, in addition to broken rotational and reflection symmetries. The signature of the $`d_{x^2y^2}\pm id_{xy}`$ state is the equilibrium charge current and magnetic field distribution near the impurity. From the $`\widehat{t}`$ matrix in Eq. (3) and the induced $`d_{xy}`$ OP we obtain
$`\delta 𝐣(𝐪)=i\pi eN_fT{\displaystyle \underset{ϵ_n}{\sum }}{\displaystyle \int \frac{d\varphi }{2\pi }𝐯_f(\varphi )\,𝐯_f(\varphi )\cdot 𝐪\frac{\mathrm{\Delta }_1\eta _1(\varphi )}{E(E^2+Q^2)}}`$ (12)
$`\times \left(\left(t_2(ϵ_n)+\delta \mathrm{\Delta }_{22}(𝐪)\right)\eta _2(\varphi )+\delta \mathrm{\Delta }_{21}(𝐪)\eta _1(\varphi )\right).`$ (13)
It is straightforward to show that the current density obeys the symmetry relations: $`\delta 𝐣(-𝐪)=-\delta 𝐣(𝐪)`$, and $`\delta j_i(q_1,q_2)=(-)^{i+1}\delta j_i(q_1,-q_2)`$, $`\delta j_1(q_1,q_2)=\delta j_2(q_2,q_1)`$. From these relations one might expect a simple circulation pattern for the induced charge current; however, as Figs. 1 and 3 show, the current density exhibits the superposition of a very small circulating current loop and four counter-circulating currents around the nodal directions, which are anchored to the local maxima of the induced $`\delta \mathrm{\Delta }_2(𝐫)`$ OP. This pattern is qualitatively similar to the current distribution predicted for a vortex with $`d+is`$ symmetry; however, there is no circulation at large distances from the impurity. The complex flow pattern that is observed near the impurity is also observed for mesoscopic superconductors with surfaces that are normal to the $`110`$ direction, and reflects the strong nonlocality of the current response shown in Eq. (12).
The spatial pattern of current generates a four-fold magnetic field distribution which we calculated from the current distribution using the Biot-Savart law, $`B_z(𝐫)=\frac{1}{c}\int d^2𝐫^{}\,|𝐫^{}-𝐫|^{-3}\left[(𝐫^{}-𝐫)\times \delta 𝐣(𝐫^{})\right]_z`$. The field distribution in Fig. 1 shows 8 sectors of flux threading in and out of the plane. As a result the net magnetic flux through the superconducting plane is zero. This fact was checked numerically. The magnetic flux was calculated for squares of area $`L^2`$ and shown to vanish in the limit $`L\to \infty `$. This is a general result, at least at the quasiclassical level, provided that scattering by the impurity does not generate a coupling of the current to a soft mode of the OP. However, particle-hole asymmetry corrections to the quasiclassical theory may lead to a net moment from the impurity-induced orbital currents.
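Such a Biot-Savart evaluation can be sketched as a direct discretization on a grid. The code below is our own illustration (Gaussian units with $`c`$ set to 1, and a softened self-term), not the numerical scheme used for Fig. 1:

```python
import numpy as np

def bz_from_sheet_current(jx, jy, x, y, eps=1e-9):
    """Discrete 2D Biot-Savart sum for the out-of-plane field:
    B_z(r) = (1/c) sum_{r'} |r'-r|^{-3} [(r'-r) x j(r')]_z dx dy, with c = 1.
    jx, jy are the sheet-current components on the grid (x_i, y_j)."""
    dx, dy = x[1] - x[0], y[1] - y[0]
    X, Y = np.meshgrid(x, y, indexing="ij")
    bz = np.zeros_like(jx)
    for i in range(len(x)):
        for j in range(len(y)):
            rx, ry = X - x[i], Y - y[j]            # r' - r
            r3 = (rx**2 + ry**2) ** 1.5 + eps      # soften the self-term
            # z-component of (r'-r) x j(r'):  rx*jy - ry*jx
            bz[i, j] = np.sum((rx * jy - ry * jx) / r3) * dx * dy
    return bz
```

For a simple counterclockwise circulating current, for instance $`(j_x,j_y)=(-y,x)e^{-r^2}`$, this gives a positive $`B_z`$ at the center, as expected for a current loop.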
The magnitudes of the induced $`id_{xy}`$ OP and magnetic field near an impurity depend on parameters characterizing the interaction between quasiparticles and the magnetic impurity. Not much is known about these interactions. Thus, measurements of the induced OP or magnetic field near an impurity can provide direct information about the coupling of the quasiparticle orbital momentum to the impurity moment. We can express the impurity induced OP, current density and magnetic field in terms of a few key material parameters of the impurity. We scale the induced OP in units of the coupling energy $`\mathcal{E}`$, $`\delta \mathrm{\Delta }_2^{(n)}(𝐫)=\delta \mathrm{\Delta }_2(𝐫)/\mathcal{E}`$. From Eq. (12) we obtain the scale of the current and field: $`\delta 𝐣^{(n)}(𝐫)=\delta 𝐣(𝐫)/(c\mathcal{B}\mathcal{E})`$, $`B_z^{(n)}(𝐫)=B_z(𝐫)/(\mathcal{B}\mathcal{E})`$, with $`\mathcal{B}=\frac{e}{c}N_fv_f=\frac{\mathrm{\Phi }_0}{4\pi \lambda ^2}\frac{d}{\pi \hbar v_f}`$. Figure 3 shows the spatial variations of the scaled OP, current density and field profile along the $`110`$ and $`010`$ directions at low temperature.
We estimate the magnitude of the induced OP and magnetic field for Ni impurities in Bi-2212 as follows: The coupling parameter for Ni<sup>2+</sup> ions is estimated from the spin-orbit coupling energy for free Ni<sup>2+</sup> ions, $`\stackrel{~}{\lambda }^2\approx (30\mathrm{meV}a_0^2)^2`$. The in-plane penetration depth of $`\lambda \approx 200\mathrm{nm}`$, the interlayer spacing, $`d\approx 1.5\mathrm{nm}`$, the in-plane lattice constant, $`a_0\approx 5.4\mathrm{\AA }`$, and the Fermi velocity, $`v_f\approx 100\mathrm{k}\mathrm{m}/\mathrm{s}`$, provide a determination of the density of states per Cu-O bilayer, $`N_f=c^2d/(4\pi e^2v_f^2\lambda ^2)`$. This gives an estimated energy scale for the induced $`d_{xy}`$ gap of $`\approx 0.1\mathrm{meV}`$, and a magnetic field scale of order $`1\mu \mathrm{T}`$. An induced $`d_{xy}`$ gap of order $`0.1\mathrm{meV}\approx 1\mathrm{K}`$ is the right order of magnitude to account for a $`d_{x^2y^2}\pm id_{xy}`$ phase order transition at lower temperature, $`T^{}\approx 150\mathrm{mK}`$, as observed in 0.6% and 1.5% Ni doped Bi-2212.
In conclusion, we have shown that spin-orbit scattering induces a $`d_{x^2y^2}+id_{xy}`$ state, which locally breaks $`𝒯`$ symmetry in the vicinity of a magnetic impurity. The induced OP develops below $`T_c`$ and survives bulk impurity scattering so long as the bulk $`d_{x^2y^2}`$ OP does. The signature of the spontaneously broken $`𝒯`$ symmetry manifests itself as a complex pattern of circulating charge currents near the local maxima of the $`d_{xy}`$ OP located along the $`110`$ and $`\overline{1}10`$ directions. We estimated the magnitude of the induced $`d_{xy}`$ gap to be $`\sim 1\mathrm{K}`$, which should be observable in low temperature STM measurements of the tunneling density of states.
We thank R. Movshovich, D. Rainer and J.R. Schrieffer for discussions. The work of MJG and AVB was supported by the Los Alamos National Laboratory under the auspices of the US Department of Energy. JAS acknowledges support by the STCS through NSF DMR-91-20000.
# Plasma resonance at low magnetic fields as a probe of vortex line meandering in layered superconductors
## Abstract
We consider the magnetic field dependence of the plasma resonance frequency in pristine and in irradiated Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> crystals near $`T_c`$. At low magnetic fields we relate linear-in-field corrections to the plasma frequency to the average distance between the pancake vortices in the neighboring layers (wandering length). We calculate the wandering length in the case of thermal wiggling of vortex lines, taking into account both Josephson and magnetic interlayer coupling of pancakes. Analyzing experimental data, we found that (i) the wandering length becomes comparable with the London penetration depth near T<sub>c</sub> and (ii) at small melting fields ($`<20`$ G) the wandering length does not change much at the melting transition. This shows the existence of the line liquid phase in this field range. We also found that pinning by columnar defects only weakly affects the field dependence of the plasma resonance frequency near $`T_c`$.
\]
Josephson plasma resonance (JPR) measurements in highly anisotropic layered superconductors provide unique information on the interlayer Josephson coupling and on the effect of pancake vortices on this coupling. The squared c-axis plasma resonance frequency, $`\omega _p^2`$, is proportional to the average interlayer Josephson energy, $`\omega _p^2\propto J_0\langle \mathrm{cos}\phi _{n,n+1}(𝐫)\rangle `$, where $`J_0`$ is the Josephson critical current, $`\phi _{n,n+1}(𝐫)`$ is the gauge-invariant phase difference between layers $`n`$ and $`n+1`$ and $`𝐫`$ is the in-plane coordinate. Here $`\langle \mathrm{}\rangle `$ means the average over thermal disorder and pinning. Thermal fluctuations and uncorrelated pinning lead to misalignment of pancake vortices induced by the magnetic field applied along the $`c`$ axis. Misalignment results in a nonzero phase difference and in the suppression of the Josephson coupling and the plasma frequency. Thus, the dependence of $`\omega _p`$ on the $`c`$-axis magnetic field measures the $`c`$-axis correlations of pancakes in the vortex state.
The JPR measurements performed in the liquid vortex phase at relatively high magnetic fields, $`B>B_J=\mathrm{\Phi }_0/\lambda _J^2`$, revealed that the plasma frequency drops approximately as $`1/\sqrt{B}`$. Here $`\lambda _J=\gamma s`$ is the Josephson length, $`\gamma `$ is the anisotropy ratio and $`s`$ is the interlayer distance. The above dependence is characteristic for a pancake liquid weakly correlated along the $`c`$ axis. Here a pancake in a given layer is shifted by a distance of the order of the vortex spacing $`a=(\mathrm{\Phi }_0/B)^{1/2}<\lambda _J`$ from the nearest pancake in the neighboring layer. Thus at high fields many pancake vortices contribute to the suppression of the phase difference at a given point $`𝐫`$, because $`\lambda _J`$ determines the decay length for the phase difference induced by misaligned pancakes of a given vortex line. In contrast, in the vortex solid a lattice of vortex lines forms, as shown by neutron scattering and $`\mu ^+`$SR data. JPR measurements in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8-δ</sub> (Bi-2212) crystals have shown that in fields above 20 Oe the interlayer phase coherence changes drastically at the transition line \[$`\langle \mathrm{cos}\phi _{n,n+1}(𝐫)\rangle `$ jumps\], implying the decoupling nature of the first-order melting transition, see discussion in Ref..
In this Letter we focus on the regime of low magnetic fields, $`B<B_J`$, near $`T_c`$. Here the intervortex distance is much larger than $`\lambda _J`$, and the Josephson coupling in the region occupied by a given vortex is not suppressed by other vortices. In such a single vortex regime the Josephson energy increases linearly with the displacements of nearest pancakes in neighboring layers, when $`|𝐮_{n,n+1}|\equiv |𝐫_n-𝐫_{n+1}|>\lambda _J`$, see Ref. (here $`𝐫_n`$ is the coordinate of a pancake in the layer $`n`$). This leads to the confinement of pancakes in neighboring layers, and $`c`$-axis correlated pancakes (i.e. vortex lines) may be preserved above the melting transition. We show that it is the linear decrease of $`\omega _p^2`$ with $`B`$ that characterizes such a vortex state. The linear dependence was observed experimentally in Refs. in both the solid and the liquid vortex states in Bi-2212 crystals in fields below 20 Oe near $`T_c`$, providing evidence for a line structure of the vortex liquid state at low fields.
We calculate the plasma frequency at low magnetic fields $`B`$ near $`T_c`$, assuming that only vortices induced by the applied magnetic field $`𝐁\parallel c`$ contribute to the field dependence of $`\omega _p`$. We thus ignore the contribution of thermally excited vortices and antivortices to the field dependence of the plasma frequency.
The JPR absorption is described by a simplified equation for small oscillations of the phase difference $`\phi _{n,n+1}^{}(𝐫,\omega )`$ induced by an external microwave electric field with the amplitude $`𝒟`$ and the frequency $`\omega `$ applied along the $`c`$ axis:
$$\left[\frac{\omega (\omega +i\mathrm{\Gamma }_c)}{\omega _0^2}-1+\lambda _J^2\widehat{L}\nabla _𝐫^2-𝒱_n(𝐫)\right]\phi _{n,n+1}^{}=\frac{i\omega 𝒟}{4\pi J_0}.$$
(1)
Here $`𝒱_n(𝐫)=\mathrm{cos}\phi _{n,n+1}(𝐫)-1`$ is the effective potential and $`\phi _{n,n+1}(𝐫)`$ is the phase difference induced by vortices misaligned due to thermal fluctuations and pinning in the absence of a microwave field. In Eq. (1) we neglect the time variations of $`\phi _{n,n+1}(𝐫,t)`$ because the plasma frequency is much higher than the characteristic frequencies of vortex fluctuations, see below. Further, $`\omega _0(T)=c/\sqrt{ϵ_0}\lambda _c(T)`$ is the zero-field plasma frequency , $`ϵ_0`$ is the high frequency dielectric constant, $`\lambda _{ab}`$ and $`\lambda _c=\gamma \lambda _{ab}`$ are the components of the London penetration depth, $`E_J=E_0/\lambda _J^2`$ is the Josephson energy per unit area and $`E_0=s\mathrm{\Phi }_0^2/16\pi ^3\lambda _{ab}^2`$ is the characteristic pancake energy. The inductive matrix $`\widehat{L}`$ is defined as $`\widehat{L}A_n=\sum _mL_{nm}A_m`$ with $`L_{nm}\approx (\lambda _{ab}/2s)\mathrm{exp}\left(-|n-m|s/\lambda _{ab}\right)`$. The parameter $`\mathrm{\Gamma }_c=4\pi \sigma _c/ϵ_0`$ describes dissipation due to quasiparticles. Here $`\sigma _c`$ is the $`c`$-axis quasiparticle conductivity in the superconducting state. Practically it coincides with the conductivity right above $`T_c`$.
The absorption in the uniform AC electric field is defined by the imaginary part of the inverse dielectric function
$$\mathrm{Im}\frac{1}{ϵ(\omega )}=\frac{1}{ϵ_0}\sum _{\alpha n}\int 𝑑𝐫\frac{\mathrm{\Psi }_{\alpha n}^{}(0)\mathrm{\Psi }_{\alpha n}(𝐫)\omega ^3\mathrm{\Gamma }_c}{(\omega ^2-\omega _\alpha ^2)^2+\omega ^2\mathrm{\Gamma }_c^2},$$
(2)
where $`E_\alpha =\omega _\alpha ^2/\omega _0^2-1`$ and $`\mathrm{\Psi }_{\alpha n}(𝐫)`$ are the eigenvalues and eigenfunctions of the operator $`-\lambda _J^2\widehat{L}\nabla _𝐫^2+𝒱_n(𝐫)`$.
Consider magnetic fields $`B\ll \mathrm{\Phi }_0/4\pi \lambda _{ab}^2,B_J`$ (single vortex regime). The phase difference near a given vortex line is induced by the displacements of pancakes in neighboring layers along this vortex line (see Fig. 1). The potential $`𝒱_n(𝐫)`$ at distances $`u_{n,n+1}\ll r\ll \lambda _J`$ is determined by the phase difference produced by the nearest pancakes in neighboring layers $`n`$ and $`n+1`$, relatively displaced by $`𝐮_{n,n+1}`$:
$$\phi _{n,n+1}(𝐫)\approx [𝐫\times 𝐮_{n,n+1}]/r^2.$$
(3)
At large distances, $`r\gg \lambda _J`$, the potential drops exponentially, and at small distances, $`r\ll u_{n,n+1}`$, it tends to a constant. This potential is attractive and supports both localized and delocalized states. At $`a\gg \lambda _J,u_{n,n+1}`$ the main contribution to the absorption comes from the most homogeneous delocalized state. Such a state determines the center of the JPR line, $`\omega _p(B,T)`$. Other states lead to inhomogeneous line broadening in addition to the broadening caused by quasiparticle dissipation, described by $`\mathrm{\Gamma }_c`$. The latter mechanism of broadening dominates at low magnetic fields near $`T_c`$.
The strength of the potential with respect to the kinetic term, $`\lambda _J^2\widehat{L}\nabla _𝐫^2`$, is characterized by the dimensionless parameter $`r_w^2/\lambda _J^2`$, which we assume to be small in the following calculations. Here $`r_w`$ is the elemental wandering length of vortex lines, $`r_w^2\equiv \langle 𝐮_{n,n+1}^2\rangle `$. We then use perturbation theory with respect to the potential $`𝒱_n(𝐫)`$ to find the energy of the most uniform delocalized state. The unperturbed wave function of this state is a constant. The first order correction to the energy of this most homogeneous delocalized state is given by the space average of the potential, where the averaging is over $`𝐫`$ and $`n`$. This space average is equivalent to the thermal average of $`\mathrm{cos}\phi _{n,n+1}(𝐫)-1\approx -\phi _{n,n+1}^2(𝐫)/2`$. Using Eq. (3) we obtain a simple relation connecting the field-induced suppression of the plasma frequency $`\omega _p(B,T)`$ with $`r_w`$ for the case $`r_w<\lambda _J`$:
$$\frac{\omega _0^2(T)-\omega _p^2(B,T)}{\omega _0^2(T)}\approx \frac{\langle \phi _{n,n+1}^2(𝐫)\rangle }{2}\approx \frac{\pi Br_w^2}{2\mathrm{\Phi }_0}\mathrm{ln}\frac{\lambda _J}{r_w}.$$
(4)
The relation (4) is very general and does not depend on the mechanism of the vortex wandering. It allows one to extract $`r_w^2`$ from the plasma resonance measurements. The field dependence of the resonance temperature, $`T_r(B)`$, is determined by the equation $`\omega _p^2(B,T_r)=\omega ^2`$. According to Eq. (4) this gives a linear dependence at small fields, $`T_r(B)\approx T_r(0)+(dT_r/dB)B`$, and $`r_w^2`$ is directly related to the slope of this dependence
$$r_w^2\mathrm{ln}\frac{\lambda _J}{r_w}=\frac{2\mathrm{\Phi }_0}{\pi }\frac{d\mathrm{ln}\omega _0^2(T)}{dT}\left(\frac{dT_r}{dB}\right)_{B\to 0}.$$
(5)
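Equation (5) determines $`r_w`$ only implicitly, since the measured slope fixes the combination $`r_w^2\mathrm{ln}(\lambda _J/r_w)`$. A minimal numerical sketch of the inversion is given below; it is ours and purely illustrative (the function names and the sample numbers are not taken from the experiments discussed here). The left-hand side increases monotonically in $`r_w`$ up to $`r_w=\lambda _J/\sqrt{e}`$, so bisection on that branch suffices:

```python
from math import log

def wander_rhs(r_w, lam_J):
    """Left-hand side of Eq. (5): r_w**2 * ln(lam_J / r_w)."""
    return r_w**2 * log(lam_J / r_w)

def solve_r_w(target, lam_J):
    """Invert r_w**2 * ln(lam_J / r_w) = target by bisection on the
    monotonic branch r_w < lam_J / sqrt(e)."""
    lo, hi = 1e-9 * lam_J, lam_J / 1.6487212707  # upper end ~ lam_J/sqrt(e)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if wander_rhs(mid, lam_J) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, with $`\lambda _J=1`$ (arbitrary units) and a target value generated from $`r_w=0.3`$, the bisection recovers $`r_w`$ to high accuracy.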
We now calculate $`r_w^2`$ when wandering of the vortex lines is caused by thermal fluctuations. In the single vortex regime $`r_w^2`$ is determined by the wandering energy consisting of the Josephson and magnetic contributions,
$$ℱ_w\approx \frac{s}{2}\sum _n\left[\epsilon _{1J}\left(\frac{𝐫_{n+1}-𝐫_n}{s}\right)^2+w_M𝐫_n^2\right],$$
(6)
where $`\epsilon _{1J}\approx (\mathrm{\Phi }_0^2/(4\pi \lambda _c)^2)\mathrm{ln}(\lambda _J/r_w)`$ is the line tension due to the Josephson coupling and $`w_M\approx (\mathrm{\Phi }_0^2/(4\pi \lambda _{ab}^2)^2)\mathrm{ln}(\lambda _{ab}/r_w)`$ is the effective cage potential, which appears due to the strongly nonlocal magnetic interactions between pancake vortices in different layers. Assuming Gaussian fluctuations we have
$`r_{wT}^2=\int \frac{dq_z}{2\pi }\frac{4T(1-\mathrm{cos}q_zs)}{2\left(\epsilon _{1J}/s^2\right)\left(1-\mathrm{cos}q_zs\right)+w_M}=\frac{2sT}{\epsilon _{1J}}f(\zeta ),`$ (7)
$`f(\zeta )={\displaystyle \frac{\zeta }{1+\zeta +\sqrt{1+\zeta }}},`$ (8)
where the parameter $`\zeta (T)=4\lambda _{ab}^2(T)/\lambda _J^2`$ describes the relative roles of the Josephson and magnetic interactions. Substituting this result into Eq. (4) we obtain
$$\frac{\omega _0^2(T)-\omega _p^2(B,T)}{\omega _0^2(T)}\approx \frac{B}{B_0}f(\zeta )$$
(9)
with $`B_0=\mathrm{\Phi }_0^3/16\pi ^3\lambda _c^2sT=B_J(E_0/T)`$. We stress that this result of the single vortex regime is valid in both solid and liquid vortex states for $`BB_J`$, because in this field range wandering of lines at short scales does not change much at the melting point. The difference between these states appears only in the second order in the magnetic field. At $`B=0`$ the resonance occurs at the temperature $`T_r(\omega )`$. The slope of the curve $`B_r(T)`$ at small $`B_r`$ is
$$\frac{dB_r}{dT}=\frac{1}{f(\zeta _r)}\left(\frac{dB_0}{dT}\right)_{T=T_r},\zeta _r=\frac{4\lambda _{ab}^2(T_r)}{\lambda _J^2}.$$
(10)
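The function $`f(\zeta )`$ defined in Eq. (8) interpolates between the Josephson-dominated limit, $`f(\zeta )\approx \zeta /2`$ at $`\zeta \ll 1`$, and the magnetically-dominated limit, $`f(\zeta )\to 1`$ at $`\zeta \gg 1`$. These limits, and the exact value $`f(3)=1/2`$ (since $`\sqrt{4}=2`$), can be checked with a few lines of code (ours, purely illustrative):

```python
from math import sqrt

def f(zeta):
    """f(zeta) of Eq. (8): relative weight of the Josephson and
    magnetic contributions to the line stiffness."""
    return zeta / (1.0 + zeta + sqrt(1.0 + zeta))
```

The same function enters both the field-induced suppression, Eq. (9), and the slope of the resonance line, Eq. (10).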
The crossover region from the line liquid, $`r_w^2\ll a^2`$, to the pancake liquid, where $`r_w^2\gtrsim a^2`$, takes place at the magnetic field $`B\sim \mathrm{min}[\pi B_0/2f(\zeta ),B_J]`$.
To compare our calculation with experiment we plot in Fig. 2 the dependence of $`dB_r/dT`$ on the reduced resonance temperature at $`B=0`$, $`t_r=(T_c-T_r)/T_c`$, obtained in Ref. using different microwave frequencies (shown in the plot) for Bi-2212 with $`T_c=84.45`$ K. We also show a data point obtained by Matsuda et al. for Bi-2212 with a close $`T_c`$, $`T_c=85.7`$ K. To calculate the dependence $`dB_r/dT`$ from Eq. (10) we need the dependencies $`\lambda _c(T)`$ and $`\lambda _{ab}(T)`$, which determine $`B_0(T)`$ and $`\zeta (T)`$. $`\lambda _c(T)`$ was found from the temperature dependence of $`\omega _0`$ at zero field, which we fit as $`\omega _0(t)/2\pi \approx (133.5\mathrm{GHz})t^{0.32}`$, taking $`ϵ_c=11`$. Matsuda et al. obtained a similar temperature dependence of $`\omega _0`$ using a direct frequency scan. $`\lambda _{ab}(T)`$ was obtained assuming a temperature-independent $`\gamma `$, which we adjusted to obtain the best agreement between the theoretical and experimental curves, giving $`\gamma =480`$. The nonmonotonic temperature dependence of $`dB_r/dT`$ arises from the competition between two factors in Eq. (10): the increase at low temperatures is due to the factor $`1/f(\zeta )\propto 1/\lambda _{ab}^2`$ at $`\zeta \ll 1`$, and the increase at temperatures close to $`T_c`$ is due to the nonlinearity of the dependence $`\lambda _c^2(T)`$, which leads to the divergence of $`dB_0/dT`$ at $`T\to T_c`$. As one can see from the plot, our theory describes satisfactorily the field dependence of $`\omega _p`$ not very close to $`T_c`$, i.e., in the region where the critical fluctuations are not very strong. The region of critical fluctuations is beyond the applicability of our theory.
We can now check the validity of our approximations: the static approximation for the potential in Eq. (1) and the perturbation theory with respect to the potential. The maximum frequency of vortex fluctuations in the single vortex regime is $`\omega _{fl}\sim \epsilon _{1J}/s^2\eta `$, where the vortex viscosity $`\eta `$ estimated from the flux flow resistivity is in the interval $`10^6-10^7`$ g/cm$`\cdot `$s, see Ref. . This gives $`\omega _{fl}/\omega _p\sim \mathrm{\Phi }_0^2\sqrt{ϵ_0}/16\pi ^2s^2\lambda _cc\eta \lesssim 0.1`$ near $`T_c`$. The condition $`r_w^2/\lambda _J^2\ll 1`$ for the applicability of the perturbation theory is satisfied at $`t>0.02`$.
Near $`T_c`$ the experimental curve $`T_r(B)`$ shows no change of slope when it crosses the melting line, $`H_m\approx 1.3`$\[Oe/K\]$`(T_c-T)`$. This gives evidence that the parameter $`r_w^2`$ does not change much at melting. Using Eq. (5) we estimate that near the melting line $`r_w^2/a^2\sim 0.1`$ at the temperatures studied. This confirms the line structure of the vortex liquid above the melting line at low fields, though vortex lines wander over extended distances already in the solid phase due to the high temperatures, see Fig. 1. The estimated value, $`r_w\sim 1`$ $`\mu `$m at 77 K, is comparable with both $`\lambda _J`$ and $`\lambda _{ab}`$. In Bi-2212 crystals near optimal doping the crossover to a pancake liquid occurs in the field interval $`10-15`$ Oe. In less anisotropic high-T<sub>c</sub> materials one anticipates a larger region of the line liquid on the vortex phase diagram.
Next we calculate the effect of columnar defects (CDs) on the parameter $`r_w^2`$ at high temperatures. Columnar defects always straighten vortex lines and reduce $`r_w`$. At low temperatures this effect is very strong: each vortex line is localized near one defect and its wandering is much smaller than in an unirradiated superconductor. At high temperatures the lines start to distribute over a large number of defects and the effect of CDs progressively decreases. We consider the extreme case of very high temperatures, when the effect of defects can be treated within perturbation theory. This approach is applicable at temperatures higher than the pinning energy of pancakes by a CD, $`T>\pi E_0\mathrm{ln}(b/\xi _{ab})`$, where $`b`$ is the CD radius and $`\xi _{ab}`$ is the superconducting correlation length. This situation corresponds to the experiments .
The free energy functional is
$$ℱ=ℱ_w+ℱ_p,ℱ_p=\sum _nU(𝐫_n),$$
(11)
where $`U(𝐫)=\sum _iV(𝐫-𝐑_i)`$ is the pinning potential of the CDs, $`𝐑_i`$ are the positions of the CDs, and $`V(𝐫)=\pi E_0\mathrm{ln}(1-b^2/r^2)\mathrm{exp}(-r/\lambda _{ab})`$ is the pinning potential of an individual CD. Expanding with respect to the disorder up to second order terms we obtain the correction, $`(r_{wD}^2)`$, to the zero order term, $`r_{wT}^2`$, due to pinning by CDs:
$`r_{wD}^2=r_w^2-r_{wT}^2`$ (12)
$`=-\frac{s}{T}\frac{\partial }{\partial \epsilon _{1J}}\sum _m\left[\langle K(𝐫_m-𝐫_0)\rangle _0-\langle K(𝐫_0^{}-𝐫_0)\rangle _{0,0^{}}\right],`$ (13)
where $`\langle \mathrm{}\rangle _0`$ stands for the statistical average for a system without disorder and $`K(𝐫)`$ is the correlation function of the disorder,
$$K(𝐫^{}-𝐫)=\langle U(𝐫^{})U(𝐫)\rangle _D-\langle U(𝐫)\rangle _D^2.$$
(14)
When $`r_w`$ is much larger than the distance between columns we can transform Eq. (12) to a simpler form
$$r_{wD}^2=-\frac{2sK_0}{T}\frac{\partial }{\partial \epsilon _{1J}}\sum _m\left[\frac{2}{\langle (𝐫_m-𝐫_0)^2\rangle _0}-\frac{1}{\langle 𝐫_0^2\rangle _0}\right],$$
(15)
where $`K_0\equiv \int 𝑑𝐫K(𝐫)=n_\varphi \left[\pi ^2b^2E_0\mathrm{ln}(\lambda _{ab}/b)\right]^2`$. Here $`n_\varphi `$ is the concentration of CDs. The correction is determined by the lateral line displacement $`\langle (𝐫_m-𝐫_0)^2\rangle _0`$, which we calculate as
$`\langle (𝐫_m-𝐫_0)^2\rangle _0=\int \frac{dq_z}{2\pi }\frac{4T(1-\mathrm{cos}mq_zs)}{2\left(\epsilon _{1J}/s^2\right)\left(1-\mathrm{cos}q_zs\right)+w_M}`$ (16)
$`=\frac{4sT}{\epsilon _{1J}}\frac{v\left(1-v^m\right)}{\left(1-v^2\right)},v=\frac{\zeta +2-2\sqrt{\zeta +1}}{\zeta }.`$ (17)
Combining Eqs. (15) and (17) we finally obtain for $`r_{wD}^2`$
$$r_{wD}^2=-\frac{2K_0}{T^2}f_D(v).$$
(18)
Here the dimensionless function
$`f_D(v)\equiv \frac{\left(1-v\right)^3}{1+v}\frac{d}{dv}\left(\frac{1+v}{1-v}\sum _{n=1}^{\mathrm{\infty }}\frac{1}{v^{-n}-1}\right)`$
has the limits $`f_D(0)=1`$ and $`f_D(v)\approx 2.16-2\mathrm{ln}(1-v)`$ at $`v\to 1`$. Comparing this equation with Eq. (8) we obtain
$$\frac{|r_{wD}^2|}{r_{wT}^2}\sim \frac{\pi ^2n_\varphi b^4}{\lambda _J^2}\left(\mathrm{ln}^2\frac{\lambda _{ab}}{b}\right)\left(\frac{E_0}{T}\right)^3\mathrm{ln}\frac{\lambda _J}{b}.$$
(19)
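Two numerical consistency checks of these expressions are easy to make; the sketch below is ours and truncates the infinite sum, which is accurate for $`v`$ not too close to 1. First, the $`m=1`$ case of Eq. (17) must reproduce $`r_{wT}^2=(2sT/\epsilon _{1J})f(\zeta )`$ of Eqs. (7)-(8), which requires the identity $`2v/(1+v)=f(\zeta )`$. Second, $`f_D(v)\to 1`$ as $`v\to 0`$:

```python
from math import sqrt

def f(zeta):
    """f(zeta) of Eq. (8)."""
    return zeta / (1.0 + zeta + sqrt(1.0 + zeta))

def v_of(zeta):
    """v of Eq. (17)."""
    return (zeta + 2.0 - 2.0 * sqrt(zeta + 1.0)) / zeta

def S(v, nmax=400):
    """Truncated sum over n >= 1 of 1/(v**(-n) - 1) = v**n / (1 - v**n)."""
    return sum(v**n / (1.0 - v**n) for n in range(1, nmax + 1))

def f_D(v, h=1e-7):
    """f_D(v) via a central difference of g(v) = (1+v)/(1-v) * S(v)."""
    g = lambda x: (1.0 + x) / (1.0 - x) * S(x)
    return (1.0 - v) ** 3 / (1.0 + v) * (g(v + h) - g(v - h)) / (2.0 * h)
```

Both checks pass: the identity holds to machine precision for any $`\zeta `$, and the numerical $`f_D`$ approaches 1 at small $`v`$ and grows monotonically toward the logarithmic divergence at $`v\to 1`$.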
In the crystals studied $`\lambda _J\approx 1`$ $`\mu `$m, $`\lambda _{ab}(0)\approx 2000`$ Å, $`b\approx 70`$ Å. For the temperature range $`T>77`$ K explored in Refs. by the plasma resonance this gives a very small correction ($`|r_{wD}^2|/r_{wT}^2\sim 10^{-3}`$). Therefore, the effect of pinning by CDs on the field dependence of the plasma frequency near $`T_c`$ is negligible.
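The order of magnitude quoted above can be reproduced by direct substitution into Eq. (19). The interlayer spacing and the in-plane penetration depth at the relevant temperatures are not stated explicitly in the text, so the values below ($`s\approx 15`$ Å for Bi-2212 and $`\lambda _{ab}\approx 4000`$ Å as a representative value near 77 K) are our assumptions, for illustration only:

```python
from math import pi, log

# CGS units throughout
phi0 = 2.07e-7        # flux quantum, G*cm^2
kB = 1.38e-16         # Boltzmann constant, erg/K
s = 15e-8             # interlayer spacing, cm (ASSUMED, typical for Bi-2212)
lam_ab = 4000e-8      # cm (ASSUMED representative value near 77 K)
lam_J = 1e-4          # cm (quoted: ~1 micron)
b = 70e-8             # CD radius, cm (quoted: ~70 Angstrom)
n_phi = 1e4 / phi0    # CD density for matching field B_phi = 1 T = 1e4 G
T = 77.0 * kB         # temperature in energy units

E0 = s * phi0**2 / (16 * pi**3 * lam_ab**2)   # pancake energy scale
ratio = (pi**2 * n_phi * b**4 / lam_J**2
         * log(lam_ab / b)**2 * (E0 / T)**3 * log(lam_J / b))
print(f"|r_wD^2| / r_wT^2 ~ {ratio:.1e}")
```

With these assumed inputs the ratio evaluates to a few times $`10^{-4}`$, consistent with the $`\sim 10^{-3}`$ smallness quoted in the text.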
It was found in Refs. that after irradiation with the matching field $`B_\varphi =\mathrm{\Phi }_0n_\varphi =1`$ T the value $`(dB_r/dT)_{B\to 0}`$ near $`T_c`$ increases by about a factor of two in comparison with that in pristine crystals. As estimated above, pinning due to CDs cannot give such a strong effect. One may think that irradiation reduces the value of the anisotropy parameter $`\gamma `$, probably due to damage of the crystal structure around the heavy-ion tracks. This assumption is consistent with recent measurements of the Josephson current in irradiated Bi-2212 mesas by Yurgens et al. It was found that irradiation approximately doubles the Josephson current at zero field.
In conclusion, we have calculated the field dependence of the JPR frequency in the single vortex regime at low magnetic fields near $`T_c`$ and demonstrated that the JPR provides a direct probe for meandering of individual lines. We have shown that the JPR data in highly anisotropic Bi-2212 crystals give evidence that at high magnetic fields $`BB_J`$ pancakes are uncorrelated along the $`c`$ axis in the vortex liquid (pancake liquid), while at lower fields, $`BB_J`$, pancakes form vortex lines (line liquid). These lines, however, strongly meander in both solid and liquid vortex states due to thermal fluctuations. We have shown also that JPR data provide evidence that irradiation by heavy ions causes a significant decrease of the effective anisotropy.
The authors thank Y. Matsuda, M. Gaifullin, T. Tamegai, and T. Shibauchi for providing their experimental data prior publication. This work was supported by the NSF Office of the Science and Technology Center under contract No. DMR-91-20000 and by the U. S. DOE, BES-Materials Sciences, under contract No. W-31-109-ENG-38. Work in Los Alamos is supported by the U. S. DOE.
no-problem/9907/cond-mat9907060.html
# CHIRAL GLASS PHASE IN CERAMIC SUPERCONDUCTORS
## I Introduction
Since the discovery of ceramic high-$`T_c`$ superconductors it has been known that they may exhibit glassy behavior reminiscent of a spin glass . Recently, by noise and ac susceptibility measurements, Leylekian et al. demonstrated that LSCO ceramics show glassy behavior even in zero external field . They also observed an intergranular cooperative phenomenon indicative of a glassy phase transition. This collective phenomenon may be interpreted in terms of the chiral glass (CG) picture using a three-dimensional lattice model of Josephson junctions with a finite self-inductance . The order parameter of the CG phase is a ”chirality” which represents the direction of the local loop supercurrent over grains . The frustration essential to realize the CG phase arises due to the random distribution of 0- and $`\pi `$-junctions with positive and negative Josephson couplings, respectively . In this paper we review our recent results obtained by Monte Carlo simulations on the nature of the ordering of the CG. One of the most important conclusions is that the screening effect does not destroy this phase in three dimensions (the vortex glass phase, which may exist only in a non-zero external field, is, in contrast, unstable under the influence of screening ). It should be noted that more direct support for the CG has been reported by the ac susceptibility and resistivity measurements on YBCO ceramic samples and by the aging effect in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> .
Another issue of the present paper is to explain the so-called compensation effect (CE) observed in some ceramic superconductors . Overall, this effect may be detected in the following way. The sample is cooled in an external dc field down to a low temperature and then the field is switched off. At the fixed low $`T`$ the second harmonics are monitored by applying dc and ac fields to the sample. Due to the presence of non-zero spontaneous orbital moments, a remanent magnetization or, equivalently, an internal field appears in the cooling process. If the direction of the external dc field is identical to that during the field-cooled (FC) procedure, the induced shielding currents will reduce the remanence. Consequently, the absolute value of the second harmonics $`|\chi _2|`$ decreases until the signal of the second harmonics is minimized at a field $`H_{dc}=H_{com}`$. Thus the CE is a phenomenon in which the external and internal fields are compensated and the second harmonics become zero.
The key observation of Heinzel et al. is that the CE appears only in the samples which show the paramagnetic Meissner effect (PME) but not in those which do not. It should be noted that the intrinsic mechanism leading to the PME is still under debate. Sigrist and Rice argued that the PME in the high-$`T_c`$ superconductors is consistent with $`d`$-wave superconductivity. On the other hand, the paramagnetic response has been seen even in the conventional Nb and Al superconductors. In order to explain the PME in terms of conventional superconductivity one can employ the idea of flux compression inside a sample. Such a phenomenon becomes possible in the presence of inhomogeneities or of the sample boundary.
In this paper we explain the CE theoretically by Monte Carlo simulations. Our starting point is the possible existence of the CG phase, in which the remanence necessary for observing the CE should occur in the cooling procedure. Such a remanence phenomenon is similar to what happens in a spin glass. In fact, in the CG phase the frustration due to the existence of 0- and $`\pi `$-junctions leads to non-zero supercurrents. The internal field (or the remanent magnetization) induced by the supercurrents in the cooling process from high temperatures to the CG phase may compensate the external dc field.
Using the three-dimensional XY model of the Josephson network with finite self-inductance, we show that in the FC regime the CE appears in the samples which show the PME. This finding agrees with the experimental data of Heinzel et al.
## II Model
We neglect the charging effects of the grains and consider the following Hamiltonian
$`ℋ=-\sum _{<ij>}J_{ij}\mathrm{cos}(\theta _i-\theta _j-A_{ij})+`$ (1)
$`\frac{1}{2ℒ}\sum _p(\mathrm{\Phi }_p-\mathrm{\Phi }_p^{ext})^2,`$ (2)
$`\mathrm{\Phi }_p=\frac{\varphi _0}{2\pi }\sum _{<ij>}^pA_{ij},A_{ij}=\frac{2\pi }{\varphi _0}\int _i^j\vec{A}(\vec{r})\cdot 𝑑\vec{r},`$ (3)
where $`\theta _i`$ is the phase of the condensate of the grain at the $`i`$-th site of a simple cubic lattice, $`\vec{A}`$ is the fluctuating gauge potential at each link of the lattice, $`\varphi _0`$ denotes the flux quantum, $`J_{ij}`$ denotes the Josephson coupling between the $`i`$-th and $`j`$-th grains, $`ℒ`$ is the self-inductance of a loop (an elementary plaquette), while the mutual inductance between different loops is neglected. The first sum is taken over all nearest-neighbor pairs and the second sum is taken over all elementary plaquettes on the lattice. Fluctuating variables to be summed over are the phase variables, $`\theta _i`$, at each site and the gauge variables, $`A_{ij}`$, at each link. $`\mathrm{\Phi }_p`$ is the total magnetic flux threading through the $`p`$-th plaquette, whereas $`\mathrm{\Phi }_p^{ext}`$ is the flux due to an external magnetic field applied along the $`z`$-direction,
$$\mathrm{\Phi }_p^{ext}=\{\begin{array}{cc}HS\hfill & \text{if }p\text{ is on the }<xy>\text{ plane}\hfill \\ 0\hfill & \text{otherwise},\hfill \end{array}$$
(4)
where $`S`$ denotes the area of an elementary plaquette. In what follows we assume $`J_{ij}`$ to be an independent random variable taking the values $`J`$ or $`-J`$ with equal probability ($`\pm J`$ or bimodal distribution), each representing 0 and $`\pi `$ junctions.
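For concreteness, the Hamiltonian (1)-(3) can be evaluated directly on a small lattice. The sketch below is ours and purely illustrative (it is not the simulation code used in this work): it sets $`J=1`$ and $`\varphi _0/2\pi =1`$, uses free boundary conditions, and treats the dimensionless self-inductance as a plain parameter. With all $`\theta _i=0`$, $`A_{ij}=0`$ and zero external flux, the energy reduces to $`-\sum _{<ij>}J_{ij}`$, which provides a simple check:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 3          # linear lattice size
Ltilde = 1.0   # dimensionless self-inductance (a model parameter)
# random +/- J bonds; J[d, x, y, z] couples site (x, y, z) to its
# neighbor in direction d (0: x, 1: y, 2: z)
J = rng.choice([-1.0, 1.0], size=(3, L, L, L))
DIRS = [np.array(e) for e in ((1, 0, 0), (0, 1, 0), (0, 0, 1))]
SITES = [(x, y, z) for x in range(L) for y in range(L) for z in range(L)]

def energy(theta, A, f_ext=0.0):
    """Hamiltonian (1)-(3) with J = 1, phi0/(2*pi) = 1, free boundaries.
    theta[x, y, z]: grain phases; A[d, x, y, z]: bond gauge variables;
    f_ext: external flux per xy plaquette (units of phi0/(2*pi))."""
    E = 0.0
    for i in SITES:                              # Josephson term
        for d in range(3):
            j = tuple(np.array(i) + DIRS[d])
            if max(j) >= L:
                continue                         # bond leaves the lattice
            E -= J[(d, *i)] * np.cos(theta[i] - theta[j] - A[(d, *i)])
    for i in SITES:                              # inductive term
        for d1, d2 in ((0, 1), (1, 2), (2, 0)):
            j1 = tuple(np.array(i) + DIRS[d1])
            j2 = tuple(np.array(i) + DIRS[d2])
            if max(j1) >= L or max(j2) >= L:
                continue                         # plaquette leaves the lattice
            # directed loop sum of A around the plaquette = flux through it
            phi = A[(d1, *i)] + A[(d2, *j1)] - A[(d1, *j2)] - A[(d2, *i)]
            ext = f_ext if (d1, d2) == (0, 1) else 0.0
            E += (phi - ext) ** 2 / (2.0 * Ltilde)
    return E

theta0 = np.zeros((L, L, L))
A0 = np.zeros((3, L, L, L))
```

A Metropolis update would then accept a proposed change of a single $`\theta _i`$ or $`A_{ij}`$ with probability min(1, exp(-ΔE/T)), with ΔE computed from this energy function.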
## III Existence of CG phase
In this section we employ the finite size scaling technique to study the nature of the ordering of the CG phase in three dimensions. The external field is set equal to zero ($`\mathrm{\Phi }_p^{ext}`$=0).
At each plaquette the local chirality is defined by the gauge-invariant quantity,
$$\kappa _p=2^{-3/2}\sum _{<ij>}^p\stackrel{~}{J}_{ij}\mathrm{sin}(\theta _i-\theta _j-A_{ij}),$$
(5)
where $`\stackrel{~}{J}_{ij}=J_{ij}/J`$ and the sum runs over a directed contour along the sides of the plaquette $`p`$. The overlap between two replicas of the chirality is
$$q_\kappa =\frac{1}{N_p}\underset{p}{}\kappa _p^{(1)}\kappa _p^{(2)},$$
(6)
where $`N_p`$ is the total number of plaquettes. In terms of this chiral overlap, the Binder ratio of the chirality is defined as follows
$$g_{CG}=\frac{1}{2}\left(3-[<q_\kappa ^4>]/[<q_\kappa ^2>]^2\right).$$
(7)
Here $`<\mathrm{}>`$ and $`[\mathrm{}]`$ represent the thermal and the configurational average, respectively. $`g_{CG}`$ is normalized so that it tends to zero above the chiral-glass transition temperature, $`T_{CG}`$, and tends to unity below $`T_{CG}`$, provided the ground state is non-degenerate.
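The normalization of $`g_{CG}`$ can be illustrated on synthetic overlap samples (a sketch with artificial data of ours, not the actual simulation): for Gaussian-distributed $`q_\kappa `$, as in the high-temperature phase, $`[<q_\kappa ^4>]=3[<q_\kappa ^2>]^2`$ and $`g_{CG}\approx 0`$, while a non-degenerate frozen state with constant $`q_\kappa `$ gives $`g_{CG}=1`$:

```python
import numpy as np

def binder(q):
    """Chiral Binder ratio of Eq. (7) for a set of overlap samples q."""
    q = np.asarray(q, dtype=float)
    return 0.5 * (3.0 - np.mean(q**4) / np.mean(q**2) ** 2)

rng = np.random.default_rng(1)
g_gauss = binder(rng.normal(size=2_000_000))  # paramagnetic-like: close to 0
g_frozen = binder(np.full(1000, 0.7))         # frozen, non-degenerate: exactly 1
```

In the actual analysis the fourth and second moments are of course thermally and disorder averaged before the ratio is taken; the synthetic data here only illustrate the two limiting values.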
Monte Carlo simulation is performed according to the replica exchange method . Equilibration is checked by monitoring the stability of the results against at least three-times longer runs for a subset of samples. Free boundary conditions are employed. Fig. 1 shows the results for the system sizes $`L=3,4,6,8`$ and 10, and for the dimensionless inductance $`\stackrel{~}{ℒ}=1`$ ($`\stackrel{~}{ℒ}=(2\pi /\varphi _0)^2Jℒ`$). The number of samples used ranges from 1500 to 100, depending on the system size $`L`$. Obviously, all of the curves of $`g_{CG}`$ for $`L=3,4,6`$ and 8 cross at almost the same temperature, strongly suggesting the occurrence of a finite-temperature CG transition at $`T_{CG}=0.286\pm 0.01`$. A more careful analysis shows that the CG phase exists if $`\stackrel{~}{ℒ}`$ is smaller than a critical value $`\stackrel{~}{ℒ}_c`$, where $`5\lesssim \stackrel{~}{ℒ}_c\lesssim 7`$. Thus, the CG phase is stable against the screening if the latter is not strong enough.
## IV Compensation effect
In order to study the CE one has to apply the external field $`H`$ which includes the dc and ac parts
$$H=H_{dc}+H_{ac}\mathrm{cos}(\omega t).$$
(8)
It should be noted that the dc field is necessary to generate even harmonics. The ac linear susceptibility of model (1) has been studied by Monte Carlo simulations. Here we go beyond our previous calculations of the linear ac susceptibility . We study the dependence of the second harmonics on the dc field. In this way, we can make a direct comparison with the CE observed in the experiments .
The dimensionless magnetization along the $`z`$-axis normalized per plaquette, $`\stackrel{~}{m}`$, is given by
$$\stackrel{~}{m}=\frac{1}{N_p\varphi _0}\sum _{p<xy>}(\mathrm{\Phi }_p-\mathrm{\Phi }_p^{ext}),$$
(9)
where the sum is taken over all $`N_p`$ plaquettes on the $`<xy>`$ plane of the lattice. The real and imaginary parts of the ac second order susceptibility $`\chi _2^{}(\omega )`$ and $`\chi _2^{\prime \prime }(\omega )`$ are calculated as
$`\chi _2^{}(\omega )`$ $`=`$ $`\frac{1}{\pi h_{ac}}\int _{-\pi }^\pi \stackrel{~}{m}(t)\mathrm{cos}(2\omega t)d(\omega t),`$ (10)
$`\chi _2^{\prime \prime }(\omega )`$ $`=`$ $`\frac{1}{\pi h_{ac}}\int _{-\pi }^\pi \stackrel{~}{m}(t)\mathrm{sin}(2\omega t)d(\omega t),`$ (11)
where $`t`$ denotes the Monte Carlo time. The dimensionless ac and dc fields $`h_{ac}`$ and $`h_{dc}`$ are defined as follows
$`h_{ac}={\displaystyle \frac{2\pi H_{ac}S}{\varphi _0}},h_{dc}={\displaystyle \frac{2\pi H_{dc}S}{\varphi _0}},`$ (12)
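Equations (10)-(11) are simply the $`2\omega `$ Fourier projections of the magnetization signal over one period. The sketch below (ours, run on a synthetic $`\stackrel{~}{m}(t)`$ rather than the Monte Carlo output) verifies that a pure second-harmonic signal is recovered exactly:

```python
import numpy as np

def second_harmonics(m, h_ac):
    """Eqs. (10)-(11): project m(t), sampled uniformly over one period
    wt in [-pi, pi), onto cos(2*wt) and sin(2*wt)."""
    n = len(m)
    wt = -np.pi + 2.0 * np.pi * np.arange(n) / n
    dwt = 2.0 * np.pi / n
    chi2p = np.sum(m * np.cos(2.0 * wt)) * dwt / (np.pi * h_ac)
    chi2pp = np.sum(m * np.sin(2.0 * wt)) * dwt / (np.pi * h_ac)
    return chi2p, chi2pp

h_ac = 0.1
wt = -np.pi + 2.0 * np.pi * np.arange(4096) / 4096
m = h_ac * (0.8 * np.cos(2.0 * wt) - 0.3 * np.sin(2.0 * wt))
chi2p, chi2pp = second_harmonics(m, h_ac)
```

Since $`\int _{-\pi }^\pi \mathrm{cos}^2(2x)dx=\pi `$, the projections return the input amplitudes directly, and $`|\chi _2|`$ follows as the quadrature sum of the two parts.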
For model (1) the PME appears clearly for $`h_{dc}\lesssim 1`$ . So the largest $`h_{dc}`$ we take is 1. On the other hand, as mentioned above, the CG phase is found to exist below a critical value of the inductance $`\stackrel{~}{ℒ}_c`$, where $`5\lesssim \stackrel{~}{ℒ}_c\lesssim 7`$. One has to choose, therefore, an $`\stackrel{~}{ℒ}`$ which is smaller than its critical value. To be sure that we are in the CG phase we choose $`\stackrel{~}{ℒ}=4`$ and $`T=0.1`$.
The second harmonics have been obtained by employing Monte Carlo simulations based on the standard Metropolis updating technique. While Monte Carlo simulations involve no real dynamics, one can still expect that they give useful information on the long-time behavior of the system. We take $`L=8`$ and $`\omega =0.001`$. The sample average is taken over 20-40 independent bond realizations. We set $`h_{ac}=0.1`$, corresponding to $`0.016`$ flux quantum per plaquette. Smaller values of $`h_{ac}`$ turned out to leave the results almost unchanged.
Fig. 2 shows the dependence of the second harmonics $`|\chi _2|`$, $`|\chi _2|=\sqrt{(\chi _2^{})^2+(\chi _2^{\prime \prime })^2}`$, on $`h_{dc}`$ in the FC regime. Our calculations follow exactly the experimental procedure of Heinzel et al. First the system is cooled in the dc field $`h_{dc}=1`$ from $`T=0.7`$ down to $`T=0.1`$, which is below the paramagnet-chiral glass transition temperature $`T_{CG}\approx 0.17`$ . The temperature step is chosen to be equal to 0.05. At each temperature, the system is evolved through 2$`\times 10^4`$ Monte Carlo steps. When the lowest temperature is reached the dc field used in cooling is switched off and we apply the combined field given by Eq. (8). Using Eqs. (10) and (11) we monitor the second harmonics (the technique for obtaining the ac susceptibility may be found in Ref. ), reducing the dc field from $`h_{dc}=1`$ to zero stepwise by an amount of $`\mathrm{\Delta }h_{dc}=0.05`$. $`|\chi _2|`$ reaches a minimum at the compensation field $`h_{com}=0.7\pm 0.05`$. At this point, similar to the experimental findings, the intersection of $`\chi _2^{}`$ and $`\chi _2^{\prime \prime }`$ is observed. This fact indicates that at $`h_{com}`$ the system is really in the compensated state. Furthermore, in accord with the experiments, at the compensation point the real and imaginary parts should change their sign. Our results show that $`\chi _2^{}`$ changes its sign roughly at $`h_{dc}=h_{com}`$. A similar behavior is also displayed by $`\chi _2^{\prime \prime }`$, but it is harder to observe due to the smaller amplitude of $`\chi _2^{\prime \prime }`$.
In conclusion we have shown that the finite-temperature CG phase is not spoiled by the screening. The CE may be explained, at least qualitatively, by using the CG picture of the ceramic superconductors. The CE is shown to appear in the CG phase in which the PME is present but not in the samples without the PME .
Financial support from the Polish agency KBN (Grant number 2P03B-025-13) is acknowledged.
# LU TP 99-19 hep-ph/9907514 I : Chiral Perturbation for Kaons II: The Δ𝐼=1/2-rule in the Chiral Limit
## 1 Introduction
Chiral Perturbation Theory (CHPT) is by now a very large subject, so I will only discuss it briefly and then review the present status of its use in semileptonic and nonleptonic kaon decays. It has had several major successes in rare decays, which are discussed in the contribution by Isidori . The application to $`K_L^0\pi ^+\pi ^{-}e^+e^{-}`$ is treated by Savage . As described in the second part and in several other talks , it is also very relevant for calculations of the nonleptonic matrix elements. In Section 2 I very briefly describe the underlying principles. The next section reviews the application to kaon semileptonic decays; this is one of the main playgrounds for CHPT and the area of some major successes. The use in kaon decays to pions is then discussed in Sect. 4. We treat the use of CHPT in simplifying matrix-element calculations in Sect. 4.1, predictions for $`K3\pi `$ in Sect. 4.2, and chiral limit cancellations in $`B_6`$ in Sect. 4.3.
Section 5 constitutes part II of this talk. Here I describe how the large $`N_c`$ method can take into account the scheme dependence of short-distance operators and first results.
## 2 Chiral Perturbation Theory
CHPT grew out of current algebra, where systematically going beyond lowest order was difficult. The use of effective Lagrangians to reproduce current algebra results was well known, and Weinberg showed how to use them for higher orders . This method was improved and systematized by Gasser and Leutwyler in the classic papers , and the proof that CHPT is indeed the low-energy limit of QCD using only general assumptions was given by Leutwyler . Recent lectures are .
The assumptions underlying CHPT are:
* Global Chiral Symmetry and its spontaneous breaking to the vector subgroup: $`SU(3)_L\times SU(3)_RSU(3)_V`$.
* The Goldstone Bosons from this spontaneous breakdown are the only relevant degrees of freedom, i.e., the only possible singularities.
* Analyticity, causality, cluster expansion and special relativity .
The result is then a systematic expansion in meson masses, quark masses, momenta and external fields. The external field method allows one to find the minimal set of parameters consistent with chiral symmetry; the rest is basically only unitarity. With current algebra and dispersive methods it is in principle also possible to obtain the same results, but the method of Effective Field Theories is much simpler.
So for any application of CHPT two questions should be answered:
1. Does the expansion in momenta and quark masses converge?
2. If higher orders are important then:
– Can we determine all the needed parameters from the data? – Can we estimate them if they are not directly obtainable?
## 3 Semileptonic decays
The application of CHPT to semileptonic decays has been reviewed in and . Since then, first results at order $`p^6`$ have appeared. The situation order by order is:
* 2 parameters: $`F_0`$, $`B_0`$ (+quark masses).
* : 10+2 parameters, of which 7 are relevant; 3 more appear in the meson masses. In addition we also have the Wess-Zumino term and one-loop contributions.
* : 90+4 parameters . In addition there are two-loop diagrams and one-loop diagrams with $`_4`$ vertices.
### 3.1 General Situation
$`𝒑^\mathrm{𝟐}`$ Current Algebra: sixties
$`𝒑^\mathrm{𝟒}`$ One-loop: 80’s, early 90’s
$`𝒑^\mathrm{𝟔}`$ – Estimates using dispersive methods and/or models: “done” . – Double log contributions: mostly done . – Two-flavour full calculations: done. – Three-flavour full calculations: few done, several in progress.
$`𝒆^\mathrm{𝟐}𝒑^\mathrm{𝟐}`$ In progress.
Experiment: progress from DAPHNE, NA48, BNL, KTeV, …
### 3.2 $`K_{l2}`$
These decays are used to determine $`F_K`$ and to test lepton universality by comparing $`K\mu \nu `$ and $`Ke\nu `$. $`F_\pi `$ is similarly determined from $`\pi \mu \nu `$. The theory is now known fully to NNLO in CHPT (for $`F_\pi `$ also ). The results are shown in Table 1 when the contributions from the $`p^6`$ Lagrangian are set to zero, i.e. $`C_i^r=0`$, at the scale indicated. The numbers in brackets are the extended double log approximation of . The inputs are $`10^3L_{i=4,10}=(0.3,1.4,0.2,0.9,6.9,5.5)`$ and $`\mu =0.77`$ GeV unless otherwise indicated; set A uses $`10^3L_{i=1,3}=(0.4,1.35,3.5)`$, while set B corresponds to a different determination of $`10^3L_{i=1,3}`$.
We see that the variation with the $`p^4`$ input is sizable and that the extended double log approximation gives a reasonable first estimate for the correction.
### 3.3 $`K_{l2\gamma }`$
In this decay there are two form factors. The axial form factor is known to $`p^4`$ , and a similar calculation for $`\pi e\nu \gamma `$ shows a 25% correction and a small dependence on the lepton invariant mass $`W^2`$. The vector form factor is known to $`p^6`$ and has a 10 to 20% correction in the relevant phase space. The main interest in these decays is that they allow one to test the anomaly, including its sign, as well as the $`V-A`$ structure of the weak interactions.
### 3.4 $`K_{l2ll}`$
In these decays there are three vector form factors and one axial form factor. The vector ones are known to $`p^4`$ and the axial one to $`p^6`$ . Especially the decays with $`e\nu _e`$ in the final state are strongly enhanced over Bremsstrahlung. Since my last review there is a new limit from BNL E787 of $`B(K^+e^+\nu \mu ^+\mu ^{-})<5\times 10^{-7}`$. All data are in good agreement with CHPT.
### 3.5 $`K_{l3}`$
These decays, $`K^{+,0}\pi ^{0,-}\mathrm{\ell }^+\nu `$, are our main source of knowledge of the CKM element $`V_{us}`$. It is therefore important to have as precise predictions as possible. The form factors
$$\pi (p^{\prime })|V_\mu ^{4-i5}|K(p)=\frac{1}{\sqrt{2}}\left[(p+p^{\prime })_\mu f_+(t)+(p-p^{\prime })_\mu f_{-}(t)\right]$$
(1)
are usually parametrized by $`f_+(t)\simeq f_+(0)\left[1+\lambda _+t/m_\pi ^2\right]`$ and $`f_0(t)\equiv f_+(t)+tf_{-}(t)/(m_K^2-m_\pi ^2)\simeq f_+(0)\left[1+\lambda _0t/m_\pi ^2\right]`$.
The CHPT calculation at order $`p^4`$ fits these parametrizations well. The agreement with data is quite good, except for the scalar slope, where there is disagreement between different experiments. The extended double log calculation has small quadratic slopes, $`\lambda _+^{}`$ and $`\lambda _0^{}`$, and small corrections to the linear slopes. This, as shown in Table 2, is good news for improving the precision of $`V_{us}`$. $`f_+`$ is shown for $`K^0\pi ^{-}e^+\nu `$, where isospin breaking is smallest.
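As a small numerical illustration of these parametrizations, the sketch below evaluates $`f_+(t)`$ and $`f_0(t)`$ for illustrative slope values (the slopes and normalization are placeholders, not the fitted numbers of Table 2) and recovers $`f_{-}(t)`$ from the definition of $`f_0`$:

```python
# Illustrative K_l3 form-factor sketch; the slopes lam_plus, lam_zero and
# the normalization f0 are placeholders, not the fitted values of Table 2.
M_PI = 0.13957  # GeV, charged-pion mass
M_K = 0.49368   # GeV, charged-kaon mass

def f_plus(t, f0=1.0, lam_plus=0.03):
    # f_+(t) ~ f_+(0) [1 + lambda_+ t / m_pi^2]
    return f0 * (1.0 + lam_plus * t / M_PI**2)

def f_zero(t, f0=1.0, lam_zero=0.02):
    # f_0(t) ~ f_+(0) [1 + lambda_0 t / m_pi^2]
    return f0 * (1.0 + lam_zero * t / M_PI**2)

def f_minus(t, f0=1.0, lam_plus=0.03, lam_zero=0.02):
    # recover f_-(t) from f_0(t) = f_+(t) + t f_-(t) / (m_K^2 - m_pi^2)
    if t == 0.0:
        raise ValueError("f_- is not fixed by f_0 and f_+ at t = 0")
    return (f_zero(t, f0, lam_zero) - f_plus(t, f0, lam_plus)) \
        * (M_K**2 - M_PI**2) / t
```

At $`t=0`$ both parametrizations reduce to $`f_+(0)`$, as they must by the definition of $`f_0`$.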
### 3.6 $`K_{l3\gamma }`$
These decays have been calculated in CHPT to $`p^4`$ in . There are 10 form factors, and after a complicated interplay between all the various terms the final corrections to tree level are small, even though individual form factors have large corrections. E.g., first adding tree level, then $`p^4`$ tree level and finally $`p^4`$ loop level contributions changes $`B(K_{e3\gamma }^+)`$ with $`E_\gamma \ge 30`$ MeV and $`\theta _{\mathrm{\ell }\gamma }\ge 20^o`$ from $`2.8\times 10^{-4}`$ via $`3.2\times 10^{-4}`$ to $`3.0\times 10^{-4}`$. Notice that $`F_K/F_\pi =1.22`$, so agreement with tree level at the 10% level is a good test of CHPT at order $`p^4`$.
Recent new results of $`B(K_{e3\gamma }^0)=(3.61\pm 0.14\pm 0.21)\times 10^{-3}`$ (NA31) and $`B(K_{\mu 3\gamma }^0)=(0.56\pm 0.05\pm 0.05)\times 10^{-3}`$ (NA48) are in good agreement with the theory results of $`(3.6,4.0,3.8)\times 10^{-3}`$ and $`(0.52,0.59,0.56)\times 10^{-3}`$, respectively. The three numbers correspond to the contributions included as above.
### 3.7 $`K_{l4}`$
In these decays, $`K\pi \pi \mathrm{\ell }\nu `$, there are four form factors, $`F,G,H,R`$, as defined in . The $`R`$ form factor can only be measured in $`K_{\mu 4}`$ decays and is known to $`p^4`$ . $`F`$ and $`G`$ were calculated to $`p^4`$ in and improved using dispersion relations in . The main data come from ($`K\pi ^+\pi ^{-}e^+\nu `$) and ($`K_L\pi ^\pm \pi ^0e^{\mp }\nu `$). The form factors were parametrized as $`X=X(0)(1+\lambda (s_{\pi \pi }/(4m_\pi ^2)-1))`$, with the same slope for $`X=F,G,H`$. $`H(0)=2.7\pm 0.7`$ is a test of the anomaly in both sign and magnitude, see and references therein. The other numbers are the main input for $`L_1^r`$, $`L_2^r`$ and $`L_3^r`$. In Table 3 I show the tree level results, which expression, the $`p^4`$ one or the dispersively improved one, was used to determine the $`L_{1,2,3}^r`$ of sets A and B given in Sect. 3.2, and the extended double log estimate of $`p^6`$ . The results of the latter show similar patterns as the dispersive improvement. The full $`p^6`$ calculation is in progress, and if the results are as indicated by the extended double log approximation, a refitting of the $`p^4`$ constants will be necessary. This is important since in these decays and in pionium decays the $`\pi \pi `$ phase shifts will be measured accurately, and their main theory uncertainty is the values of these constants. A useful parametrization to determine these phases from $`K_{l4}`$ can be found in , as well as further relevant references.
## 4 Nonleptonic decays
For rare decays see ; here only $`K0,\pi ,\pi \pi ,\pi \pi \pi `$ are discussed. The lowest order Lagrangian contains three terms with parameters $`G_8`$, $`G_{27}`$ and $`G_8^{\prime }`$ in the notation of . The term with $`G_8^{\prime }`$, the weak mass term, contributes to processes with photons at lowest order and otherwise at NLO. The NLO Lagrangian contains about 30 parameters for the octet representation of $`SU(3)_L`$, denoted by $`E_i`$, and for the twenty-seven-plet, denoted by $`D_i`$ .
### 4.1 $`K\to \pi `$, $`K\to 0`$ $`\Rightarrow `$ $`K\to \pi \pi `$
As shown in , the method of can be extended to $`p^4`$ using well defined off-shell Green functions of pseudo-scalar currents. Except for one $`E_i`$ and one $`D_i`$, all the necessary ones can be obtained from $`K\pi `$ transitions (using $`K0`$ allows one to obtain two more constants than given in ). For $`K\pi \pi `$ at order $`p^4`$, 7 $`E_i`$ and 6 $`D_i`$ contribute in addition to the three couplings of lowest order. Of these 16 constants we can determine 14 from the much simpler $`K`$ to $`\pi `$ and vacuum transitions. This thus allows a more stringent test of various models than possible from on-shell $`K\pi \pi `$ alone. Models like factorization etc. will probably be needed in the foreseeable future to go to $`K3\pi `$ and various rare decays.
### 4.2 CHPT for $`K\pi \pi `$ and $`K\pi \pi \pi `$
These decays were calculated to $`p^4`$ , relations between them were clarified in , and some $`p^6`$ estimates for them were performed in .
The main problem is to find experimental relations after all parameters are counted. To order $`p^2`$ we have 2(1) parameters and to $`p^4`$ 7(3). The number in brackets refers to the $`\mathrm{\Delta }I=1/2`$ observables only. As observables (after using isospin) we have 2(1) $`K\pi \pi `$ rates and 2(1)(+1) $`K\pi \pi \pi `$ rates. We have 3(1)(+3) linear and 5(1)(+5) quadratic slopes. The (+i) indicates the phases, in principle also measurable and predicted, but not counted here. 12 observables and 7 parameters leave five relations to be tested. The fits and results are shown in Table 4, where we have also indicated which quantities are related. See for definitions and references. The new CPLEAR data improve the precision slightly. $`K\pi \pi `$ rates are always input.
It is important to test these relations directly; the agreement at present is satisfactory but the errors are large.
CP-violation in $`K3\pi `$ will be very difficult to detect. The strong phases needed for the interference are very small, see and references therein. E.g., $`\delta _2-\delta _1`$ in $`K_L\pi ^+\pi ^{-}\pi ^0`$ is predicted to be $`0.083`$, while the experimental result is only $`0.33\pm 0.29`$. Asymmetries are expected to be of order $`10^{-6}`$, so we can only expect to improve limits in the near future.
### 4.3 $`B_6`$ in the chiral limit
In the usual definitions of $`B_i`$ factors in nonleptonic decays
$$B_6\frac{\text{out}|Q_6|\text{in}}{\text{out}|Q_6|\text{in}_{\text{factorized}}}$$
(2)
the denominator needs to be well defined. This is not true for $`B_6`$ in the chiral limit: the factorizable denominator contains the scalar radius, which is infinite in the full chiral limit. This can be seen in the CHPT calculation.
$$G_8|_{Q_6\text{fact}}=\frac{80C_6(\mu )B_0^2(\mu )}{3F_0^2}\left[L_5^r(\nu )-\frac{3}{256\pi ^2}\{2\mathrm{ln}\frac{m_L}{\nu }+1\}\right]$$
(3)
Here $`\nu `$ is the CHPT scale and $`m_L`$ the meson mass; one sees that $`G_8|_{Q_6\text{fact}}\to \mathrm{\infty }`$ for $`m_L\to 0`$.
The nonfactorizable part has precisely the same divergence, so that in the sum it cancels. Thus, when calculating $`B_6`$, care must be taken to compute the factorizable and nonfactorizable parts consistently, so that this cancellation, which is required by chiral symmetry, takes place and does not inflate the final results.
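The chiral-limit divergence of the factorizable part, Eq. (3), is easy to exhibit numerically. The sketch below evaluates the bracket of Eq. (3) for decreasing meson mass $`m_L`$; the value used for $`L_5^r`$ is an illustrative placeholder, not a fitted constant:

```python
import math

def g8_fact_bracket(m_L, nu=0.77, L5r=1.4e-3):
    # Bracket of Eq. (3): L_5^r(nu) - 3/(256 pi^2) * (2 ln(m_L/nu) + 1).
    # L5r is an illustrative placeholder value, not a fitted constant.
    return L5r - 3.0 / (256.0 * math.pi**2) * (2.0 * math.log(m_L / nu) + 1.0)

# The bracket, and with it G_8|fact, grows without bound as m_L -> 0.
values = [g8_fact_bracket(m) for m in (0.1, 0.01, 0.001)]
```

The logarithm makes the bracket grow monotonically as $`m_L`$ is lowered, which is exactly the divergence that the nonfactorizable part must cancel.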
## 5 The $`X`$-boson method and the $`\mathrm{\Delta }I=1/2`$ rule in the chiral limit.
In this section I briefly describe how, in the context of the large $`N_c`$ method and after the improvements of the momentum routing , the scheme dependence can also be taken into account, and present first results. Other relevant references to the problem of nonleptonic matrix elements are .
The basic underlying idea is that we have more experience in hadronizing currents. We therefore replace the effect of the local operators of $`H_W(\mu )=\sum _iC_i(\mu )Q_i(\mu )`$ at a scale $`\mu `$ by the exchange of a series of colourless $`X`$-bosons at a low scale $`\mu `$. Let me illustrate the procedure in the simpler case of only one operator, neglecting penguin contributions. In the more general case all coefficients become matrices.
$$C_1(\mu )(\overline{s}_L\gamma _\mu d_L)(\overline{u}_L\gamma ^\mu u_L)X_\mu \left[g_1(\overline{s}_L\gamma ^\mu d_L)+g_2(\overline{u}_L\gamma ^\mu u_L)\right].$$
(4)
Colour indices inside brackets are summed over. To determine $`g_1`$, $`g_2`$ as a function of $`C_1`$ we set matrix elements of $`C_1Q_1`$ equal to the equivalent ones of $`X`$-boson exchange. This must be done at a $`\mu `$ such that perturbative QCD methods can still be used and thus we can use external states of quarks and gluons. To lowest order this is simple. The tree level diagram
from Fig. 1(a) is set equal to that of Fig. 1(b), leading to $`C_1=g_1g_2/M_X^2`$. At NLO, diagrams like Fig. 1(c) and 1(d) contribute as well, leading to
$$C_1\left(1+\alpha _S(\mu )r_1\right)=\frac{g_1g_2}{M_X^2}\left(1+\alpha _S(\mu )a_1+\alpha _S(\mu )b_1\mathrm{log}\frac{M_X^2}{\mu ^2}\right).$$
(5)
The left-hand-side (lhs) is scheme-independent. The right-hand-side can be calculated in a very different renormalization scheme from the lhs. The infrared dependence of $`r_1`$ is present in precisely the same way in $`a_1`$ such that $`g_1`$ and $`g_2`$ are scheme-independent and independent of the precise infrared definition of the external state in Fig. 1.
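A minimal numerical sketch of the matching condition, Eq. (5), solved for the combination $`g_1g_2/M_X^2`$; the values of $`r_1`$, $`a_1`$ and $`b_1`$ used below are placeholders, not the actual NLO coefficients:

```python
import math

def coupling_product(C1, alpha_s, M_X, mu, r1, a1, b1):
    # Solve Eq. (5) for g1*g2/M_X^2:
    #   C1 (1 + alpha_s r1) = (g1 g2 / M_X^2)
    #                         * (1 + alpha_s a1 + alpha_s b1 ln(M_X^2/mu^2))
    # r1, a1, b1 are placeholder values here, not the real coefficients.
    lhs = C1 * (1.0 + alpha_s * r1)
    rhs_factor = 1.0 + alpha_s * a1 + alpha_s * b1 * math.log(M_X**2 / mu**2)
    return lhs / rhs_factor
```

At lowest order ($`\alpha _S=0`$) this reduces to the tree-level matching $`C_1=g_1g_2/M_X^2`$, and for $`r_1=a_1`$ with $`M_X=\mu `$ the correction factors cancel.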
One step remains: to calculate the matrix element of $`X`$-boson exchange between meson external states. We split the integral over the $`X`$-boson momentum in two:
$$\int _0^{\infty }𝑑p_X\frac{1}{p_X^2-M_X^2}=\int _0^{\mu _1}𝑑p_X\frac{1}{p_X^2-M_X^2}+\int _{\mu _1}^{\infty }𝑑p_X\frac{1}{p_X^2-M_X^2}.$$
(6)
The second term involves a high momentum that needs to flow back through quarks or gluons and leads through diagrams like the one of Fig. 1(c) to a four quark-operator with a coefficient
$$\frac{g_1g_2}{M_X^2}\left(\alpha _S(\mu _1)a_2+\alpha _S(\mu _1)b_1\mathrm{log}\frac{M_X^2}{\mu ^2}\right).$$
(7)
The four-quark operator needs to be evaluated only at leading order in $`1/N_c`$. The first term in (6) has to be evaluated in a low-energy model with as much QCD input as possible. The $`\mu _1`$ dependence cancels between the two terms in (6) if the low-energy model is good enough. The coefficients $`r_1`$, $`a_1`$ and $`a_2`$ give the correction to the factorization used in previous $`1/N_c`$ calculations.
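The split of Eq. (6) can be checked with a toy numerical example. A Euclidean-type propagator $`1/(p_X^2+M_X^2)`$ and a finite upper cutoff are assumed here purely for illustration, to avoid the pole of the Minkowski expression; the scales are arbitrary:

```python
import math

def midpoint_quad(f, a, b, n=20000):
    # simple midpoint-rule quadrature
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

M_X, mu1, cutoff = 10.0, 2.0, 50.0  # illustrative scales only
prop = lambda p: 1.0 / (p * p + M_X * M_X)  # Euclidean form, assumed

low = midpoint_quad(prop, 0.0, mu1)      # long-distance piece: low-energy model
high = midpoint_quad(prop, mu1, cutoff)  # short-distance piece: perturbative QCD
total = low + high
# analytic value of the full (cutoff) integral: arctan(cutoff/M_X)/M_X
exact = math.atan(cutoff / M_X) / M_X
```

The sum of the two pieces reproduces the full integral for any choice of $`\mu _1`$, which is the sense in which the $`\mu _1`$ dependence must cancel between them.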
It should be stressed that in the end all dependence on $`M_X`$ cancels out. The $`X`$-boson is a purely technical device to correctly identify the four-quark operators in terms of well-defined products of nonlocal currents.
### 5.1 Numerical results
We now use the $`X`$-boson method with $`r_1`$ as given in and $`a_1=a_2=0`$ (the calculation of the latter is in progress) and $`\mu =\mu _1`$. For $`B_K`$ we can extrapolate to the pole, both for the real case ($`\widehat{B}_K`$) and in the chiral limit ($`\widehat{B}_K^\chi `$). For $`K\pi \pi `$ we can get at the values of the octet ($`G_8`$), weak mass term ($`G_8^{\prime }`$) and 27-plet ($`G_{27}`$) couplings. We obtain $`\widehat{B}_K^\chi =0.25\text{--}0.4`$;
$$\widehat{B}_K=0.69\pm 0.10;G_8=4.3\text{--}7.5;G_{27}=0.25\text{--}0.40\text{ and }G_8^{\prime }=0.8\text{--}1.1.$$
(8)
The experimental values are $`G_8\approx 6.2`$ and $`G_{27}\approx 0.48`$ .
In Fig. 3 the $`\mu `$ dependence of $`G_8`$ is shown, together with the contributions from the various different operators. If we look inside the numbers, we see that $`B_6`$, defined with only the large $`N_c`$ term in the factorizable part, is about 2 to 2.2 for $`\mu `$ from 0.6 to 1.0 GeV.
## 6 Conclusions
CHPT is doing fine in kaon decays, especially in the semileptonic sector, where several calculations at $`p^6`$ are now in progress. In the nonleptonic sector it provides several relations for $`K3\pi `$ decays. Testing these is important since it tells us how well $`p^4`$ works in this sector. CHPT can also help in simplifying, and in identifying potentially dangerous parts of, the calculations of nonleptonic matrix elements.
The large $`N_c`$ method allows one to include the scheme dependence appearing in short-distance operators, and when all long-distance constraints from CHPT and some other input are then used, encouraging results are obtained for $`K\pi \pi `$ decays in the chiral limit.
# Cranked Relativistic Hartree-Bogoliubov Theory: Superdeformed Bands in the 𝐴∼190 Region
\[
## Abstract
Cranked Relativistic Hartree-Bogoliubov (CRHB) theory is presented as an extension of Relativistic Mean Field theory with pairing correlations to the rotating frame. Pairing correlations are taken into account by a finite range two-body force of Gogny type, and approximate particle number projection is performed by the Lipkin-Nogami method. This theory is applied to the description of yrast superdeformed rotational bands observed in even-even nuclei of the $`A\sim 190`$ mass region. Using the well established parameter sets NL1 for the Lagrangian and D1S for the pairing force, one obtains a very successful description of data such as kinematic ($`J^{(1)}`$) and dynamic ($`J^{(2)}`$) moments of inertia without any adjustment of new parameters. Within the present experimental accuracy the calculated transition quadrupole moments $`Q_t`$ agree reasonably well with the observed data.
\]
The investigation of superdeformation in different mass regions remains in the focus of low-energy nuclear physics. Experimental data on superdeformed rotational (SD) bands are now available in different parts of the periodic table, namely, in the $`A\sim 60`$ , 80, 130, 150 and 190 mass regions. This richness of data provides the necessary input for a test of different theoretical models and the underlying effective interactions at superdeformation. Cranked relativistic mean field (CRMF) theory, developed in Refs. , represents one such theory. It has been applied in a systematic way to the description of SD bands observed in the $`A\sim 60`$ and $`A\sim 150`$ mass regions. The pairing correlations in these bands are considerably quenched, and at high rotational frequencies a very good description of experimental data is obtained in the unpaired formalism in most cases, as shown in Refs. .
On the contrary, pairing correlations have a considerable impact on the properties of SD bands observed in the $`A\sim 190`$ mass region and, more generally, on rotational bands at low spin. Different theoretical mean field methods have been applied to the study of SD bands in this mass region: the cranked Nilsson-Strutinsky approach based on a Woods-Saxon potential , and self-consistent cranked Hartree-Fock-Bogoliubov approaches based either on Skyrme or on Gogny forces . It was shown in different theoretical models that in order to describe the experimental data on moments of inertia one should go beyond the mean field approximation and deal with fluctuations in the pairing correlations using particle number projection. This is typically done in an approximate way by the Lipkin-Nogami method . With the exception of approaches based on Gogny forces, special care should also be taken with the form of the pairing interaction. For example, quadrupole pairing has been used in addition to monopole pairing in the cranked Nilsson-Strutinsky approach . A similar approach to pairing has also been used in the projected shell model . Density dependent pairing has been used in connection with Skyrme forces . These require, however, the adjustment of additional parameters to the experimental data.
Cranked Relativistic Hartree-Bogoliubov (CRHB) theory presented in this article is an extension of cranked relativistic mean field (CRMF) theory to the description of pairing correlations in rotating nuclei. A brief outline of this theory and its application to the study of several yrast SD bands observed in even-even nuclei of the $`A190`$ region with neutron numbers $`N=110,112,114`$ is presented below while more details (both of the theory and the calculations) will be given in a forthcoming publication.
The theory describes the nucleus as a system of Dirac nucleons which interact in a relativistic covariant manner through the exchange of virtual mesons: the isoscalar scalar $`\sigma `$ meson, the isoscalar vector $`\omega `$ meson, and the isovector vector $`\rho `$ meson. The photon field $`(A)`$ accounts for the electromagnetic interaction.
The CRHB equations for the fermions in the rotating frame are given in one-dimensional cranking approximation by
$$\left(\begin{array}{cc}h-\mathrm{\Omega }_x\widehat{J}_x& \widehat{\mathrm{\Delta }}\\ -\widehat{\mathrm{\Delta }}^{*}& -h^{*}+\mathrm{\Omega }_x\widehat{J}_x^{*}\end{array}\right)\left(\begin{array}{c}U_k\\ V_k\end{array}\right)=E_k\left(\begin{array}{c}U_k\\ V_k\end{array}\right)$$
(1)
where $`h=h_D-\lambda `$ is the single-nucleon Dirac Hamiltonian minus the chemical potential $`\lambda `$, and $`\widehat{\mathrm{\Delta }}`$ is the pairing potential. $`\widehat{J_x}`$ and $`\mathrm{\Omega }_x`$ are the projection of the total angular momentum on the rotation axis and the rotational frequency, respectively. $`U_k`$ and $`V_k`$ are quasiparticle Dirac spinors and $`E_k`$ denote the quasiparticle energies. The variational principle leads to time-independent inhomogeneous Klein-Gordon equations for the mesonic fields in the rotating frame
$`\left\{-\mathrm{\Delta }-(\mathrm{\Omega }_x\widehat{L}_x)^2+m_\sigma ^2\right\}\sigma (𝒓)`$ $`=`$ $`-g_\sigma \rho _s(𝒓)`$ (2)
$`-g_2\sigma ^2(𝒓)-g_3\sigma ^3(𝒓)`$ (3)
$`\left\{-\mathrm{\Delta }-(\mathrm{\Omega }_x\widehat{L}_x)^2+m_\omega ^2\right\}\omega _0(𝒓)`$ $`=`$ $`g_\omega \rho _v^{is}(𝒓)`$ (4)
$`\left\{-\mathrm{\Delta }-(\mathrm{\Omega }_x(\widehat{L}_x+\widehat{S}_x))^2+m_\omega ^2\right\}𝝎(𝒓)`$ $`=`$ $`g_\omega 𝒋^{is}(𝒓)`$ (5)
$`\left\{-\mathrm{\Delta }-(\mathrm{\Omega }_x\widehat{L}_x)^2+m_\rho ^2\right\}\rho _0(𝒓)`$ $`=`$ $`g_\rho \rho _v^{iv}(𝒓)`$ (6)
$`\left\{-\mathrm{\Delta }-(\mathrm{\Omega }_x(\widehat{L}_x+\widehat{S}_x))^2+m_\rho ^2\right\}𝝆(𝒓)`$ $`=`$ $`g_\rho 𝒋^{iv}(𝒓)`$ (7)
$`-\mathrm{\Delta }A_0(𝒓)`$ $`=`$ $`e\rho _v^p(𝒓)`$ (8)
$`-\mathrm{\Delta }𝑨(𝒓)`$ $`=`$ $`e𝒋^p(𝒓)`$ (9)
where the source terms are sums of bilinear products of baryon amplitudes
$`\rho _s(𝒓)`$ $`=`$ $`{\displaystyle \sum _{k>0}}(V_k^n(𝒓))^{\dagger }\widehat{\beta }V_k^n(𝒓)+(V_k^p(𝒓))^{\dagger }\widehat{\beta }V_k^p(𝒓)`$ (10)
$`\rho _v^{is}(𝒓)`$ $`=`$ $`{\displaystyle \sum _{k>0}}(V_k^n(𝒓))^{\dagger }V_k^n(𝒓)+(V_k^p(𝒓))^{\dagger }V_k^p(𝒓)`$ (11)
$`\rho _v^{iv}(𝒓)`$ $`=`$ $`{\displaystyle \sum _{k>0}}(V_k^n(𝒓))^{\dagger }V_k^n(𝒓)-(V_k^p(𝒓))^{\dagger }V_k^p(𝒓)`$ (12)
$`𝒋^{is}(𝒓)`$ $`=`$ $`{\displaystyle \sum _{k>0}}(V_k^n(𝒓))^{\dagger }\widehat{𝜶}V_k^n(𝒓)+(V_k^p(𝒓))^{\dagger }\widehat{𝜶}V_k^p(𝒓)`$ (13)
$`𝒋^{iv}(𝒓)`$ $`=`$ $`{\displaystyle \sum _{k>0}}(V_k^n(𝒓))^{\dagger }\widehat{𝜶}V_k^n(𝒓)-(V_k^p(𝒓))^{\dagger }\widehat{𝜶}V_k^p(𝒓)`$ (14)
The sums over $`k>0`$ run over all quasiparticle states corresponding to positive energy single-particle states (no-sea approximation). In Eqs. (9,14), the indexes $`n`$ and $`p`$ indicate neutron and proton states, respectively, and the indexes $`is`$ and $`iv`$ are used for isoscalar and isovector quantities. $`\rho _v^p(𝒓)`$, $`𝒋^p(𝒓)`$ in Eq. (9) correspond to $`\rho _v^{is}(𝒓)`$ and $`𝒋^{is}(𝒓)`$ defined in Eq. (14), respectively, but with the sums over neutron states neglected.
The spatial components of the vector mesons give origin to a magnetic potential $`𝑽(𝒓)`$ which breaks time-reversal symmetry and removes the degeneracy between nucleonic states related via this symmetry . This effect is commonly referred to as nuclear magnetism . It is very important for a proper description of the moments of inertia . Consequently, the spatial components of the vector mesons $`\omega `$ and $`\rho `$ are taken into account in a fully self-consistent way. Since the coupling constant of the electromagnetic interaction is small compared with the coupling constants of the meson fields, the Coriolis term for the Coulomb potential $`A_0(𝒓)`$ and the spatial components of the vector potential $`𝑨(𝒓)`$ are neglected.
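The block structure of the quasiparticle eigenproblem (1) can be illustrated with a toy model. The sketch below builds a two-level analogue with placeholder matrices (an antisymmetric pairing field, as appropriate for fermions) and diagonalizes it; the $`\pm E_k`$ symmetry of the quasiparticle spectrum is then manifest. The numbers are purely illustrative, not realistic Dirac blocks:

```python
import numpy as np

def crhb_toy_matrix(h, jx, delta, omega_x):
    # Two-level analogue of the block structure of Eq. (1):
    #   [[ h - Omega_x Jx ,        Delta        ],
    #    [   -Delta^*     , -h^* + Omega_x Jx^* ]]
    a = h - omega_x * jx
    return np.block([[a, delta],
                     [-delta.conj(), -a.conj()]])

h = np.diag([0.5, -0.5])                     # epsilon_k - lambda (placeholders)
jx = np.array([[0.0, 0.3], [0.3, 0.0]])      # toy angular-momentum operator
delta = np.array([[0.0, 0.8], [-0.8, 0.0]])  # antisymmetric pairing field
H = crhb_toy_matrix(h, jx, delta, omega_x=0.2)
E = np.sort(np.linalg.eigvalsh(H))
# the quasiparticle energies come in +/- pairs
```

The cranking term $`\mathrm{\Omega }_x\widehat{J}_x`$ enters the two blocks with opposite sign, which is what eventually favors the alignment of high-$`j`$ pairs as the rotational frequency grows.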
In the present version of CRHB theory, pairing correlations are only considered between the baryons, because pairing is a genuine non-relativistic effect, which plays a role only in the vicinity of the Fermi surface. The phenomenological Gogny-type finite range interaction
$`V^{pp}(1,2)`$ $`=`$ $`{\displaystyle \sum _{i=1,2}}e^{-[(𝒓_1-𝒓_2)/\mu _i]^2}`$ (16)
$`\times (W_i+B_iP^\sigma -H_iP^\tau -M_iP^\sigma P^\tau )`$
with the parameters $`\mu _i`$, $`W_i`$, $`B_i`$, $`H_i`$ and $`M_i`$ $`(i=1,2)`$ is employed in the $`pp`$ (pairing) channel. The parameter set D1S has been used in the present calculations. This procedure requires no cutoff and provides a very reliable description of pairing properties in finite nuclei. In conjunction with relativistic mean field theory such an approach to the description of pairing correlations has been applied, for example, in the study of ground state properties , neutron halos , and deformed proton emitters . In the present approach we go beyond the mean field and perform an approximate particle number projection before the variation by means of the Lipkin-Nogami method . As illustrated in Fig. 1, this feature is extremely important for a proper description of the moments of inertia.
The present calculations have been performed with the NL1 parametrization of the relativistic mean field Lagrangian. The CRHB-equations are solved in the basis of an anisotropic three-dimensional harmonic oscillator in Cartesian coordinates. A basis deformation of $`\beta _0=0.5`$ has been used. All fermionic and bosonic states belonging to the shells up to $`N_F=14`$ and $`N_B=16`$ are taken into account in the diagonalisation and the matrix inversion, respectively. This truncation scheme provides reasonable numerical accuracy. For example, the increase of the fermionic basis up to $`N_F=17`$ changes the values of the kinematic moment of inertia $`J^{(1)}`$ and the transition quadrupole moment $`Q_t`$ by less than 1%. The numerical errors for the total energy are even smaller.
The yrast SD bands in <sup>194</sup>Pb and <sup>194</sup>Hg are linked to the low-spin level scheme . In addition, there is a tentative linking of the SD band in <sup>192</sup>Pb . These data provide an opportunity to compare with experiment in a direct way not only the calculated dynamic ($`J^{(2)}`$) but also the kinematic ($`J^{(1)}`$) moments of inertia. On the contrary, at present the yrast SD bands in <sup>190,192</sup>Hg and <sup>196</sup>Pb are not yet linked to the low-spin level scheme. Thus spin values consistent with the signature of the calculated yrast SD configuration have to be assumed for the experimental bands when a comparison is made with respect to the kinematic moment of inertia $`J^{(1)}`$.
The results of such a comparison are shown in Figs. 1, 2 and 3. The theoretical $`J^{(1)}`$ values agree well with the experimental ones in the cases of the linked SD bands in <sup>194</sup>Pb and <sup>194</sup>Hg and the tentatively linked SD band in <sup>192</sup>Pb. The comparison of theoretical and experimental $`J^{(1)}`$ values (see Figs. 2 and 3) indicates that the lowest transitions in the yrast SD bands of <sup>190</sup>Hg, <sup>192</sup>Hg and <sup>196</sup>Pb, with energies 316.9, 214.4 and 171.5 keV, respectively, most likely correspond to the spin changes $`14^+\to 12^+`$, $`10^+\to 8^+`$ and $`8^+\to 6^+`$. If these spin values are assumed, good agreement between theory and experiment is observed. Calculated and experimental values of the dynamic moment of inertia $`J^{(2)}`$ also agree well, see Figs. 1, 2 and 3.
The increase of the kinematic and dynamic moments of inertia in this mass region can be understood in the framework of CRHB theory as emerging predominantly from a combination of three effects: the gradual alignment of a pair of $`j_{15/2}`$ neutrons, the alignment of a pair of $`i_{13/2}`$ protons at a somewhat higher frequency, and decreasing pairing correlations with increasing rotational frequency. The interplay of the alignments of neutron and proton pairs is most clearly seen in the Pb isotopes, where the calculated $`J^{(2)}`$ values show either a small peak (for example, at $`\mathrm{\Omega }_x\approx 0.45`$ MeV in <sup>192</sup>Pb, see Fig. 2) or a plateau (at $`\mathrm{\Omega }_x\approx 0.4`$ MeV in <sup>196</sup>Pb, see Fig. 2). With increasing rotational frequency, the $`J^{(2)}`$ values determined by the alignment in the neutron subsystem decrease, but this process is compensated by the increase of $`J^{(2)}`$ due to the alignment of the $`i_{13/2}`$ proton pair. This leads to the increase of the total $`J^{(2)}`$ value at $`\mathrm{\Omega }_x\approx 0.45`$ MeV. The shape of the peak (plateau) in $`J^{(2)}`$ is determined by a delicate balance between the alignments in the proton and neutron subsystems, which depends on deformation, rotational frequency and Fermi energy. For example, no increase in the total dynamic moment of inertia $`J^{(2)}`$ has been found in the calculations after the peak up to $`\mathrm{\Omega }_x=0.5`$ MeV in <sup>192</sup>Hg, see Fig. 3. It is also of interest to mention that the sharp increase in $`J^{(2)}`$ of the yrast SD band in <sup>190</sup>Hg is reproduced in the present calculations as well. One should note that the calculations slightly overestimate the magnitude of $`J^{(2)}`$ at the highest observed frequencies. The possible reasons could be deficiencies either of the Lipkin-Nogami method or of the cranking model in the band crossing region, or of both.
The comparison between calculated and experimental absolute transition quadrupole moments $`Q_t`$ is less straightforward, because the uncertainties in the absolute measured $`Q_t`$ values arising from the uncertainties in stopping powers can be as large as 15% . Thus the comparison of $`Q_t`$ values obtained in different experiments should be performed with some caution, since systematic errors due to different stopping powers may be responsible for the observed differences. In addition, as illustrated in Fig. 4, the experimental $`Q_t`$ values depend somewhat on the type of analysis (centroid shift or line shape) used when these quantities are extracted from the data.
The results of the CRHB calculations are compared with the most recent experimental data in Fig. 4. One can conclude that the calculated absolute values of $`Q_t`$ would be within the ‘full’ error bars if the 15% uncertainty due to stopping powers were taken into account (the experimental data shown in Fig. 4 do not include these uncertainties). For the sake of simplicity we will not take these uncertainties into account in the subsequent discussion and will concentrate mainly on the experimental data obtained with the same stopping powers. In Fig. 4 such data are indicated by the same capital letters. While the calculated $`Q_t`$ values are close to the experimental values obtained with centroid shift and line shape analyses for the Pb isotopes, most of the experimental $`Q_t`$ values (with the exception of exp. B) are overestimated in the calculations in the case of the Hg isotopes. One should note that the most recent experimental data on <sup>192</sup>Hg are contradictory, since two experiments (exp. A and exp. B ) give very different values of $`Q_t`$, see Fig. 4. Clearly, measurements of relative transition quadrupole moments between SD bands in Pb and Hg isotopes using the same stopping powers, which are not available at present, are needed to find out whether this discrepancy between calculations and experiment is due to an inadequate theoretical description or to the experimental problems quoted above. In the calculations, the relative average quadrupole moments $`\mathrm{\Delta }Q_t`$ between the yrast SD bands of Pb and Hg isotopes decrease with increasing neutron number $`N`$ ($`\mathrm{\Delta }Q_t\approx 1.6`$ $`e`$b, $`1.4`$ $`e`$b and $`1.06`$ $`e`$b for $`N=110,112`$ and 114, see Fig. 4).
The calculations indicate a general trend of decreasing average $`Q_t`$ values with increasing neutron number $`N`$ for both the Pb and Hg isotopes. The results of the centroid shift analysis for <sup>194,196</sup>Pb (exp. C) indicate a slight decrease of the $`Q_t`$ values with increasing $`N`$, consistent with the theoretical results. Although the data on <sup>192,194</sup>Hg (exp. A) indicate similar values of $`Q_t`$, in slight contradiction with the theoretical results, definite conclusions are not possible at present due to the large error bars. In addition, with increasing rotational frequency $`\mathrm{\Omega }_x`$ the calculated $`Q_t`$ values show an initial slight increase followed by a subsequent decrease. In the case of <sup>190</sup>Hg this feature is hidden by the band crossing. The maximum $`Q_t`$ values within specific configurations are calculated at different frequencies $`\mathrm{\Omega }_x`$ as a function of the neutron number $`N`$: with increasing $`N`$ the maximum $`Q_t`$ is reached at lower frequencies. Similar variations of $`Q_t`$ have also been observed in cranked Hartree-Fock calculations with Skyrme forces . Dedicated experiments aimed at measuring the variation of the transition quadrupole moments $`Q_t`$ as a function of rotational frequency $`\mathrm{\Omega }_x`$ are needed in order to confirm or reject these results.
In conclusion, the Cranked Relativistic Hartree-Bogoliubov theory has been developed and applied to the description of the yrast SD bands observed in the $`A\approx 190`$ mass region. With an approximate particle number projection performed by the Lipkin-Nogami method, the rotational features of the experimental bands, such as the kinematic and dynamic moments of inertia, are very well described by the calculations. The calculated transition quadrupole moments $`Q_t`$ are close to the measured ones; however, more accurate and consistent experimental data on $`Q_t`$ are needed in order to make detailed comparisons between experiment and theory.
A.V.A. acknowledges support from the Alexander von Humboldt Foundation. This work is also supported in part by the Bundesministerium für Bildung und Forschung under the project 06 TM 875.
## 1 INTRODUCTION
The Mask Manufacturing Unit (MMU) is dedicated to the off-line manufacturing, identification and preparation of the slit masks for both the VIMOS and NIRMOS instruments. The MMU includes 2 sub-units: 1) the Mask Manufacturing Machine (MMM), dedicated to the machining of slits in thin sheets (masks), and 2) the Mask Handling System (MHS), dedicated to the handling of the masks, up to the loading into the Instrument Cabinets (IC). Fig. 1 shows the VIMOS focal plane mask reference system.
## 2 MMU REQUIREMENTS
### 2.1 Masks
R1) - Roughness of the slit edges.
The VIRMOS Technical Specifications required $`<`$ 5 $`\mu `$m peak to peak, regardless of the slit width. This specification has been translated into the following quantities measurable by means of a mechanical roughness meter equipped with a knife-type probe:
1. maximum number of deviations from the mean $`>`$ $`\pm `$ 2.5 $`\mu `$m in 1 cm: 2 (parameter Pc)
2. r.m.s. as measured by the Rq parameter: $`\le `$ 2 $`\mu `$m
3. profile shape as measured by the waviness parameter Wt: $`\le `$ 3 $`\mu `$m.
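These acceptance criteria map onto simple computations on a sampled edge profile. A minimal sketch (our own illustration, not the Talysurf acquisition software; the run-counting definition of Pc used here is a simplification of the standard peak-count rule) of how Pc and Rq could be evaluated from height samples in $`\mu `$m over the 1 cm evaluation length:

```python
def roughness_metrics(profile_um):
    """Evaluate the slit-edge criteria on a sampled height profile.

    profile_um: heights in micrometres, assumed uniformly sampled over
    the 1 cm evaluation length. Returns (pc, rq): the number of excursions
    beyond the +/-2.5 um band around the mean (one count per contiguous
    run, a simplified Pc), and the r.m.s. deviation Rq.
    """
    n = len(profile_um)
    mean = sum(profile_um) / n
    dev = [h - mean for h in profile_um]
    pc, in_excursion = 0, False
    for d in dev:
        if abs(d) > 2.5:
            if not in_excursion:
                pc += 1          # new excursion beyond the band
            in_excursion = True
        else:
            in_excursion = False
    rq = (sum(d * d for d in dev) / n) ** 0.5
    return pc, rq
```

Wt would be obtained analogously, as the peak-to-valley height of the low-pass filtered (waviness) profile.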
R2) - Time necessary to manufacture a mask.
The required cutting speed is $`>`$ 7 m/hr. This specification is given to the MMM manufacturer as: $`\ge `$ 5 mm/sec (18 m/hr) with the quality of the cut as specified above, and $`\ge `$ 30 mm/sec (108 m/hr) without maintaining the quality of the cut needed for the slits.
R3) - Global slit positioning accuracy.
The requirement is $`<`$ 30 $`\mu `$m, including mask positioning at the focal plane and temperature variations between fabrication and operation.
R4) - Unique and automatic mask identification.
Each mask must be uniquely identified for the time in which it can be used (until it is discarded).
R5) - Mask surfaces.
The mask surfaces must have the lowest possible reflectivity at the operating wavelengths.
### 2.2 Instrument Cabinets
R6) - Each instrument has 4 ICs (4 quadrants). Each IC can hold 15 masks and has a specific mechanical interface allowing it to be inserted in only one position on the instrument. The final design of the ICs is not yet defined at the time of writing. A remotely controlled device moves the masks from the ICs to the focal plane, and back.
### 2.3 Storage Cabinets
R7) - The manufactured masks must be temporarily stored in 2 Storage Cabinets (one for each instrument) while waiting for insertion in the ICs or for discarding. Each SC is required to contain 400 masks (100 mask-sets).
## 3 CHOICE OF THE CUTTING TECHNOLOGY
### 3.1 Short history
In the initial concept, the MMM was a milling machine that would cut the slits in a 0.1 mm thin brass sheet supported by an aluminium frame. We assembled a small milling machine with a 3-axis displacement system and a high-speed mandrel (up to 80000 rpm). The minimum obtainable slit width was 300 $`\mu `$m. Because of the frames, the ICs were large and quite heavy. Furthermore, the accuracy of the slit positioning was hampered by the composition of errors due to the machine positioning accuracy, the interface error between the mask frame and the machine working platform, and that between the mask frame and the focal plane. In addition, because of the thermal expansion of brass it was difficult to meet the specification on positioning accuracy, given the temperature differences between the time a mask is manufactured and the time it is used at the instrument focal plane. Subsequent developments aimed at minimising the sources of errors by using thicker (but still $`<`$ 0.3 mm) unframed aluminium masks, reducing the size and weight of the IC, and eliminating the manpower needed to open the frames, remove the brass sheet and insert a new one. The next natural step was the use of a material with a very low thermal expansion coefficient, such as carbon fibre, kevlar, graphite or Invar, but the required slit edge quality could not be obtained with the milling machine. Only recently has a new type of laser cutting machine, called a Stencil Laser, become available on the market: these machines, and one in particular, proved able to meet the specifications using 0.2 mm thick Invar sheets. The cutting speed made it possible to cut the mask contour on the machine itself, so the contour can be customised to the quadrant interface where the mask will be placed; as a further bonus, any slit width $`>`$ 100 $`\mu `$m became possible.
### 3.2 Milling vs laser cutting
For the milling technique the case of unframed aluminium masks has been considered for the comparison. A summary is shown here.
Material
Milling: Aluminium (Anticorodal 100), thickness 0.3 mm, Therm. Exp. Coeff.: 23 $`\mu `$m/m/°C.
Laser: Invar (Pernifer 36), thickness 0.2 mm, Therm. Exp. Coeff.: 0.8 $`\mu `$m/m/°C.
Coating
Milling: black anodization.
Laser: black antireflection paint.
Slit characteristics
Milling: Width: from 300 to 1000 $`\mu `$m in steps of 100 $`\mu `$m. Intermediate widths require two passes.
Laser: Width: any width $`>`$ 100 $`\mu `$m.
Slit edge quality
Milling: The requested specifications can be reached, but depend strongly on the material, the diameter of the cutting tool, the cutting speed, and the raw mask fixing system. The optimisation and control of these parameters are quite delicate over a long period of time. Quality control of the slit edge must be done very frequently.
Laser: The requested specifications can be reached but depend on the manufacturers. Quality control can be scheduled weekly.
Cutting of the mask contour
Milling: The mask contour cutting is difficult. Masks with pre-shaped contours are needed (cannot be customised).
Laser: The mask contour can be cut to customise the mechanical interface of the masks to the focal plane assembly.
Cutting machine components
Milling: The machine must be equipped with a high-frequency mandrel, an automatic tool exchange mechanism, a tool checking system, a working platform with a lubricating liquid tank, and a cleaning unit to remove cutting oil. We have not found milling machines on the market that completely meet our requirements; customisation is always necessary.
Laser: Laser machines meeting our requirements can be found on the market with integrated control systems.
Mask production rate
Milling: about 15 minutes for the whole cycle. This performance is the lower limit of our test machine and depends on the cutting tool diameter.
Laser: about 10 minutes including cutting the contour of the mask.
Purchase Cost
Milling: The estimated cost of the components for a home designed milling machine plus cleaning unit is about 300 kDM. Engineering costs must be added.
Laser: The cost of a stencil laser cutting machine ranges from 530 (Lumonics) to 630 (LPKF) kDM.
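The positioning-accuracy argument for Invar (requirement R3) can be made quantitative with the quoted expansion coefficients. A back-of-the-envelope sketch; the 10 °C manufacture-to-operation temperature difference is our illustrative assumption, not a figure from the specification:

```python
def thermal_shift_um(alpha_um_per_m_degC, length_mm, delta_T_degC):
    """Shift (in micrometres) of a feature `length_mm` from the mask
    reference point, for a uniform temperature change delta_T_degC."""
    return alpha_um_per_m_degC * (length_mm / 1000.0) * delta_T_degC

# Coefficients quoted above; a slit near the far edge of the 305 mm mask;
# 10 degC between manufacturing and operation (illustrative assumption).
shift_al = thermal_shift_um(23.0, 305.0, 10.0)    # aluminium: ~70 um
shift_invar = thermal_shift_um(0.8, 305.0, 10.0)  # Invar: ~2.4 um
```

With aluminium the thermal term alone can consume the whole 30 $`\mu `$m budget of R3, while Invar leaves it essentially untouched.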
### 3.3 Choice of the Laser cutting machine manufacturer
A statement of work has been sent to 22 laser cutting machine manufacturers. 11 companies answered our enquiry, 8 of which expressed their interest and 6 requested Invar samples to try out their product. The most critical parameter to measure on the test samples was the roughness of the slit edges. A first qualitative evaluation was always done using a microscope at 50 $`\times `$ to 500 $`\times `$ magnification, to check whether
$``$ the slit cuts show a regular pattern (ripple) or a random noisy profile
$``$ the laser cutting has left some residual or re-melted material
$``$ the black coating has been damaged
$``$ the nominal slit width and shape have been respected
A quantitative evaluation of the profile roughness has been done using a mechanical roughness meter. The results of the measurements performed on the laser cutting samples provided by the manufacturers are summarised in Table 1. The tests from LPKF and Lumonics were done with proprietary complete laser cutting machines, while the other manufacturers used their laser heads with unspecified motion systems.
Table 1 - Slit edge roughness measurements
A high value of Pc means that the cutting edge has a residual ripple. In the table we report two rows for the Lumonics tests (about 50 cutting samples in 4 successive tests with different cutting parameters), since we noticed that the quality of the slit edges was not constant and that areas with a quasi-sinusoidal ripple were almost always present. The effort made, together with Lumonics staff, to overcome the problem was not successful. This means that the Lumonics machine would need a tuning of the cutting parameters to improve its performance, which, so far, is critical with respect to the specifications. The only machine that completely fulfilled our requirements was the one from LPKF, Garbsen, Germany.
## 4 THE ADOPTED SOLUTIONS
### 4.1 Mask material
The mask material is Invar, with thickness 0.2 mm and dimensions 305 $`\times `$ 305 mm. The main mechanical and thermal characteristics of Invar (the Krupp VDM trade name is Pernifer 36) at 20 °C are listed in Table 2.
Table 2 - Invar characteristics
### 4.2 Mask black coating
The mask manufacturing includes, as the last operation, the cutting of the external border; the raw Invar sheets must therefore have larger dimensions, to allow for the mechanical fixing on the working platform of the laser cutting machine. A 340 by 450 mm sheet is presently adopted. The raw masks must be:
$``$ coated with a black anti-reflection paint
$``$ cut to the proper size
$``$ protected against scratches
$``$ packed to be shipped to Paranal.
The requested characteristics for the coating are:
$``$ thickness $`<`$ 20 $`\mu `$m
$``$ good adhesion to the metallic substrate
$``$ dull black color
$``$ uniformity of the coating over the 2 surfaces.
The yearly mask requirement for VIRMOS is about 2400. The aim of our work was to find a method to prepare a large quantity of raw masks at the lowest cost. The quotation for the 0.2 mm thick Invar (from Krupp) is approximately 25 DM/kg for quantities of at least 1000 kg. A 340 $`\times `$ 450 mm Invar foil weighs 0.250 kg, so from 1000 kg of material about 4000 masks can be obtained. The cost of the material for a single mask is then about 6.3 DM.
The Invar material is delivered by Krupp in rolls of the requested width; the possible ways to produce the raw masks are to cut the strip into foils and then varnish them, or to varnish the strip before cutting. We have tested both possibilities. The second method is the cheapest, and has been tested using a strip of stainless steel 450 mm wide. The process consists of:
$``$ chemical (alkaline bath) and mechanical (brushing) cleaning
$``$ coating of the two sides of the strip, using a roller system
$``$ warm curing of the varnish
$``$ insertion of a low adhesion plastic protective film
$``$ straightening to eliminate the roll curvature
$``$ cutting to the requested dimensions
$``$ piling of the raw masks on a transport pallet.
The performed test produced about 600 raw masks with good results in terms of quality and adhesion of the coating, but the first and last parts of the strip must be discarded, so the Invar cost for a single mask rises to 7 DM. The cost of the whole coating process is about 6 DM per foil, so the total cost of a raw mask is 13 DM.
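The per-mask cost bookkeeping above can be summarised in a few lines (all figures in DM, taken from the text):

```python
# Raw-mask cost bookkeeping, figures in DM as quoted in the text.
invar_price_per_kg = 25.0      # Krupp quotation for >= 1000 kg
foil_mass_kg = 0.250           # one 340 x 450 mm, 0.2 mm thick foil
masks_per_1000_kg = 1000.0 / foil_mass_kg                  # 4000 foils
material_cost_ideal = invar_price_per_kg * foil_mass_kg    # 6.25 ~ 6.3 DM
material_cost_real = 7.0       # strip ends discarded in the coating run
coating_cost = 6.0             # per foil
raw_mask_cost = material_cost_real + coating_cost          # 13 DM per raw mask
```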
### 4.3 Mask coding
The chosen solution for mask identification is the direct cutting of a 6-digit bar code on a border of the mask, using the 2/5 interleaved code. It can be read by decoders during all the operations of the Mask Handling System.
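For reference, a generic Interleaved 2 of 5 encoder is sketched below — this is not the actual MMM tool-path code, and the mapping of narrow/wide elements to physical cut widths is left open. Digits are encoded in pairs: the first digit of each pair defines the five bars, the second the five interleaved spaces, and each digit has exactly two wide elements out of five:

```python
# Interleaved 2 of 5 digit patterns: five elements each, '1' marks a wide one.
_ITF = {"0": "00110", "1": "10001", "2": "01001", "3": "11000",
        "4": "00101", "5": "10100", "6": "01100", "7": "00011",
        "8": "10010", "9": "01010"}

def itf_elements(code):
    """Element sequence (kind, width) for an even-length digit string.

    kind is 'bar' or 'space'; width is 'n' (narrow) or 'W' (wide).
    The first digit of each pair gives the five bars, the second digit
    the five interleaved spaces.
    """
    if len(code) % 2 or not code.isdigit():
        raise ValueError("Interleaved 2 of 5 needs an even number of digits")
    out = [("bar", "n"), ("space", "n"), ("bar", "n"), ("space", "n")]  # start
    for i in range(0, len(code), 2):
        bars, spaces = _ITF[code[i]], _ITF[code[i + 1]]
        for b, s in zip(bars, spaces):
            out.append(("bar", "W" if b == "1" else "n"))
            out.append(("space", "W" if s == "1" else "n"))
    out += [("bar", "W"), ("space", "n"), ("bar", "n")]                # stop
    return out
```

A 6-digit code therefore yields 37 bar/space elements, including the start and stop patterns.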
### 4.4 The mask manufacturing machine
The technical characteristics of the LPKF StencilLaser System 600 x 600, which has been chosen as the most appropriate for our purposes, can be found on the LPKF web site www.lpkf.de/laser_en/laser_en.htm.
## 5 THE MASK HANDLING SYSTEM
### 5.1 Current configuration
The overall hardware and software configuration of the MMU is depicted in the block diagram in Fig.2 and includes the following components:
$``$ LPKF Stencil Laser machine model 600 x 600; ((a) in figure) with additional components supplied by LPKF
$``$ control electronics rack (b)
$``$ water cooling unit (c)
$``$ Maximator pressure duplicator from 8 to 16 bar (d)
$``$ vacuum extractor (e)
$``$ associated piping and cabling
$``$ dryer/filters (f) (additional element, not supplied by LPKF)
$``$ 2 serial interface cards (n) hosted in MMCU computer
$``$ BoardMaster software (o) (running on MMCU)
$``$ CircuitCam software (p) (running on MHCU)
$``$ Mask Manufacturing Control Unit (MMCU) computer (m)
$``$ running LPKF BoardMaster software (o)
$``$ Storage Cabinets (SC, holding 4 $`\times `$ 100 masks) (g), built in house, with Datalogic DS2100 bar code reader (h) connected to serial port of MHCU
$``$ IC robot unit (i), under development, with Datalogic DS2100 bar code reader (j) connected to serial port of MHCU
$``$ hosting Instrument Cabinets (ICs) exchanged with the instrument focal plane ; each IC has 15 numbered mask slots
$``$ Mask Handling Control Unit (MHCU) computer (r)
$``$ running the Mask Handling Software (q) developed in house
$``$ with slaved LPKF CircuitCam software (p)
$``$ bar code support software
$``$ Taylor - Hobson Talysurf roughness meter (k)
$``$ with serial connection to spare computer
$``$ Spare computer (s)
$``$ with roughness meter acquisition software
$``$ with Microsoft Visual Basic development environment (t) used for MHS.
All computers are identical Dell Optiplex GX1 machines running Windows NT 4.0 Workstation, with 64 MB RAM and a 3$`\times `$2 GB disk. They are configured identically (with the exception of the serial card connections), so that each of them can be used as a Line Replaceable Unit for all functions. In particular, the spare computer (currently used as the development environment) will be kept in cold redundancy, and only occasionally used offline with the roughness meter to perform quality checks on the manufactured masks. The functions of the Mask Handling Software developed in house, and its interaction with the LPKF-supplied software modules, are described below.
### 5.2 Mask movement scheme
The masks can be moved/relocated exclusively as shown in Figure 3. The movements between parts of the MMU system are controlled by the indicated MHS software functions (store, load, unload, discard). The movements inside the instrument will be controlled by OS software functions. The exchange of entire ICs back and forth between the instrument and MMU buildings will be a manual operation.
### 5.3 Mask data files flow
Paper (a) describes the function of the VIMOS and NIRMOS Mask Preparation Software (MPS) as a front end to the MHS, and outlines the concept of Orders and Reports used to regulate the flow between OHS and MPS. There is a one-to-one correspondence between such Orders and Reports (OHS-MPS layer) and Jobs and Termination reports (MPS-MHS layer). For each Order sent by OHS to MPS, MPS sends a Job to MHS; MHS sends a Termination report to MPS, which uses it to generate a Report to OHS.
Note that MPS is responsible for associating with each Observing Block (OB) a mask set, i.e. 4 masks identified by a unique 5-digit identifier. The 6-digit barcode is composed by prepending a 1-digit quadrant identifier (1-4 for VIMOS and 5-8 for NIRMOS) to the mask code. MHS only knows about mask sets, and knows nothing about OBs.
A Job is an ASCII file with a list of mask identifiers, and a Termination report is a similar ASCII file associating an array of status codes to each mask. In addition to Jobs and Termination reports, there are other types of files exchanged between MPS and MHS.
$``$ A series of Machine Slit Files (MSFs) generated by MPS and associated to a Mask Manufacturing Job. They are described in paper (a)
$``$ Storage Cabinet Table files (SCT) are maintained by MHS and list all the masks currently stored in the SCs.
$``$ Instrument Cabinet Table files (ICT) are maintained by MHS and list which masks are currently loaded in the numbered slots of each IC.
The exchange of files between MPS and MHS occurs, for security reasons, exclusively via ftp sessions initiated by the MPS side. MPS puts Jobs (and any associated MSFs) into an instrument staging area on MHCU. MHS moves the files being processed, and places back Termination reports and a copy of the ICT and SCT in the same area, from which MPS gets them. Further file types are used internally in MHS, as described below.
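Since Jobs and Termination reports are plain ASCII, their handling reduces to a few lines of parsing and formatting. A sketch under assumed layouts — the exact file formats are not given here, so we assume one mask identifier per line in a Job, and one "identifier status" pair per line in a Termination report, with hypothetical status codes:

```python
def read_job(text):
    """Parse a Job file: assumed one mask(-set) identifier per non-empty line."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def termination_report(mask_ids, status_of):
    """Build a Termination report: one '<identifier> <status>' line per mask.

    status_of maps identifier -> status code (hypothetical codes such as
    'OK' or 'FAILED'); masks with no recorded status are flagged UNKNOWN.
    """
    return "\n".join(
        f"{mid} {status_of.get(mid, 'UNKNOWN')}" for mid in mask_ids)
```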
### 5.4 Description of the MMU cycles
In the following we describe the procedures used during the typical lifetime of a mask set required for spectroscopic observations. Different (simplified) procedures may apply to masks required for instrument maintenance.
#### 5.4.1 Manufacturing and storage cycle
$``$ In response to a Mask Manufacturing Order sent from OHS to MPS, MPS translates it into a Mask Manufacturing Job (MMJ) for MHS, and supplies an ASCII Machine Slit File (MSFs) for each mask
$``$ MHS convert function in turn:
$``$ converts all MSFs to the CAD industrial standard Gerber format
$``$ runs LPKF CircuitCam to convert Gerber files into proprietary binary format (LMD)
$``$ moves LMD files for entire mask sets to the MMCU disk.
$``$ Only complete mask sets (all four quadrants successfully converted) are considered for manufacturing.
$``$ Operator (on MMCU) uses the LPKF BoardMaster program to manufacture one mask at a time.
$``$ Masks are manufactured and stored 4 by 4 into an intermediate repository, to prevent storage of incomplete mask sets.
$``$ Operator (on MHCU) uses MHS store function to identify and store all masks of a mask set in the Storage Cabinet. MHS store function updates the SCT (Storage Cabinet Table) and generates a Mask Manufacturing Termination report (MMT) for MPS.
$``$ MPS translates the MMT into a Mask Manufacturing Report for OHS.
#### 5.4.2 Loading and unloading cycle
$``$ Some time later (at least one night in advance of the observation) OHS issues a Mask Insertion Order to MPS, and MPS translates it into a Mask Insertion Job (MIJ) for MHS.
$``$ Instrument Cabinets (ICs) are physically moved from instrument to IC robot
$``$ Operator on MHCU runs the MHS unload function, which arranges for any mask not in the MIJ to be unloaded from the IC and put back in the SC, while leaving in the IC any mask already there and requested by the MIJ.
$``$ Operator on MHCU runs the MHS load function, which arranges to load from the SC into the IC any mask in the MIJ not already loaded, and generates a Mask Insertion Termination report (MIT) for MPS. Both functions also update the ICT and SCT as appropriate.
$``$ MPS translates the MIT into a Mask Insertion Report for OHS.
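The unload/load pair amounts to set differences between the current IC contents and the MIJ. A sketch with hypothetical function and variable names (not the actual MHS code):

```python
def plan_ic_update(ic_masks, mij_masks, ic_capacity=15):
    """Given the masks currently in an Instrument Cabinet and those requested
    by a Mask Insertion Job, return (to_unload, to_load, kept).

    Masks already in the IC and still requested stay in place; the others
    go back to the Storage Cabinet; missing ones are loaded from it.
    """
    ic, mij = set(ic_masks), set(mij_masks)
    if len(mij) > ic_capacity:
        raise ValueError("MIJ exceeds IC capacity (15 slots)")
    to_unload = sorted(ic - mij)   # in IC but no longer requested -> back to SC
    to_load = sorted(mij - ic)     # requested but not yet in IC -> from SC
    kept = sorted(ic & mij)        # already in place, left untouched
    return to_unload, to_load, kept
```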
#### 5.4.3 Discarding cycle
$``$ Some time later OHS issues a Mask Discard Order to MPS for OBs which have either been successfully executed or have expired and MPS translates it into a Mask Discarding Job (MDJ) for MHS.
$``$ Operator on MHCU runs the MHS discard function, which arranges for masks to be removed from the SC and the associated LMD files to be deleted from MMCU (where they had been kept until this time to allow reproduction in case of damage; after this, such masks can no longer be manufactured), and generates a Mask Discard Termination (MDT) report for MPS.
$``$ MPS translates the MDT into a Mask Discard Report for OHS: as a result the relevant OBs are marked as no longer schedulable.
# Radiation Recoil from Highly Distorted Black Holes
## I Introduction
Many popular models of active galactic nuclei, quasars and even archetypical galaxies rely on the relativistic influence of black holes on the surrounding environment to provide power sources for observed spectral emissions and inferred motions of gaseous or stellar material. However, due to the strong gravitational effects of black holes, their role in the evolution of galactic cores and quasars is uncertain since numerical relativity computations are needed to perform detailed investigations of the near–field regime. In particular, gravitational waves generated from sufficiently asymmetric systems (such as collapsing stellar cores and coalescing black holes) can carry a nonzero linear momentum component and impart a recoil velocity to the emitting objects, dominantly from the interplay between mass–quadrupole and mass–octupole or mass quadrupole and current–quadrupole contributions . These velocities would be astrophysically significant if they were large enough to eject the emitting objects from the center of the host galaxy and send them hurtling through intergalactic space. Because the efficiency of momentum radiation emission is not known precisely, the dynamics and stability of systems containing black hole engines remain important but unresolved issues. If radiation reaction effects are significant, they may have considerable observable consequences for astrophysics and cosmology, including the redistribution and depletion of black holes from host galaxies, the disruption of active galactic core energetics, the introduction of black holes and stellar material into the intergalactic medium, and the general formation and structure attributes of galaxies.
Although approximation studies of radiation recoil have been performed for more than two decades now, results from these calculations based on quasi–Newtonian and relativistic perturbation formalisms present an uncertain picture due to their incomplete treatment. More recently, Anninos and Brandt have numerically computed the recoil effect from fully general relativistic head–on collisions of two unequal mass black holes with time–symmetric initial data, and have shown that recoil velocities are of order 10 – 20 km/sec for black holes with moderately large initial separations ($`\stackrel{>}{}10M`$, where $`M`$ is the mass of the larger black hole). Their results are in rough agreement with, and generally confirm, various estimates from perturbation calculations.
Here we continue to explore the radiation reaction process by computing the energies and recoil velocities from single black holes distorted by axisymmetric gravitational (Brill ) waves. We extend previous Brill wave + black hole investigations by relaxing the equatorial mirror symmetry imposed in earlier work, thus allowing for mixtures of consecutive (even/odd) multipole contributions to the emitted radiation. In addition to investigating radiation reactions in this new class of physical systems, our implementation of Brill waves allows for very highly distorted black holes which can be thought of as models for the late stage behavior of binary coalescing black holes. In fact, we have been able to simulate single black hole distortions , as characterized by the ratio of polar to equatorial circumferences of apparent horizons, that are significantly greater than what we have observed in the merged state of two colliding black holes . Although the horizon distortion is not the only factor influencing recoil efficiency, one might nevertheless expect to obtain some idea, or perhaps even an absolute upper limit, of the recoil magnitude during the late stages of binary interactions by investigating strongly distorted single black hole systems.
We generalize the prescription developed in references (and summarized for convenience in §II) to specify equatorially asymmetric initial data and to parameterize Brill wave perturbations of Schwarzschild black holes by the amplitude, shape, location, and spectral mixture of the even and odd $`\mathrm{}`$–modes. Results from numerical evolutions are presented in §III, where we show embeddings of the black hole apparent horizons, energies emitted in the most dominant quasinormal modes of the final black hole, and recoil velocities arising from the mixing of consecutive radiative modes. The computations are carried out for both even and odd parity distortions of black holes, and over a wide range of wave strengths, initial placements, and mode distributions. We conclude in §IV.
## II Initial Data
For even parity distortions, we utilize the conformally flat approach of Bowen & York to solve the initial value problem in axisymmetry and write the spatial 3–metric at the initial time as
$$dl^2=\mathrm{\Psi }^4\left[e^{2(qq_0)}\left(d\eta ^2+d\theta ^2\right)+\mathrm{sin}^2\theta d\varphi ^2\right],$$
(1)
where $`\mathrm{\Psi }(\eta ,\theta )`$ is the conformal factor, $`q(\eta ,\theta )`$ is a function subject to certain constraints in its form but is otherwise freely specifiable, $`q_0(\eta ,\theta )`$ is chosen so that the Kerr metric is recovered if $`q=0`$ and the appropriate extrinsic curvature is specified, $`\eta `$ is a logarithmic radial coordinate centered on the black hole throat, and $`(\theta ,\varphi )`$ are the usual angular coordinates. The more general 3–metric (applicable to both even and odd parity perturbations) is of the form
$$\gamma _{ij}=\mathrm{\Psi }^4\left[\begin{array}{ccc}A(\eta ,\theta )& 0& 0\\ 0& B(\eta ,\theta )& F(\eta ,\theta )\mathrm{sin}\theta \\ 0& F(\eta ,\theta )\mathrm{sin}\theta & D(\eta ,\theta )\mathrm{sin}^2\theta \end{array}\right],$$
(2)
with $`F=0`$ ($`0`$) for the even (odd) parity cases. The remaining metric components ($`\gamma _{\eta \theta }`$ and $`\gamma _{\eta \varphi }`$) are set to zero by the gauge freedom in choosing the shift vector.
As in Ref. , the somewhat arbitrary function $`q`$ is restricted by symmetry conditions on the throat and axis, and fall–off rates at large radii . The function $`q`$ is constructed to have an inversion symmetric Gaussian part given by
$$q=Q_0f(\theta )\left(e^{s_+}+e^s_{}\right)+q_1,$$
(3)
where
$$s_\pm =\frac{(\eta \pm \eta _0)^2}{\sigma ^2},$$
(4)
and $`q_1=0`$ or $`q_0`$ for perturbations of the stationary Kerr solution or the Bowen & York spacetime respectively. With this form, the Brill waves are characterized by their amplitude $`Q_0`$, width $`\sigma `$, center coordinate location $`\eta _0`$, and their angular dependence $`f(\theta )`$. This allows a convenient way to parameterize the strength, shape and placement of the waves, and to easily tune the wave data for a broad range of spectral mode mixtures.
In previous work, we had considered the case $`f=\mathrm{sin}^n\theta `$ which possesses mirror symmetry across the equator. However, this data does not allow for the emitted gravitational waves to carry any linear momentum, as it excludes odd multipole components. Here, we relax the constraint of equatorial symmetry and consider
$$f(n,\xi ,\theta )=(1\xi +\xi \mathrm{cos}\theta )\mathrm{sin}^n\theta ,$$
(5)
for which $`q`$ in (3) obeys the isometry conditions ($`\eta \eta `$, $`\theta \theta `$, and $`\theta 2\pi \theta `$). The form of (5) also has the necessary property $`f(0)=f(\pi )=0`$, and regulates the even and odd mode power distributions through the parameters $`n`$ and $`\xi `$. The parameter $`\xi `$ determines the asymmetry of the wave, and the relative excitation of the odd and even numbered, even parity $`\mathrm{}`$ modes. When $`\xi =0`$ and $`n=2`$, $`\mathrm{}=2`$ is the dominant mode; when $`\xi =1`$ and $`n=2`$, $`\mathrm{}=3`$ is the dominant mode. For some intermediate value of $`\xi `$ there will be a roughly even distribution of energy between the $`\mathrm{}=2`$ and $`\mathrm{}=3`$ modes, and at this value the gravity waves will produce a maximum recoil velocity on the black hole, as demonstrated in §III. The initial value problem for the even parity cases is completed by solving the Hamiltonian constraint for the conformal factor $`\mathrm{\Psi }`$ in the metric (1) with the specified free data. We also impose time symmetry, hence the extrinsic curvature is set to zero and the momentum constraint is trivially satisfied. This implies that the Brill wave packet is a combination of ingoing and outgoing radiation.
For odd parity distortions, the free data is specified in the only nontrivial momentum constraint equation arising from the $`\varphi `$ component in “time–rotation” symmetry and maximal slicing ($`trK=0`$). Defining the initial extrinsic curvature as
$$K_{ij}=\mathrm{\Psi }^2\left[\begin{array}{ccc}0& 0& \widehat{H}_E\mathrm{sin}^2\theta \\ 0& 0& \widehat{H}_F\mathrm{sin}\theta \\ \widehat{H}_E\mathrm{sin}^2\theta & \widehat{H}_F\mathrm{sin}\theta & 0\end{array}\right],$$
(6)
the momentum constraint reduces to
$$_\eta (\widehat{H}_E\mathrm{sin}^3\theta )+_\theta (\widehat{H}_F\mathrm{sin}^2\theta )=0,$$
(7)
and is satisfied by
$`\widehat{H}_E`$ $`=`$ $`f_1(\theta )+f_2(\eta )\left[4\mathrm{cos}\theta f_3(\theta )+\mathrm{sin}\theta _\theta f_3(\theta )\right],`$ (8)
$`\widehat{H}_F`$ $`=`$ $`-_\eta f_2(\eta )\mathrm{sin}^2\theta f_3(\theta ),`$ (9)
where $`f_1`$, $`f_2`$ and $`f_3`$ are arbitrary functions with the following symmetries: $`f_1(\theta )=f_1(\theta )=f_1(2\pi \theta )`$, $`f_3(\theta )=f_3(\theta )=f_3(2\pi \theta )`$, $`f_2(\eta )=f_2(\eta )`$, and $`f_20`$ as $`\eta \mathrm{}`$ so the spacetime asymptotically approaches the Kerr solution. To construct a model for odd parity waves analogous to the even parity case described above, we choose the following free functions:
$`f_1`$ $`=`$ $`0,`$ (10)
$`f_2`$ $`=`$ $`Q_0\left(e^{s_+}+e^s_{}\right),`$ (11)
$`f_3`$ $`=`$ $`\left(1\xi +\xi \mathrm{cos}\theta \right)\mathrm{sin}^n\theta .`$ (12)
Since $`\widehat{H}_E`$ falls off sufficiently rapidly at large radii, any spacetime constructed using this conformal extrinsic curvature with $`f_1=0`$ will have zero angular momentum. The Hamiltonian constraint is then solved for $`\mathrm{\Psi }`$ given the above extrinsic curvature and a conformally flat 3-metric with $`q=q_0=0`$ in (1).
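The pair (8)–(9) satisfies the momentum constraint (7) identically; note that this requires a relative minus sign between the $`f_2`$ term in $`\widehat{H}_E`$ and the $`_\eta f_2`$ term in $`\widehat{H}_F`$. A quick numerical spot-check (the parameter values and function names below are arbitrary illustrative choices, not values used in the runs):

```python
from math import exp, sin, cos

# Illustrative parameter values for the check (arbitrary choices).
Q0, eta0, sigma, xi = 1.0, 2.0, 1.0, 0.5

def f2(eta):   # eq. (11)
    return Q0 * (exp(-((eta - eta0) / sigma) ** 2)
                 + exp(-((eta + eta0) / sigma) ** 2))

def df2(eta):  # analytic d f2 / d eta
    return Q0 * (-2.0 * (eta - eta0) / sigma ** 2 * exp(-((eta - eta0) / sigma) ** 2)
                 - 2.0 * (eta + eta0) / sigma ** 2 * exp(-((eta + eta0) / sigma) ** 2))

def f3(th):    # eq. (12) with n = 2
    return (1.0 - xi + xi * cos(th)) * sin(th) ** 2

def df3(th):   # analytic d f3 / d theta
    return -xi * sin(th) ** 3 + (1.0 - xi + xi * cos(th)) * 2.0 * sin(th) * cos(th)

def H_E(eta, th):
    return f2(eta) * (4.0 * cos(th) * f3(th) + sin(th) * df3(th))

def H_F(eta, th):
    return -df2(eta) * sin(th) ** 2 * f3(th)   # note the relative minus sign

def constraint_residual(eta, th, h=1e-5):
    """Central-difference evaluation of eq. (7); vanishes to truncation error."""
    d_eta = (H_E(eta + h, th) - H_E(eta - h, th)) / (2.0 * h) * sin(th) ** 3
    d_th = (H_F(eta, th + h) * sin(th + h) ** 2
            - H_F(eta, th - h) * sin(th - h) ** 2) / (2.0 * h)
    return d_eta + d_th
```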
## III Results
In this section we present results from several dozen numerical calculations of both even and odd parity axisymmetric distortions of single Schwarzschild black holes using the initial data parameterization described in §II. The results are presented as functions of various key Brill wave parameters, and their effect on radiation recoil is evaluated. In most cases we have used a numerical grid with 300$`\times `$65 (radial$`\times `$angular) zones to cover radial distances out to several hundred $`M_{ADM}`$, where $`M_{ADM}`$ is the ADM mass of the spacetime, and to include the entire polar domain $`0\theta \pi `$. However, we have also confirmed that the results are robust and relatively unchanged at different grid resolutions. Our simulations utilize the maximal slicing condition ($`K=\dot{K}=0`$) and are generally run to 50 – 70$`M_{ADM}`$, which is more than enough time to extract the radiation content. The radiated wave energies and recoil velocities are computed from the energy–momentum flux across a spherical shell of radius 15$`M_{ADM}`$ from the center of the black hole throat.
### A Even Parity
First we consider the effects of varying three independent parameters $`Q_0`$, $`\eta _0`$ and $`\xi `$, corresponding to the amplitude, peak location, and dominant mode of the Brill waves, on the radiation reaction and dynamical evolution of even parity distortions of black holes. The remaining free initial data parameters described in §II have been held fixed: $`\sigma =1`$ for unit width wave profiles, $`q_0=0`$ since we do not consider rotating black holes, and $`n=2`$ to allow maximum grid resolution over the angular variations.
From the equatorially symmetric examples in Refs. , it is known that increasing the amplitude parameter $`Q_0`$ increases the strong field coupling of the Brill wave and black hole, substantially distorts the spacetime from spherical symmetry, and emits a greater fraction of the ADM mass in the form of gravitational radiation. To demonstrate the degree by which a black hole is distorted from sphericity, we first look at the geometric characteristics of the spacetime, namely the apparent horizon since it can easily be found in the spacelike slices. The horizon shape parameters and flat space embeddings are evaluated for the case $`\eta _0=0`$ in which the Brill wave is placed directly on the black hole throat for maximum horizon distortion. We use a Newton–Raphson procedure to solve the nonlinear equation defining the trapped surface conditions (zero expansion of outgoing null normals to the 2–surface). The geometric properties are extracted from the two–dimensional sub–metric induced on the horizon surface
$$dl^2=\mathrm{\Psi }^4\left\{\left[B+A\left(\frac{dh}{d\theta }\right)^2\right]d\theta ^2+Dd\varphi ^2+2Fd\theta d\varphi \right\},$$
(13)
where $`h(\theta )`$ is the radial coordinate defining the horizon. Visual representations of the horizon are achieved by embedding the 2–surface given by (13) in a higher three–dimensional flat space. Introducing a new coordinate $`z`$ on a flat 3–metric, the 2–metric of the horizon surface is identified as
$$dz^2+d\rho ^2+\rho ^2d\varphi ^2=B^{\prime }(\theta )d\theta ^2+D^{\prime }(\theta )d\varphi ^2,$$
(14)
where $`B^{\prime }=\mathrm{\Psi }^4(B+A(dh/d\theta )^2-F^2/D)`$ and $`D^{\prime }=\mathrm{\Psi }^4D`$ are the metric components of the horizon surface transformed to a diagonal form. Solving for the coordinate $`z`$ gives
$$z=\int d\theta \sqrt{B^{\prime }-(\partial _\theta \sqrt{D^{\prime }})^2},$$
(15)
which is integrated numerically to obtain the embedding functions $`z(\theta )`$ and $`\rho (\theta )=\sqrt{D^{\prime }}`$, although an embedding is not in general guaranteed to exist.
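The embedding integral (15) is straightforward to evaluate once $`B^{\prime }(\theta )`$ and $`D^{\prime }(\theta )`$ are tabulated. Below is a minimal sketch (Python with NumPy; the function name and the round-sphere test surface are our illustrative choices, not data from the paper), which clips the radicand at zero where the embedding ceases to exist:

```python
import numpy as np

def embed_horizon(theta, Bp, Dp):
    """Embed the 2-surface B'(theta) dtheta^2 + D'(theta) dphi^2 of Eq. (14)
    into flat 3-space via Eq. (15); returns the profile (rho(theta), z(theta)).
    Where B' - (d sqrt(D')/dtheta)^2 < 0 no embedding exists; we clip to zero."""
    rho = np.sqrt(Dp)
    drho = np.gradient(rho, theta)                 # d(sqrt(D'))/dtheta
    integrand = np.sqrt(np.clip(Bp - drho**2, 0.0, None))
    # cumulative trapezoid rule for z(theta) = int_0^theta sqrt(...) dtheta'
    z = np.concatenate(
        ([0.0], np.cumsum(0.5*(integrand[1:] + integrand[:-1])*np.diff(theta))))
    return rho, z
```

For an undistorted sphere of radius $`R`$ ($`B^{\prime }=R^2`$, $`D^{\prime }=R^2\mathrm{sin}^2\theta `$) this recovers $`\rho =R\mathrm{sin}\theta `$ and $`z=R(1-\mathrm{cos}\theta )`$, a useful sanity check before applying the routine to distorted horizon data.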
In figure 1 we show embeddings of the horizon 2–surface in the initial data (where distortions are greatest with $`\eta _0=0`$) for a highly perturbed $`Q_0=0.9`$ case with different mode parameters $`\xi `$. Embeddings of the more prolate odd mode distortions, i.e. $`\xi \gtrsim 0.7`$, are undefined (and therefore not displayed) for the more negative values of $`z`$ due to the radical in equation (15) which becomes negative. Figure 2 shows the equivalent embeddings for $`\xi =0.5`$ as a function of wave amplitude $`Q_0`$. These embedded distortions eventually damp out in an oscillatory fashion over time as the horizon evolves towards sphericity after the dynamic component has either been absorbed by the black hole, or has propagated to asymptotic infinity in the form of gravitational waves. Together, the embedding diagrams indicate that horizon distortions are roughly spherical for small amplitude perturbations and become generally more prolate as $`Q_0`$ is increased. The shape of distortions also varies with the mode parameter $`\xi `$, which regulates changes from equatorially symmetric even mode behavior, to predominantly asymmetric odd mode configurations for the larger values of $`\xi `$. The ratio of polar to equatorial circumferences of the horizon surfaces, used in our previous work as a measure of distortion, is not especially informative regarding the magnitude of radiation recoil. Indeed, the purely even symmetric cases generally give rise to greater distortions, but no recoil, which is a function of the relative mixture of even and odd modes, as well as the perturbation amplitude. Assuming a simple definition of radial distortion in the embeddings as $`R_r=\text{max}(\sqrt{\rho ^2+z^2})/\text{min}(\sqrt{\rho ^2+z^2})`$, the displayed distortions range from $`R_r=1.2`$ for ($`Q_0=0.1`$, $`\xi =0.9`$), to 7.5 for ($`Q_0=1.2`$, $`\xi =0.1`$).
The transition from even to odd mode behavior observed in the near–field horizon embedding diagrams of figure 1, is also mirrored by the mode distribution in the far–field radiation zone. Figures 3 and 4 show the energy (normalized to the ADM mass of the spacetime) radiated in the most dominant $`\ell =2`$ and 3 modes for $`\eta _0=0`$ as a function of $`\xi `$ and $`Q_0`$. As the mode parameter is increased in figure 3, the energy distribution dominance changes from even to odd, consistent with the horizon embeddings in figure 1. Figure 4 indicates that the total radiated energies asymptotically approach constant maximal values for each of the mode parameters, and that the range of parameters we have investigated are reasonably representative of the most efficient radiators of gravitational energy. We have restricted current studies to wave amplitudes $`Q_0\le 1.2`$, since the numerical results are less reliable for larger amplitudes, especially at late times and in the ability to resolve both the $`\ell =2`$ and 3 modes in the extreme odd or even $`\ell `$-mode dominated evolutions.
The mixing of adjacent multipole modes gives rise to a non-vanishing flux of linear momentum along the $`z`$–axis which can be evaluated from products of consecutive Zerilli wave functions
$$\frac{dP^z}{dt}=\frac{1}{16\pi }\sum _{\ell =2}^{\infty }\sqrt{\frac{(\ell -1)(\ell +3)}{(2\ell +1)(2\ell +3)}}\frac{d\psi _{\ell }}{dt}\frac{d\psi _{\ell +1}}{dt},$$
(16)
where the Zerilli functions $`\psi _{\ell }`$ are normalized such that the total radiated energy in each mode is given by
$$E_{\ell }=\frac{1}{32\pi }\int dt\,(\dot{\psi }_{\ell })^2.$$
(17)
For numerically practical purposes, we compute only the most significant (2,3) and (3,4) contributions. In general we find that the higher order terms in the series (16) play an increasingly greater role as $`\xi `$ is increased and as the distortions become dominantly odd functions. For an interesting large amplitude case ($`Q_0=0.9`$) the momentum ratio $`P_{(2,3)}/P_{(3,4)}`$ varies from roughly 50 to 0.2 for $`\xi =0.1`$ and 0.9 respectively, with even greater ratios for the smaller amplitude cases. However, as we show below, the greatest recoil velocities arise for roughly equal mixtures of even and odd perturbations (i.e., $`\xi =0.5`$), and for these cases the (2,3) contribution exceeds the (3,4) by at least a factor of 50 in all cases we have studied. The results presented for the radiated momentum are derived by summing both the (2,3) and (3,4) contributions. To confirm the degree to which these two dominant pairs are complete, and to independently check our calculations, we have also evaluated the momentum flux from the Landau–Lifshitz pseudotensor
$$\frac{dP^z}{dt}=\frac{r^2}{16\pi }\int \left[(\dot{F})^2+\frac{1}{4}(\dot{B}-\dot{D})^2\right]\mathrm{cos}\theta \,d\mathrm{\Omega },$$
(18)
which is valid for both even ($`F=0`$) and odd ($`B=D=0`$) parity perturbations. We find excellent agreement, typically better than 10%, between the two calculations.
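The bookkeeping of Eqs. (16)–(17), together with the recoil integral (19) below, can be sketched compactly. The snippet below (Python with NumPy; the helper names are ours, and the test waveforms in the usage are synthetic damped sinusoids, not simulation output) sums the consecutive-pair contributions for whatever modes are supplied:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule (written out to avoid NumPy-version differences)."""
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

def pair_coeff(l):
    # sqrt[(l-1)(l+3) / ((2l+1)(2l+3))], the weight of the (l, l+1) pair in Eq. (16)
    return np.sqrt((l - 1)*(l + 3)/((2*l + 1)*(2*l + 3)))

def recoil_from_modes(t, psidot, M_adm=1.0):
    """psidot: dict mapping consecutive l -> dpsi_l/dt sampled on t.
    Returns (E_l per mode via Eq. (17), dP^z/dt via Eq. (16), v_r^z via Eq. (19))."""
    ells = sorted(psidot)
    E = {l: _trapz(psidot[l]**2, t)/(32*np.pi) for l in ells}
    dPdt = np.zeros_like(t)
    for l in ells[:-1]:
        dPdt += pair_coeff(l)*psidot[l]*psidot[l + 1]/(16*np.pi)
    v_z = -_trapz(dPdt, t)/M_adm   # recoil opposes the radiated momentum, Eq. (19)
    return E, dPdt, v_z
```

With only a single $`\ell `$ present the pair sum vanishes identically, mirroring the statement that recoil requires a mixture of adjacent multipoles.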
As a result of the momentum emission from the interplay between even and odd $`\ell `$–modes, the final black hole will acquire a recoil velocity
$$v_r^z=-\frac{1}{M_{ADM}}\int \left(\frac{dP^z}{dt}\right)dt,$$
(19)
that is opposite in direction to the momentum flux of the waves. We show these velocities in figures 5 and 6 in physical units of kilometers per second, and as a function of the parameters $`\xi `$, $`\eta _0`$ and $`Q_0`$. Figure 5 confirms that maximum recoil occurs at $`\xi =0.5`$. The asymptotic flatness of the curves at large values of $`Q_0`$ in figure 6 suggests maximum velocities of about 150, 200, and 500 km/sec for the most strongly distorted cases that we are able to compute numerically for $`\eta _0`$ = 0, 0.25 and 0.5 respectively. However, we are unable to reliably evolve greater amplitudes for $`\eta _0`$ = 0.25 or 0.5 and establish a precise turnover velocity since our code breaks at these large amplitudes, though the curves already show signs of flattening by $`Q_0=1.2`$. Hence the quoted values in these two cases are approximate extrapolations. Also notice that greater recoil velocities result when the Brill waves are placed at larger radii, beyond the perturbation potential barrier (at $`r>3M`$, or equivalently $`\eta >1.8`$). In these cases, the ingoing waves excite the ringing modes more strongly as they cross the barrier and emit a greater flux of energy–momentum. On the other hand, it is also likely that a large fraction of the radiated flux in the large $`\eta _0`$ cases can be attributed to the outgoing wave component and general distortions of the global spacetime , and not to pure black hole ringing from localized collapse or impact scenarios.
### B Odd Parity
Following the general presentation of §III A, we present in this section recoil velocities from odd parity distortions of black holes as a function of mode parameter ($`\xi `$), initial peak location ($`\eta _0`$), and wave amplitude ($`Q_0`$) in the initial data of §II.
The momentum arising from consecutive $`\ell `$–mode interactions of odd parity waves takes a form analogous to (16), except the wave functions $`\psi _{\ell }^{odd}`$ (replacing the even parity $`\dot{\psi }_{\ell }`$) are extracted from the $`\gamma _{\theta \varphi }`$ metric component by
$$\psi _{\ell }^{odd}=\sqrt{\frac{2(\ell -2)!}{(\ell +2)!}}\,\partial _\eta \int F\left(\partial _\theta ^2-\mathrm{cot}\theta \,\partial _\theta \right)Y_{\ell 0}\,d\mathrm{\Omega }.$$
(20)
In this form, the wave functions are related to the Regge-Wheeler perturbation variable $`h_2`$
$$\psi _{\ell }^{odd}=\sqrt{\frac{(\ell +2)!}{2(\ell -2)!}}\,r\,\partial _r\left(\frac{h_2}{r^2}\right),$$
(21)
with normalization
$$E_{\ell }=\frac{1}{32\pi }\int dt\,(\psi _{\ell }^{odd})^2.$$
(22)
An important difference in the evolutions of odd (versus even) parity data is that it is necessary to keep track of the momentum contributions from a greater number of $`\ell `$–mode pairs since there is no clear dominance by the lowest order terms. The individual contributions can jump from positive to negative values of momentum with roughly the same amplitude over consecutive pairs. For example, the better behaved $`\eta _0=0`$ small amplitude cases require at least three pairs in the series to converge at the 10% level when compared with subsequent additions and to the Landau–Lifshitz formula (18). Also, the odd parity initial value problem and the dynamical evolutions over time can both generate significant even parity signals, contributing up to a few percent of the net recoil velocity in the cases we have investigated. Hence all results in this section are derived from the Landau–Lifshitz pseudotensor, thus accounting for all the modes in evaluating the velocities (though we have compared and confirmed the consistency of both methods for small and large amplitude runs). We plot in Figures 7 and 8 the velocities as a function of $`\xi `$ and $`Q_0`$ respectively. Figure 7 indicates that maximum recoil is achieved for $`\xi \approx 0.7`$. Figure 8 shows the maximum velocity for $`\xi =0.7`$ as $`Q_0`$ is varied over a numerically robust range of amplitudes for three different initial wave positions $`\eta _0`$ = 0, 0.5 and 1. The evolutions generate maximum velocities of 23, 52 and 430 km/sec, with increasingly greater velocities for Brill waves initially concentrated at greater distances from the black holes.
In comparing these results with the even parity cases (say $`\eta _0`$ = 0), one should not necessarily conclude that odd parity radiation is less effective in producing radiation recoil for any intrinsic reason. Much has to do with the manner in which the data was constructed. For example, the even parity data distorts the spatial metric, while the odd parity data uses a conformally flat metric. One could, in principle, produce even parity distortions of the spacetime through the extrinsic curvature while maintaining a flat metric. Likewise, one could add odd parity data to the metric itself rather than the extrinsic curvature. This would make the procedures more similar (and possibly the radiation energies as well). Furthermore, there are many ways to construct initial data for both types of radiation and it is not feasible to study them all. Rather, our results represent the maximal effects of a certain class of black hole distortions.
## IV Conclusion
We have carried out a systematic study of single black holes distorted by strong–field axisymmetric Brill waves in an effort to quantify the astrophysical significance of the “rocket” effect imparted to the final black hole from the momentum carried by gravitational radiation in the system. This work complements our previous studies of the head–on collision of two unequal mass black holes , where we found recoil velocities up to 10–20 km/sec. However, it is likely that coalescing binary black holes with arbitrary physical parameters (i.e., impact parameters, masses, and spins) may generate greater recoil velocities, so we have focused the current studies on deducing the maximum recoil expected from highly asymmetrical configurations. The Brill wave + black hole systems we have studied allow a parameterization of the wave strengths, widths, locations, and shapes of the perturbing sources such that we are able to systematically explore the role of various parameters in fully nonlinear numerical calculations of strongly distorted black hole spacetimes. With this approach, we are able to generate greater distortions and wider spectral energy distributions of black holes than observed in our simulations of colliding binary systems. We thus also consider our current results as reasonable maximum estimates of radiation recoil in single or late–stage binary black hole systems (although a more precise comparison between single and binary evolutions must also account for any residual radiation content in the initial data of the respective systems).
For the most highly distorted spacetimes, we find maximum recoil velocities in excess of 400 km/sec for both even and odd parity data with Brill waves initially centered at large distances from the black hole throat, e.g., $`\eta _0=0.5`$ (1.0) for even (odd) parity perturbations. Our results exhibit a strong dependence on the initial placement of the Brill waves, as well as their amplitude and spectral composition. Of all these effects, we are less certain of the role which the initial wave placement $`\eta _0`$ plays in generating a true maximum value, since for the numerically difficult combination of large separations and amplitudes, our code eventually breaks down. However, we expect for radiation clumps located further from the black hole, that a substantial fraction of the (outgoing component of the) Brill waves escapes to infinity since the perturbations are applied essentially to the spacetime surrounding the black hole, and not directly on the throat. Hence we expect that the bulk of emitted energy–momentum flux can be attributed to the initial wave configuration for large $`\eta _0`$, as opposed to any intrinsic ringing of the black hole associated with localized source dynamics such as ingoing wave collisions, collapsing stellar cores, or coalescing binaries. On the other hand, an ingoing wave located outside the potential barrier can scatter off and impart a much greater momentum to the hole. We were, however, unable to distinguish secondary wave pulses in our numerical data corresponding to reflected waves.
In addition to reducing the initial outgoing Brill wave content, it is also likely that the $`\eta _0=0`$ cases represent more appropriate late stage recoil models for black hole binary systems. In these cases, our results of 150 and 23 km/sec for even and odd parity perturbations are in general agreement with the bound $`v_r\lesssim 300`$ km/sec derived by Bekenstein in his quasi–Newtonian considerations of the interaction between quadrupole and octupole terms in non-spherical stellar core collapse to black holes. Furthermore, the odd parity recoil in our calculations is remarkably similar to the 25 km/sec found by Moncrief for non-spherical models of black hole formation. Our even parity results are approximately a factor of two larger than the quasi–Newtonian calculations ($`v_r\approx 67`$ km/sec) of binary systems in Keplerian orbits by Fitchett . However, considering the ambiguity in choosing the final prior-to-plunging orbit and in extrapolating the perturbation calculations to the equal mass limit, our results are in fairly good agreement with the predictions of Fitchett and Detweiler who extended Fitchett’s earlier work to perturbation theory and computed a maximum velocity of about 120 km/sec for the merging of two black holes from the last stable circular orbit. We are also in agreement with Nakamura, Oohara and Kojima who estimate a maximum velocity of about 240 km/sec from numerical perturbative calculations of test bodies plunging into black holes from infinity with arbitrary orbital angular momentum.
In comparison, the escape velocity from galactic structures can vary from about several hundred km/sec for spiral galaxies such as the Milky Way, to about a thousand km/sec for the more massive giant ellipticals such as M87. Our results, however, suggest that black holes which may be located in the centers of galaxies and which undergo highly asymmetric evolutions (including strong field distortions and binary mergers) are relatively stable entities and will not likely escape from the host galaxy, assuming that the “on the throat” numerical calculations are reasonably representative models. Although we have established that the recoil effect is not generally large enough to be considered astrophysically significant, this does not, however, rule out the possibility of black hole ejections from galactic disks far from the core and in the direction of galactic rotation, nor the possibility of black hole ejections from globular cluster systems in galactic halos. Black holes can more readily escape from these systems to wander through the galaxy or even intergalactic space.
###### Acknowledgements.
The numerical simulations were performed on the Origin2000 machines at the NCSA and the Albert Einstein Institute. This work was supported by NSF grant PHY 98-00973, and performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.
# Charged current weak production of the Δ resonance
## 1 INTRODUCTION
The nucleon excitation spectrum is a valuable source of information about baryon structure. The $`N\mathrm{\Delta }`$ transition presents clear advantages from the experimental point of view since the $`\mathrm{\Delta }`$ is separated from the rest of the resonances. The bulk of the existing information on the weak $`N\mathrm{\Delta }`$ transition form factors (FF) comes from the analysis of the ANL and BNL experiments, performed with $`\nu _\mu `$ beams, whose energies span from 0.5 to 6.0 GeV with poorly known distributions. Nowadays, with the advent of the new generation of electron accelerators reaching the GeV region and achieving high luminosities, it is possible to perform electron scattering experiments in the resonance region. We have considered the possibility of extending these studies to weak charged current physics. For this reason, we have studied the reactions $`e^{-}p\to \mathrm{\Delta }^0\nu _e`$ and $`e^{+}p\to \mathrm{\Delta }^{++}\overline{\nu }_e`$ at the typical energies of MAMI and TJNAF, and using the available information about the FF .
Since the vector $`N\mathrm{\Delta }`$ FF are related to the isovector electromagnetic ones, which can be obtained from electroproduction data, these experiments would allow one to study the axial FF and, in particular, the dominant $`C_5^A`$. The determination of its value at $`q^2=0`$ is important in view of the discrepancies between the PCAC prediction and theoretical estimates obtained in most quark models . We have used the low $`q^2`$ BNL data on the ratio of $`\mu ^{-}\mathrm{\Delta }^{++}`$ and $`\mu ^{-}p`$ events from $`\nu _\mu d`$ collisions to extract the value of the axial vector coupling $`C_5^A(0)`$, taking into account the deuteron structure and the $`\mathrm{\Delta }`$ width .
The study of weak $`N\mathrm{\Delta }`$ transitions in nuclei is relevant for the analysis of atmospheric neutrino experiments. In fact, the energy distribution of the part of the atmospheric $`\nu `$ flux producing fully contained events at Kamiokande is such that $`<E_\nu >\approx 700`$ MeV, well above the $`\mathrm{\Delta }`$ production threshold. These $`\mathrm{\Delta }`$’s decay into pions and photons (through $`\pi ^0`$ decay), that are a source of background. For this reason, we have studied the impact of nuclear effects in $`\nu _{e(\mu )}`$ production of $`\mathrm{\Delta }`$ in $`{}^{16}\mathrm{O}`$ .
## 2 WEAK ELECTROPRODUCTION CROSS SECTION
The matrix element for the process $`e^{-}(k)+p(p)\to \mathrm{\Delta }^0(p^{\prime })+\nu _e(k^{\prime })`$ is proportional to the product of the leptonic and hadronic currents. The hadronic current is expressed in terms of vector and axial vector FF $`C_i^V`$ and $`C_i^A`$ ($`i=3,4,5,6`$) . The imposition of the CVC hypothesis $`q_\mu J_V^\mu =0`$ implies $`C_6^V=0`$. The other three vector FF are obtained from the isovector electromagnetic ones. Assuming $`M1`$ dominance, one gets $`C_5^V=0`$ and $`C_4^V=-\left(M/M_\mathrm{\Delta }\right)C_3^V`$. $`C_3^V`$ is determined from electroproduction experiments and from a quark model
$`C_3^V(q^2)`$ $`=`$ $`2.05\,(1-q^2/0.54\mathrm{GeV}^2)^{-2},`$ (1)
$`C_3^V(q^2)`$ $`=`$ $`M/(\sqrt{3}m)\,e^{-\overline{q}^2/6},`$ (2)
where $`m=330`$ MeV is the quark mass and $`\overline{q}=|𝐪|/\alpha _{HO}`$, with $`\alpha _{HO}=320`$ MeV. Concerning the axial FF, $`C_6^A`$ can be related to $`C_5^A`$ using pion pole dominance and PCAC; then $`C_6^A(q^2)=C_5^A(q^2)M^2/\left(m_\pi ^2-q^2\right)`$. The value of $`C_5^A(0)`$ can be taken from the off-diagonal Goldberger-Treiman relation , $`C_5^A(0)=g_{\mathrm{\Delta }N\pi }f_\pi /(\sqrt{6}M)=1.15`$, where $`f_\pi =92.4`$ MeV, $`g_{\mathrm{\Delta }N\pi }=28.6`$; $`C_3^A(q^2)`$, $`C_4^A(q^2)`$ and $`C_5^A(q^2)/C_5^A(0)`$ are given by the Adler model
$$C_{i=3,4,5}^A(q^2)=C_i(0)\left[1-\frac{a_iq^2}{b_i-q^2}\right]\left(1-\frac{q^2}{M_A^2}\right)^{-2}.$$
(3)
with $`C_3^A(0)=0`$, $`C_4^A(0)=-0.3`$, $`a_4=a_5=1.21`$, $`b_4=b_5=2`$ GeV<sup>2</sup> and $`M_A=1.28`$ GeV. The value of $`M_A`$ comes from a best fit to the $`\mu ^{-}\mathrm{\Delta }^{++}`$ events at BNL . For a comparison, we also use a non-relativistic quark model calculation
$$C_5^A(q^2)=\left(\frac{2}{\sqrt{3}}+\frac{1}{3\sqrt{3}}\frac{q_0}{m}\right)e^{-\overline{q}^2/6},C_4^A(q^2)=-\frac{1}{3\sqrt{3}}\frac{M^2}{M_\mathrm{\Delta }m}e^{-\overline{q}^2/6},C_3^A(q^2)=0.$$
(4)
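The Adler parametrization (3) is easy to tabulate for spacelike $`q^2<0`$. A minimal sketch (Python; the numerical parameter values are the ones quoted above, and the helper name is ours):

```python
def adler_CA(q2, C0, a, b, MA=1.28):
    """Axial form factor of Eq. (3):
    C_i^A(q2) = C_i(0) [1 - a_i q2/(b_i - q2)] (1 - q2/MA^2)^(-2),
    with q2 in GeV^2 (spacelike kinematics, q2 <= 0)."""
    return C0*(1.0 - a*q2/(b - q2))*(1.0 - q2/MA**2)**(-2)

# C_5^A with C_5(0) = 1.15, a_5 = 1.21, b_5 = 2 GeV^2
c5_at_0 = adler_CA(0.0, 1.15, 1.21, 2.0)
```

At $`q^2=0`$ the dipole and correction factors are unity, so the parametrization reduces to the Goldberger–Treiman value $`C_5^A(0)=1.15`$.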
From the amplitude given above, the differential cross section $`d\sigma /d\mathrm{\Omega }_\mathrm{\Delta }`$ can be obtained in the standard way. The $`\mathrm{\Delta }`$ width has been accounted for by means of the substitution
$$\delta (p^2-M_\mathrm{\Delta }^2)\to -\frac{1}{\pi }\frac{1}{2M_\mathrm{\Delta }}\mathrm{Im}\left[\frac{1}{W-M_\mathrm{\Delta }+\frac{1}{2}i\mathrm{\Gamma }_\mathrm{\Delta }}\right],\mathrm{\Gamma }_\mathrm{\Delta }=\mathrm{\Gamma }_0\frac{M_\mathrm{\Delta }}{W}\frac{q_{c.m.}^3(W)}{q_{c.m.}^3(M_\mathrm{\Delta })},W=\sqrt{p^2}$$
(5)
with $`q_{c.m.}`$ being the pion momentum in the $`\mathrm{\Delta }`$ rest frame and $`\mathrm{\Gamma }_0=120`$ MeV. The angular distribution is shown in Fig. 1
for two different sets of FF: I, phenomenological \[Eqs. (1), (3)\], solid line; II, quark model, \[Eqs. (2), (4)\], dashed line. The invariant mass has been restricted to $`W<1.4`$ GeV to select $`\mathrm{\Delta }`$ events. The differential cross section is found to be high enough in a large angular region to consider the possibility of measuring them.
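The finite-width prescription (5) amounts to smearing the $`\mathrm{\Delta }`$ pole with a $`p`$-wave Breit–Wigner. A sketch of the energy-dependent width and the resulting spectral weight (Python; written in $`W`$ rather than $`p^2`$, and the masses are standard values we insert for illustration, not tuned parameters of the paper):

```python
import math

M, MPI, MDELTA, GAMMA0 = 0.938, 0.138, 1.232, 0.120   # GeV

def q_cm(W):
    """Pion momentum in the rest frame of a pi-N system of invariant mass W."""
    s = W*W
    return math.sqrt((s - (M + MPI)**2)*(s - (M - MPI)**2))/(2.0*W)

def gamma_delta(W):
    """Energy-dependent width of Eq. (5): Gamma_0 (M_Delta/W) (q_cm(W)/q_cm(M_Delta))^3."""
    return GAMMA0*(MDELTA/W)*(q_cm(W)/q_cm(MDELTA))**3

def spectral_weight(W):
    """-(1/pi) Im[1/(W - M_Delta + i Gamma/2)]: the Lorentzian that replaces
    the on-shell delta function once the width is switched on."""
    g = gamma_delta(W)
    return (g/2.0)/math.pi/((W - MDELTA)**2 + g*g/4.0)
```

The weight peaks at the resonance position and falls off on either side, which is what produces the roughly 30% reduction of the cross section discussed in the next section.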
## 3 DETERMINATION OF THE AXIAL VECTOR COUPLING
In order to obtain $`C_5^A(0)`$ we have evaluated the ratio
$$R(Q^2)=\frac{\left(d\sigma /dq^2\right)\left(\nu d\to \mu ^{-}\mathrm{\Delta }^{++}n\right)}{\left(d\sigma /dq^2\right)\left(\nu d\to \mu ^{-}pp\right)},Q^2=-q^2$$
(6)
at $`E_\nu =1.6`$ GeV, which is the mean energy of the BNL $`\nu _\mu `$ spectrum; the $`\mathrm{\Delta }`$ production cross section has been calculated in the impulse approximation, and using the deuteron wave function of the Paris potential. The quasielastic cross section, in the same approximation, is taken from Ref. . We found that, in the data region, i.e. at $`Q^2\lesssim 0.1`$ GeV<sup>2</sup>, deuteron effects are negligible and, hence, one can treat the BNL data as if they were data on the ratio of the free reactions
$$R(Q^2)\simeq R_0(Q^2)=\frac{\left(d\sigma /dq^2\right)\left(\nu p\to \mu ^{-}\mathrm{\Delta }^{++}\right)}{\left(d\sigma /dq^2\right)\left(\nu n\to \mu ^{-}p\right)}.$$
(7)
At $`Q^2=0`$, $`R_0(Q^2)`$ is given by the quotient of
$$\frac{d\sigma }{dq^2}=\left(C_5^A\right)^2\frac{1}{24\pi ^2}G^2\mathrm{cos}^2\theta _c\frac{\sqrt{s}(M+M_\mathrm{\Delta })^2(s-M_\mathrm{\Delta }^2)^2}{(s-M^2)M_\mathrm{\Delta }^3}\int dk^0\frac{\mathrm{\Gamma }_\mathrm{\Delta }(W)}{(W-M_\mathrm{\Delta })^2+\mathrm{\Gamma }_\mathrm{\Delta }^2(W)/4}$$
(8)
and the well known expression for the forward quasielastic cross section. Equating this ratio to the experimental value $`0.55\pm 0.05`$ , we obtain $`C_5^A=1.22\pm 0.06`$; this result is consistent with the value given by the off-diagonal Goldberger-Treiman relation. The proper inclusion of the $`\mathrm{\Delta }`$ width causes a 30 % reduction of the cross section and cannot be neglected in the extraction of $`C_5^A(0)`$.
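Since the forward cross section (8) scales as $`(C_5^A)^2`$ while the quasielastic denominator does not depend on $`C_5^A`$, the ratio obeys $`R\propto (C_5^A)^2`$ at $`Q^2=0`$, so the relative errors are related by $`\delta C_5^A/C_5^A=\delta R/(2R)`$. A quick consistency check of the quoted uncertainty (Python; simple error propagation, not the full fit):

```python
def c5a_uncertainty(C5A, R, dR):
    """Propagate the error on R = k (C_5^A)^2 to C_5^A = sqrt(R/k):
    dC/C = dR/(2R), independent of the proportionality constant k."""
    return C5A*dR/(2.0*R)

dC = c5a_uncertainty(1.22, 0.55, 0.05)   # central values quoted in the text
```

This reproduces the quoted $`\pm 0.06`$ to the stated precision.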
## 4 NEUTRINO PRODUCTION OF $`\mathrm{\Delta }`$ IN $`{}^{16}\mathrm{O}`$
When the reactions $`\nu _lp(n)\to l^{-}\mathrm{\Delta }^{++}(\mathrm{\Delta }^{+})`$ and $`\overline{\nu }_lp(n)\to l^{+}\mathrm{\Delta }^0(\mathrm{\Delta }^{-})`$ take place in the nucleus, the nucleon momentum is constrained within a density dependent Fermi sea. The produced $`\mathrm{\Delta }`$ does not have this constraint, but its decay is inhibited by the Pauli blocking of the final nucleon. On the other hand, there are other disappearance channels open through particle-hole excitations. The situation is well described if one replaces in the $`\mathrm{\Delta }`$ propagator $`\mathrm{\Gamma }_\mathrm{\Delta }\to \widehat{\mathrm{\Gamma }}_\mathrm{\Delta }-2\mathrm{Im}\mathrm{\Sigma }_\mathrm{\Delta }`$ and $`M_\mathrm{\Delta }\to M_\mathrm{\Delta }+\mathrm{Re}\mathrm{\Sigma }_\mathrm{\Delta }`$, where $`\widehat{\mathrm{\Gamma }}_\mathrm{\Delta }`$ is the Pauli blocked decay width and $`\mathrm{\Sigma }_\mathrm{\Delta }`$ is the $`\mathrm{\Delta }`$ selfenergy in the nuclear medium . The pions produced inside the nucleus are rescattered and absorbed in their propagation through the nucleus. The absorption coefficient required to estimate the produced pion flux has been calculated in the eikonal approximation, taking the pion energy dependent mean free path from Ref. . For the $`N\mathrm{\Delta }`$ transition FF, the phenomenological set I described above has been taken; possible medium modification of the FF has not been considered.
In Fig. 2 a) $`d\sigma /dE_{k^{\prime }}`$ ($`k^{\prime }`$ being the momentum of the outgoing electron) is shown for $`E_\nu =750`$ MeV. The medium modification effects cause an overall reduction of about 40 %. Therefore, the Kamiokande analysis, which makes use of free $`\mathrm{\Delta }`$ production cross sections, overestimates one pion production. However, as can be seen in Fig. 2 b), the ratio of total pion production cross sections induced by electron and muon type neutrinos and antineutrinos $`R(E_\nu )=\sigma _\mathrm{\Delta }(\mu )/\sigma _\mathrm{\Delta }(e)`$ is not affected by these modifications.
## 5 ACKNOWLEDGEMENTS
L.A.R. acknowledges financial support from the Generalitat Valenciana and S.K.S., from the Spanish Ministerio de Educación y Cultura. This work has been partially supported by DGYCIT contract PB 96-0753.
# Evolution of the Galactic Potential and Halo Streamers with Future Astrometric Satellites
## 1 Introduction
Tidal streams in the Galactic halo are a natural prediction of hierarchical galaxy formation, where the Galaxy builds up its mass by accreting smaller infalling galaxies. They are often traced by luminous horizontal and giant branch (HB and GB) stars outside the tidal radius of the satellite, by which we mean either a dwarf galaxy or a globular cluster in the Galactic potential. These extra-tidal stars have been seen for the Sagittarius dwarf galaxy (Ibata, Gilmore, & Irwin 1994) and for globular clusters (cf. Grillmair et al. 1998, Irwin & Hatzidimitriou 1995) as a result of tidal stripping, shocking or evaporation. That extra-tidal material (stars or gas clouds) traces the orbit of the satellite or globular cluster has long been known to be a powerful probe of the potential of the Galaxy in the halo. This technique has been exploited extensively, particularly in the case of the Magellanic Clouds and Magellanic Stream (Murai & Fujimoto 1980, Putman et al. 1999) and the Sagittarius dwarf galaxy (Ibata, Gilmore & Irwin 1995, Ibata, Wyse, Gilmore & Suntzeff 1997, Zhao 1998 and references therein).
Helmi, Zhao & de Zeeuw show that streams can be identified as peaks in the distribution in angular momentum space, measurable with GAIA. Once identified, we can fit a stream with an orbit or, more accurately, a simulated stream in a given potential. Johnston, Zhao, Spergel & Hernquist (1999) show that a few percent precision in the rotation curve, flattening and triaxiality of the halo is reachable by mapping out the proper motions (with SIM accuracy) and radial velocities along a tidal stream $`\sim 20`$ kpc from the Sun. In particular, they show that the fairly large error in distance measurements to outer halo stars presents no serious problem since one can predict distances theoretically using the known narrow distribution of the angular momentum or energy along the tails associated with a particular Galactic satellite. We expect these results should largely hold for streams detectable by GAIA. These numerical simulations are very encouraging since they show that it is plausible to learn a great deal about the Galactic potential with even a small sample of stream stars from GAIA. Some unaddressed issues include whether stream members will still be identifiable in angular momentum in potentials without axial symmetry, and the robustness of both methods if the Galactic potential evolves in time.
Here we illustrate the effects of including a realistic evolution history of the Galaxy’s potential. The simulated streams are then “observed” with GAIA accuracy. We discuss whether bright stars in a stream might still be identified using the 6D information from GAIA.
## 2 Streams in realistic time-varying Galactic potentials
### 2.1 Evolution of the Galactic potential
To simulate the effect of the evolution and flattening of the potential on a stream, we will consider a satellite orbiting in the following simple, flattened, singular isothermal potential $`\mathrm{\Phi }(r,\theta ,t)`$
$$\mathrm{\Phi }_G(r,\theta ,t)=V_0^2\left[A_s\mathrm{log}r+\frac{ϵ}{2}\mathrm{cos}2\theta \right],$$
(1)
where
$$A_s(t)=1-ϵ_0+ϵ(t),ϵ(t)=ϵ_0\mathrm{cos}\frac{2\pi t}{T_G}.$$
(2)
This model simulates the effect that the Galaxy becomes more massive and flattened in potential as it grows a disc. It is time-dependent but maintains a rigorously flat rotation curve at all radii, where $`(r,\theta )`$ are the spherical coordinates describing the radius and the angle from the North Galactic Pole, and $`t`$ is defined such that $`t=0`$ is the present epoch. The time-evolution is such that the Galactic potential grows from prolate at time $`t=-T_G/2`$ to spherical at time $`t=-T_G/4`$, and then to oblate at $`t=0`$; a more general prescription of the temporal variation might include a full set of Fourier terms.
We adopt parameters
$$V_0=200\text{ km\hspace{0.17em}s}^{-1},\qquad ϵ_0=0.1,$$
(3)
for the present-day circular velocity and the flattening of the equipotential contours of the Galaxy, respectively. A small $`ϵ_0`$ guarantees a positive-definite volume density for the model everywhere at all times. We set $`T_G/4=4`$ Gyr, a reasonable time scale for the growth of the Galactic disc.
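As a sanity check on the model above, the following Python sketch (function and variable names are ours, and we read Eq. (2) as $`A_s(t)=1-ϵ_0+ϵ(t)`$) evaluates the flattening history and confirms the prolate/spherical/oblate sequence described above:

```python
import math

# Parameter values from Eqs. (1)-(3); names are our own choices.
V0 = 200.0   # present-day circular velocity, km/s
EPS0 = 0.1   # present-day flattening of the potential contours
TG = 16.0    # Gyr, so that T_G/4 = 4 Gyr (growth time of the disc)

def eps(t):
    """Flattening epsilon(t) = eps0 * cos(2 pi t / T_G); t = 0 is today."""
    return EPS0 * math.cos(2.0 * math.pi * t / TG)

def phi_G(r, theta, t):
    """Evolving flattened singular isothermal potential, Eqs. (1)-(2)."""
    A_s = 1.0 - EPS0 + eps(t)
    return V0**2 * (A_s * math.log(r) + 0.5 * eps(t) * math.cos(2.0 * theta))

# The prolate -> spherical -> oblate sequence quoted in the text:
print(eps(-TG / 2))   # -eps0 (prolate)
print(eps(-TG / 4))   # ~0    (spherical)
print(eps(0.0))       # +eps0 (oblate)
```

Note that the circular speed, $`v_c^2=r\,\partial \mathrm{\Phi }_G/\partial r=V_0^2A_s(t)`$, is indeed independent of radius at every epoch, as the text claims.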
We have also considered Galactic potentials with a flipping disc and with a massive perturber (Zhao et al. 1999). The results are qualitatively the same. In the following we will illustrate our points with the potential with a growing disc $`\mathrm{\Phi }_G`$.
### 2.2 Evolution of the satellite
Following Helmi & White (1999), we assume that the particles in the disrupted satellite are initially distributed with isotropic Gaussian density and velocity profiles, with dispersions of $`0.4`$ kpc and $`4\text{ km\hspace{0.17em}s}^{-1}`$ respectively. These particles are released instantaneously at the pericenter, $`8`$ kpc from the center, $`4`$ Gyr ago. These parameters might be most relevant for satellites such as the progenitor of the Sagittarius stream. We simulate mock observations of 100 bright horizontal branch stars, convolved with GAIA accuracy.
Johnston (1998) showed that a satellite on a rosette orbit in the outer halo ($`\gtrsim 20`$ kpc) leaves behind a nice thin spaghetti-like tail. Helmi & White (1999) put their satellites on plunging orbits which come within the solar circle, and found that they become colder in velocity space, but very mixed in coordinate space, after evolving for a Hubble time. While the cold, linear structures seen in Johnston’s simulations are ideal for studying the potential, stars at such large distances are likely too faint for GAIA, except for those near the tip of the giant branch.
Our choice of the pericenter of the satellite lies between those of previous authors. We concentrate on satellites which fall in and are disrupted recently (about 4 Gyr ago, well after the violent relaxation phase) and maintain a cold, spaghetti-like structure. We put the satellites on relatively tight orbits which nevertheless lie outside the solar circle (pericenter of about 8 kpc and apocenter of about 40 kpc), such that the bright member stars in the stream are still within reach of GAIA. Such streams typically go around the Galaxy fewer than 5 times after disruption, and are typically far from fully phase-mixed.
Fig. 1 shows the orbit and morphology of the simulated stream in the potential described in §2.1. The orbit of the disrupted satellite is chosen so that the released stream stays in the polar $`xz`$ plane, which passes through the location of the Sun and the Galactic center; the $`xyz`$ coordinate system is defined such that the Sun is at $`x=8`$ kpc and $`y=z=0`$.
Fig. 2 shows the simulated streams in energy and angular momentum space. By and large, the energy $`E`$ of particles across each stream is spread out over only a narrow range at each epoch in the three models; the same holds, to a lesser extent, for the angular momentum vector $`𝐉`$. This implies that stars in the stream remain largely coeval even in the presence of realistic, moderate evolution of the Galactic potential. The energy and angular momentum are also modulated with particle position in a sinusoidal way across the stream, an effect which can in principle be used to infer the evolution rate and the flattening of the Galactic potential.
### 2.3 Some analytical arguments
To understand the above results analytically, let’s follow the energy evolution of two particles (1 and 2) in an extremely simplified evolution history of the Galactic potential
$$\mathrm{\Phi }(r,\theta ,t)=V_0^2\mathrm{log}r+g|z|\frac{t+t_G}{t_G},$$
(4)
where the potential varies linearly, on a time scale $`t_G`$, from spherical at time $`t=-t_G`$ to a present-day ($`t=0`$) flattened potential with a uniform razor-thin disk in the $`z=0`$ plane; $`g`$ is the surface gravity of the disk. Assume the two particles are released from the satellite with a slight initial energy difference $`\mathrm{\Delta }_i`$, which causes them to drift apart in orbital phase. Given enough time, the initial phase-bunching of the stream particles is dispersed away, as a stream can develop tails that wrap around the sky. More generally, we let $`t_{\mathrm{ph}}(\mathrm{\Delta }_i)`$ be the timescale for the two particles to drift sufficiently far apart to be out of phase with each other. The two particles’ energy difference $`(E_2-E_1)`$ at the present epoch $`t=0`$ is then readily computed from
$$(E_2-E_1)=\mathrm{\Delta }_i+\int _{-t_G}^0𝑑t\frac{\partial }{\partial t}(\mathrm{\Phi }_2-\mathrm{\Phi }_1)=\mathrm{\Delta }_i+\left(\frac{t_{\mathrm{ph}}}{t_G}\right)\xi gz_{\mathrm{max}},$$
(5)
where $`z_{\mathrm{max}}`$ is a typical scale height for the orbit, and the dimensionless factor
$$\xi \equiv \frac{1}{t_{\mathrm{ph}}}\int _{-t_G}^0𝑑t\left(\eta _1-\eta _2\right),\qquad 0\le \eta \equiv \frac{|z|}{z_{\mathrm{max}}}\le 1.$$
(6)
Interestingly, the factor $`\xi `$ is of order unity, if not smaller. This is because $`\eta _1-\eta _2`$ takes any value between $`-1`$ and $`1`$ with equal chance once the two particles are completely out of phase. Hence, we are integrating over an oscillatory function of $`t`$ in the range $`-t_G+t_{\mathrm{ph}}\le t\le 0`$, which bounds $`|\xi |`$ from above.
The above equations suggest that: (i) The spread of energy among particles is proportional to the change of the potential, here the surface gravity $`g`$ of the disk: particles stay on the same orbit in a static potential. (ii) If the change of the potential is abrupt, e.g., like a step function, then the energy spread $`(E_2-E_1)\approx \mathrm{\Delta }_i+g\left[|z_2|-|z_1|\right]`$ is roughly proportional to the separation of the particles at the time of the change. Adjacent particles with $`|z_1-z_2|\ll z_{\mathrm{max}}`$ should also be adjacent in energy space. This kind of sudden change of the potential might be relevant for galaxy formation models with minor infalls on the orbital time scale. (iii) If the potential grows slowly, e.g., adiabatically with $`t_G\to \infty `$, then we expect only a very small spread in energy space, no greater than $`|\mathrm{\Delta }_i|+\left(\frac{t_{\mathrm{ph}}}{t_G}\right)\left(gz_{\mathrm{max}}\right)`$. This is consistent with the argument that energy differences between initially similar orbits remain small because of the adiabatic invariance of the actions of the orbits (as explicitly shown in the spherical case by Lynden-Bell & Lynden-Bell 1995).
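To make the order-unity bound on $`\xi `$ concrete, here is a small numerical experiment (entirely illustrative: the frequencies, times, and the $`|\mathrm{sin}|`$ parameterization of $`\eta (t)`$ are our assumptions, not the paper’s). Two oscillators drifting out of phase give an oscillatory integrand whose time integral, in units of $`t_{\mathrm{ph}}`$, stays well below unity:

```python
import math

# Idealise the vertical motions as eta_i(t) = |sin(omega_i t)| in [0, 1];
# omega_1 != omega_2 mimics the phase drift caused by an initial energy offset.
w1, w2 = 1.00, 1.05            # assumed vertical frequencies (1/Gyr)
t_G = 400.0                    # long evolution time: many dephasings
t_ph = math.pi / abs(w1 - w2)  # rough time to drift fully out of phase

N = 200_000
dt = t_G / N
integral = 0.0
for k in range(N):
    t = -t_G + (k + 0.5) * dt  # midpoint rule over [-t_G, 0]
    integral += (abs(math.sin(w1 * t)) - abs(math.sin(w2 * t))) * dt

xi = integral / t_ph
print(abs(xi))   # well below 1: the oscillatory integrand self-cancels
```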
### 2.4 Accuracy of parallax and radial velocity from GAIA
Fig. 3 shows the stream in various slices of the 6D phase space observable by GAIA. One of the challenges of using streams to constrain the potential is the measurement errors in proper motion ($`\mu `$ in $`\mu \mathrm{as}\mathrm{yr}^{-1}`$), parallax ($`\pi `$ in $`\mu \mathrm{as}`$), and heliocentric radial velocity ($`V_h`$ in $`\text{ km\hspace{0.17em}s}^{-1}`$), particularly the latter two. For HB stars, the errors are functions of parallax. We find that the simple formula
$$\mathrm{Err}[\pi ]=1.6\mathrm{Err}[\mu ]=\mathrm{Err}[V_h]=5+50(50/\pi )^{1.5},$$
(7)
works well in approximating the GAIA specifications (Lindegren 1998, private communication) for the errors on parallax, proper motion and radial velocity.
Proper motions will be precisely measured everywhere in the Galaxy. A remarkable feature of GAIA is that it can resolve the internal proper motion dispersion of a satellite. Even a small satellite (e.g. a globular cluster) has a dispersion $`\sigma \sim 4\text{ km\hspace{0.17em}s}^{-1}`$. This means that the dispersion in proper motion is larger than the resolution limit of GAIA for HB stars anywhere within $`20`$ kpc ($`\pi \gtrsim 50\mu \mathrm{as}`$, $`\mathrm{Err}[\mu ]\lesssim 40\mu \mathrm{as}\mathrm{yr}^{-1}=4\text{ km\hspace{0.17em}s}^{-1}`$ at $`20`$ kpc, cf. Eq. 7). In comparison, radial velocities remain accurate at the $`20\text{ km\hspace{0.17em}s}^{-1}`$ level, and astrometric parallaxes remain superior to photometric parallaxes only within $`10`$ kpc ($`\pi \gtrsim 100\mu \mathrm{as}`$, $`V\lesssim 16`$ mag) because of the rapid growth of the error bars with the magnitude of a star.
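The error budget quoted here can be checked directly from Eq. (7). The sketch below (the function names and packaging are ours; the $`4.74`$ conversion between $`\mathrm{mas}\mathrm{yr}^{-1}`$, kpc and $`\text{km\hspace{0.17em}s}^{-1}`$ is standard) evaluates the proper-motion error at $`\pi =50\mu \mathrm{as}`$, i.e. 20 kpc, and converts it to a transverse velocity, recovering a value consistent with the $``$4 km s<sup>-1</sup> figure:

```python
def err_parallax(pi_uas):
    """Parallax error in micro-arcsec for an HB star, Eq. (7)."""
    return 5.0 + 50.0 * (50.0 / pi_uas) ** 1.5

def err_proper_motion(pi_uas):
    """Proper-motion error in micro-arcsec/yr; Eq. (7): Err[pi] = 1.6 Err[mu]."""
    return err_parallax(pi_uas) / 1.6

def mu_to_kms(mu_uasyr, d_kpc):
    """1 uas/yr at d kpc corresponds to 4.74e-3 * d km/s of transverse velocity."""
    return 4.74e-3 * mu_uasyr * d_kpc

pi = 50.0                                       # parallax of a star at 20 kpc
print(err_proper_motion(pi))                    # ~34 uas/yr
print(mu_to_kms(err_proper_motion(pi), 20.0))   # ~3.3 km/s, below the ~4 km/s
                                                # internal dispersion of a satellite
```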
Fortunately, a “theoretical” parallax and heliocentric velocity can be predicted to good accuracy from the property that the angular momentum and energy are roughly constant along the stream, i.e.,
$$𝐉=𝐫\times 𝐕\approx \mathrm{constant},$$
(8)
$$E\approx (200\text{ km\hspace{0.17em}s}^{-1})^2\mathrm{log}r+\frac{1}{2}𝐕^2\approx \mathrm{constant}.$$
(9)
Here we pretend that the Galactic potential is the simplest spherical, static, isothermal potential, and feed in the accurately measured proper motions. Surprisingly, these very rough approximations yield fairly accurate parallaxes ($`\sim 10\%`$) and heliocentric velocities ($`\sim 30\text{ km\hspace{0.17em}s}^{-1}`$), as shown by the narrow bands in Fig. 3; the predictions tend to be poorer for particles in the center and anti-center directions, where the angular momentum $`𝐉`$ becomes insensitive to the heliocentric velocity. But overall it appears promising to apply this method to predict velocities and parallaxes for fainter stars, for which the predictions are comparable to or better than those directly observable by GAIA. The accuracy of these predictions is verifiable with direct observations of the brighter members of a stream.
In essence our method is a variation of the classical method of obtaining “secular parallaxes”. A polar stream with zero net azimuthal angular momentum, $`J_z\approx 0`$, makes the simplest example. Any proper motion in the longitude direction $`\mu _l`$ is merely due to the solar reflex motion, so parallaxes can be recovered (to about $`10\mu \mathrm{as}`$ accuracy!) from the linear regression $`\pi \approx |\mu _l|/40`$ shown in the top right panel of Fig. 3.
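The $`\pi \approx |\mu _l|/40`$ regression follows from dimensional bookkeeping alone. A minimal sketch, assuming a solar reflex speed of 200 km s<sup>-1</sup> fully projected onto $`\mu _l`$ (numbers are illustrative):

```python
V_sun = 200.0   # km/s, assumed reflex speed projected on the stream plane

# v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]  and  pi [arcsec] = 1/d [pc]
# => mu_l = V_sun * pi / 4.74,  i.e.  pi ~ mu_l / 42 ~ mu_l / 40.
def secular_parallax(mu_l):
    """Parallax in the same angular units as mu_l (per year)."""
    return 4.74 * mu_l / V_sun

# A star at ~10 kpc (pi = 100 uas) shows a reflex proper motion of ~4200 uas/yr:
print(secular_parallax(4219.0))   # ~100 uas, i.e. ~10 kpc
```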
### 2.5 Cool streams
Figs. 3 and 4 show a somewhat surprising result: streams stay identifiable in a variety of realistic time-dependent potentials. Fig. 3 shows a slice of a stream in the proper motion vs. proper motion diagram, as well as in the position vs. proper motion diagram. A stream shows up as a narrow feature, clearly traceable in the position vs. proper motion diagram even after convolving with GAIA errors. The narrowness of the distribution in proper motion space argues that stars in a stream are distinct from random field stars.
We have also run simulations with various parameters for the Galactic potential, the orbit, and the initial size of the satellite. Fig. 4 shows a few streams in the position vs. proper motion diagram. While extensive numerical investigations are clearly required to establish whether these “cool” linear features can be used to decipher the exact evolution history, evolution itself clearly does not preclude the identification of streams. We caution that the structure of a stream can become very noisy for highly eccentric orbits with a pericenter smaller than 8 kpc and/or for potentials where the temporal fluctuation of the rotation curve is greater than 10%. Such noisy structures, the result of strong evolution, can be challenging to detect.
We conclude that tidal streams are excellent tracers of the Galactic potential as long as a stream maintains a cool, spaghetti-like structure; in particular, the results of Johnston et al. (1999) and Helmi et al. (1999) for static Galactic potentials largely generalize to realistically evolving potentials. Perhaps the most exciting implication of these preliminary results, however, is that by mapping the proper motions along the debris with GAIA we could eventually set limits on the rate of evolution of the Galactic potential, and distinguish among scenarios of Galaxy formation.
## 3 Discussions of Strategies
### 3.1 Why targeting streams?
Which is the better tracer of the Galactic potential: stars in a cold stream, or random stars in the field? Classical approaches use field stars or random globular clusters or satellites as tracers of the Galactic potential. Assuming that they are random samples of the distribution function (DF) of the halo, one uses, e.g., the Jeans equation to obtain the potential. One needs a large number of stars to beat down statistical fluctuations, typically a few hundred stars at each radius for ten different radii. The problem is also often under-constrained because of the large number of degrees of freedom in choosing the 6-dimensional DF. Another complication is that one generally cannot assume that the halo field stars are in a steady state as an ensemble, because complete phase-mixing typically takes much longer than a Hubble time at radii of 30 kpc or more.
Stars in a stream trace a narrow bunch of orbits in the vicinity of the orbit of the center of mass, and are correlated in orbital phase: they can all be traced back to a small volume (e.g., near the pericenters of the satellite orbit) where they were once bound to the satellite. Hence we expect a tight constraint on the parameters of the Galactic potential and the initial conditions of the center of mass of the satellite (about a dozen parameters in total) by fitting the individual proper motions of one hundred or more stars along a stream, since the fitting problem is over-constrained.
We propose to select bright horizontal branch (HB) and giant branch (GB) stars as tracers of the tidal debris of a halo satellite (which we take to be either a dwarf galaxy or a globular cluster). They are bright, with $`M_V\sim 0.75`$ mag, and easily observable with GAIA ($`V\lesssim 18`$ mag) within 20 kpc of the Galactic center.
### 3.2 Field contamination
There are numerous HB and GB stars in a satellite. Assume that between $`f=0.5\%`$ and $`f=50\%`$ of the stars in the original satellite are freed, by two-body relaxation processes (as for a dense globular cluster) and/or by the tidal force (as for a fluffy dwarf galaxy). Then the number of HB stars in a stream is
$$N_{\mathrm{stream}}=\frac{fL_V}{(L_V/N_{\mathrm{HB}})}=10^2\text{–}10^4,$$
(10)
where we adopt one HB star per $`L_V/N_{\mathrm{HB}}=(540\pm 40)L_{\odot }`$ (Preston et al. 1991) for a globular cluster or a dwarf galaxy with a total luminosity $`L_V=10^5`$–$`10^7L_{\odot }`$.
In comparison, the entire metal-poor halo has a total luminosity of $`(3\pm 1)\times 10^7L_{\odot }`$ and only about $`N_{\mathrm{halo}}=(6\pm 2)\times 10^4`$ HB stars (Kinman 1994). These halo stars are very spread out in velocity, with a proper motion dispersion $`\sigma _{\mathrm{halo}}\sim 3000\mu \mathrm{as}\mathrm{yr}^{-1}`$. So the number of field stars which happen to share the same proper motion as a stream is
$$N_{\mathrm{field}}=N_{\mathrm{halo}}\left(\frac{\sigma _{\mathrm{stream}}}{\sigma _{\mathrm{halo}}}\right)^2\approx 70,$$
(11)
where the dispersion of a stream, $`\sigma _{\mathrm{stream}}`$, is generously set at $`100\mu \mathrm{as}\mathrm{yr}^{-1}`$, appropriate for a nearby massive dwarf galaxy with a velocity dispersion of $`10\text{ km\hspace{0.17em}s}^{-1}`$ at $`20`$ kpc. The chance of confusing an HB/GB star in the field with one in a stream becomes even smaller if we select stars in a small patch of the sky with similar radial velocities and photometric parallaxes. In fact, Sgr was discovered on the basis of radial velocity and photometric parallax alone, despite a dense foreground of bulge stars (Ibata, Gilmore, & Irwin 1994). We conclude that, as far as identifying stars in a cold stream with GAIA is concerned, contamination from field halo stars is unlikely to be a serious problem.
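The contamination estimate of Eqs. (10)–(11) is simple enough to reproduce in a few lines. The sketch below packages the numbers quoted in the text (the function names are ours):

```python
# One HB star per ~540 L_sun (Preston et al. 1991).
L_per_HB = 540.0

def n_stream(f, L_V):
    """Eq. (10): HB stars freed into the stream, for stripped fraction f."""
    return f * L_V / L_per_HB

def n_field(N_halo, sig_stream, sig_halo):
    """Eq. (11): halo HB stars sharing the stream's proper motion."""
    return N_halo * (sig_stream / sig_halo) ** 2

print(n_stream(0.005, 1e7))          # ~1e2, lower end of the quoted range
print(n_stream(0.5, 1e7))            # ~1e4, upper end
print(n_field(6e4, 100.0, 3000.0))   # ~70 field interlopers
```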
We thank Amina Helmi for discussions and Tim de Zeeuw for helpful comments on an earlier draft.
# Resilient Reducibility in Nuclear Multifragmentation
## Abstract
The resilience of the reducibility and thermal scaling observed in nuclear multifragmentation to averaging over an initial energy distribution is studied. Poissonian reducibility and the associated thermal scaling of the mean are shown to be robust. Binomial reducibility and thermal scaling of the elementary probability are robust under a broad range of conditions. The experimental data do not show any indication of deviation due to averaging.
The complexity of nuclear multifragmentation underwent a remarkable simplification when it was empirically observed that many aspects of this process were: a) “reducible”; and b) “thermally scalable” .
“Reducibility” means that a given many-fragment probability can be expressed in terms of a corresponding one-fragment probability, i.e., the fragments are emitted essentially independent of one another.
“Thermal scaling” means that the one-fragment probability so extracted has a thermal-like dependence, i.e., it is essentially a Boltzmann factor.
Both “reducibility” and “thermal scaling” were observed in terms of a global variable, the transverse energy $`E_t`$ (defined as $`E_t=\sum _iE_i\mathrm{sin}^2\theta _i`$, i.e. the sum of the kinetic energies $`E_i`$ of all charged particles in an event, weighted by the sine squared of their polar angles $`\theta _i`$), which was assumed (see below) to be proportional to the excitation energy of the decaying source(s) .
In particular, it was found that the $`Z`$-integrated multiplicity distributions $`P(n)`$ were binomially distributed, and thus “reducible” to a one-fragment probability $`p`$. With higher resolution, it was noticed that for each individual fragment species of a given $`Z`$, the $`n_Z`$-fragment multiplicities $`P(n_Z)`$ obeyed a nearly Poissonian distribution, and were thus “reducible” to a single-fragment probability proportional to the mean value $`\langle n_Z\rangle `$ for each $`Z`$ .
The one-fragment probabilities $`p`$ showed “thermal scaling” by giving linear Arrhenius plots of $`\mathrm{ln}p`$ vs $`1/\sqrt{E_t}`$, where it is assumed that $`\sqrt{E_t}\propto T`$. Similarly, the $`n`$-fragment charge distributions $`P_n(Z)`$ were shown to be both “reducible” to a one-fragment $`Z`$ distribution and “thermally scalable” . Even the two-fragment angular correlations $`P_{1,2}(\mathrm{\Delta }\varphi )`$ were shown to be expressible in terms of a one-body angular distribution, with amplitudes that are “thermally scalable” . Table I gives a summary of the “reducible” and “thermal scaling” observables.
Empirically, “reducibility” and “thermal scaling” are pervasive features of nuclear multifragmentation. “Reducibility” proves nearly stochastic emission. “Thermal scaling” gives an indication of thermalization.
Recently, there have been some questions on the significance (not the factuality) of “reducibility” and “thermal scaling” in the binomial decomposition of $`Z`$-integrated multiplicities . For instance, had the original distribution in the true excitation-energy variable been binomially distributed and thermally scalable, wouldn’t the process of transforming from excitation energy $`E`$ to transverse energy $`E_t`$ through an (assumedly) broad transformation function $`P(E,E_t)`$ destroy both features?
Specifically, under a special choice of averaging function (Gaussian), for a special choice of parameters (variance from GEMINI ), and for special input $`p`$ (the excitation energy dependent one-fragment emission probability) and $`m`$ (the number of “throws” or attempts) to the binomial function, the binomial parameters extracted from the averaged binomial distribution are catastrophically altered, and the initial thermal scaling is spoiled . This “spoiling” in is not due to detector acceptance effects (which has been commented on extensively in ), but rather is due to the intrinsic width of correlation between $`E_t`$ and $`E`$ as discussed below.
It should be pointed out that, while the decomposition of the many-fragment emission probabilities $`P(n)`$ into $`p`$ and $`m`$ may be sensitive to the averaging process, the quantity $`mp`$ is not . However, both $`p`$ and $`mp`$ are known to give linear Arrhenius plots with essentially the same slope (see below). This by itself demonstrates that no damaging average is occurring.
Furthermore, we have observed that by restricting the definition of “fragment” to a single $`Z`$, the multiplicity distributions become nearly Poissonian and thus are characterized by the average multiplicity $`mp`$ which gives well behaved Arrhenius plots . Thus, the linearity of the Arrhenius plots of both $`p`$ and $`mp`$ extracted from all fragments, and the linearity of the Arrhenius plots of $`mp`$ for each individual $`Z`$ value eliminate observationally the criticisms described above. In fact, it follows that no visible damage is inflicted by the true physical transformation from $`E`$ to $`E_t`$. Therefore, the experimental Poisson “reducibility” of multiplicity distributions for each individual $`Z`$ and the associated “thermal scaling” of the means eliminates observationally these criticisms.
We proceed now to show in detail that: 1) binomial reducibility and thermal scaling are also quite robust under reasonable averaging conditions; 2) the data do not show any indication of pathological behavior.
We first discuss the possible origin and widths of the averaging distribution.
It is not apparent why the variance of $`P(E,E_t)`$ calculated from GEMINI should be relevant. GEMINI is a low-energy statistical code and is singularly unable to reproduce intermediate mass fragment (IMF: $`3\le Z\le 20`$) multiplicities, the magnitudes of $`E_t`$, and other multifragmentation features. There is no reason to expect that the variance in question is realistic.
Apparently, $`E_t`$ does not originate in the late thermal phase of the reaction. Rather, it seems to be dominated by the initial stages of the collision. Consequently its magnitude may reflect the geometry of the reaction and the consequent energy deposition in terms of the number of primary nucleon-nucleon collisions. This is attested to by the magnitude of $`E_t`$ which is several times larger than predicted by any thermal model. Thus, the worrisome “thermal widths” are presumably irrelevant.
Since there is no reliable way to determine the actual resolution of the correlation between $`E_t`$ and $`E`$, experimentally or via simulation calculations , instead of using large or small variances, we will show:
a) which variables control the divergence assuming a Gaussian distribution, and in what range of values the averaging is “safe”, i.e. it does not produce divergent behavior;
b) that the use of Gaussian tails is dangerous and improper unless one shows that the physics itself requires such tails.
The input binomial distribution is characterized by $`m`$, the number of throws (assumed constant in the calculations in ), and $`p`$ which has a characteristic energy dependence of
$$\mathrm{log}\frac{1}{p}\propto \frac{B}{\sqrt{E_t^{\prime }}}.$$
(1)
We denote the one-to-one image of $`E`$ in $`E_t`$ space with a prime symbol.
The averaging in is performed by integrating the product of an exponential folded with a Gaussian (Eq. (12) of ).
$$p\propto \int \mathrm{exp}\left(-\frac{B}{\sqrt{x}}-\frac{(xx_0)^2}{2\sigma ^2}\right)𝑑x.$$
(2)
If the slope of the exponential is large, there will be 1) a substantial shift $`ϵ`$ in the peak of the integrand, and 2) a great sensitivity to the tail of the Gaussian.
The shifts $`ϵ_p`$ and $`ϵ_{p^2}`$ can be approximately evaluated:
$$ϵ_p=\frac{\sigma ^2B}{2x_0^{3/2}}$$
(3)
$$ϵ_{p^2}=2\frac{\sigma ^2B}{2x_0^{3/2}}.$$
(4)
This illustrates the divergence at small values of $`x_0`$, both in the shift of the integrand in $`p`$ and $`p^2`$ and in the corresponding divergence of $`\sigma _p^2=\langle p^2\rangle -\langle p\rangle ^2`$. The scale of the divergence is set by the product $`\sigma ^2B`$. Thus one can force a catastrophic blowup by choosing a large value of $`\sigma ^2`$, of $`B`$, or of both. This is what has been shown to happen with large values of $`\sigma ^2`$ and $`B`$. The counterpart is that there possibly exists a range of values of $`B`$ and $`\sigma ^2`$ for which the averaging process is “safe”.
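The shift estimate of Eq. (3) can be verified numerically. Reading the integrand of Eq. (2) as $`\mathrm{exp}(-B/\sqrt{x}-(x-x_0)^2/2\sigma ^2)`$, the sketch below (with illustrative parameter values) locates its peak on a fine grid and compares with $`x_0+ϵ_p`$:

```python
import math

B, x0, sigma = 40.0, 300.0, 38.2   # MeV^(1/2), MeV, MeV (illustrative)

def integrand(x):
    """Thermal exponential folded with the Gaussian, Eq. (2)."""
    return math.exp(-B / math.sqrt(x) - (x - x0) ** 2 / (2 * sigma ** 2))

# Scan 100..500 MeV in 0.01 MeV steps and find the peak position.
xs = [100.0 + 0.01 * k for k in range(40_000)]
x_peak = max(xs, key=integrand)

eps_p = sigma ** 2 * B / (2.0 * x0 ** 1.5)   # Eq. (3)
print(x_peak, x0 + eps_p)   # the peak sits a few MeV above x0, as predicted
```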
In order to illustrate this, we have calculated the “apparent” values of the single fragment emission probability $`p_{app}`$ for widths characterized by the ratio of the full width at half maximum $`\mathrm{\Gamma }_{E_t}`$ over $`E_t`$. Specifically we have extracted $`p_{app}`$:
$$p_{app}=1-\frac{\sigma _n^2}{\langle n\rangle }$$
(5)
and $`m_{app}`$:
$$m_{app}=\frac{\langle n\rangle }{p_{app}}$$
(6)
by calculating the observed mean:
$$\langle n\rangle =\int \underset{n=0}{\overset{m}{\sum }}nP_n^m(E_t^{\prime })g(E_t^{\prime })dE_t^{\prime }$$
(7)
and variance:
$$\sigma _n^2=\left[\int \underset{n=0}{\overset{m}{\sum }}n^2P_n^m(E_t^{\prime })g(E_t^{\prime })dE_t^{\prime }\right]-\langle n\rangle ^2$$
(8)
for “thermal” emission probabilities $`P_n^m`$ folded with a Gaussian distribution $`g(E_t)`$. We have assumed $`m`$ is constant.
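Eqs. (5)–(8) can be put together in a short numerical sketch. We fold the binomial moments with a Gaussian $`g(E_t)`$ and extract $`p_{app}`$ and $`m_{app}`$; the thermal form $`p(E_t)=\mathrm{exp}(-B/\sqrt{E_t})`$ and the parameter values below are illustrative choices, not the experimental ones. The folding inflates the variance, pushing $`p_{app}`$ below the input $`p`$ and $`m_{app}`$ above the input $`m`$, which is the distortion discussed in the text:

```python
import math

B, m = 40.0, 12
x0, sigma = 300.0, 38.2   # MeV; corresponds to Gamma/E_t ~ 0.3

def p_true(x):
    """Thermal one-fragment probability, Eq. (1) with unit prefactor."""
    return math.exp(-B / math.sqrt(x))

def fold(f, n=20_000):
    """Midpoint-rule integral of f(x) * g(x) dx over +-5 sigma."""
    lo, hi = x0 - 5 * sigma, x0 + 5 * sigma
    dx = (hi - lo) / n
    s = 0.0
    for k in range(n):
        x = lo + (k + 0.5) * dx
        g = math.exp(-(x - x0) ** 2 / (2 * sigma ** 2))
        s += f(x) * g * dx
    return s

norm = fold(lambda x: 1.0)
mean_n = fold(lambda x: m * p_true(x)) / norm                  # Eq. (7)
mean_n2 = fold(lambda x: m * p_true(x) * (1 - p_true(x))       # binomial
               + (m * p_true(x)) ** 2) / norm                  # second moment
var_n = mean_n2 - mean_n ** 2                                  # Eq. (8)

p_app = 1.0 - var_n / mean_n                                   # Eq. (5)
m_app = mean_n / p_app                                         # Eq. (6)
print(p_app, p_true(x0), m_app)   # p_app < p(x0) and m_app > m:
                                  # the folding inflates the variance
```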
For a value of $`\mathrm{\Gamma }_{E_t}/E_t`$=0.3, $`m`$=12, and $`B`$=40 MeV<sup>1/2</sup> (consistent with the upper limits of the slopes observed in the Xe-induced reactions ), the onset of divergence is observed in the Arrhenius plot at small values of $`E_t`$ (top left panel of Fig. 1, open circles). For $`\mathrm{\Gamma }_{E_t}/E_t`$=0.2 (open circles in the bottom left panel of Fig. 1), the divergent behavior is shifted to even lower energies and the resulting Arrhenius plot remains approximately linear: the thermal signature survives. For both widths, the linear (thermal) scaling survives in the physically explored range $`1/\sqrt{E_t}\le 0.08`$ ($`E_t\ge `$ 150 MeV), shown by the dashed lines in Fig. 1. As we shall see below, the effect is weaker for even lower values of $`B`$, which are commonly seen experimentally.
The divergent behavior manifests itself as well in the parameter $`m`$, the number of “throws” in the binomial description. Values of $`m_{app}`$ are plotted (open circles) as a function of $`E_t`$ in the right column of Fig. 1 for $`\mathrm{\Gamma }_{E_t}/E_t`$=0.3 (top panel) and $`\mathrm{\Gamma }_{E_t}/E_t`$=0.2 (bottom panel).
While the distortions depend mostly on the variance of the energy distribution, distributions with similar widths can have very different variances. For instance, a Lorentzian distribution with finite $`\mathrm{\Gamma }`$ has infinite variance; its use would lead to a divergence even for infinitely small values of $`\mathrm{\Gamma }`$. Thus, even innocent trimmings of the (non-physical) tails of a Gaussian can produce big differences in the variance of the distribution and in the ensuing corrections. We exemplify this point in two ways.
a) We use a “square” distribution with a width equal to the full width at half maximum of the Gaussian. As can be seen by the star symbols of Fig. 1 this simple exercise dramatically extends the range over which the average can be performed safely.
b) We truncate the tails of the Gaussian at 1$`\sigma `$ (diamonds) and 2$`\sigma `$ (solid circles) in Fig. 1. Already the cut at $`2\sigma `$ shows a dramatic improvement over a full Gaussian. The 1$`\sigma `$ cut actually makes things even better than the square distribution (as seen in Fig. 1).
To illustrate the conditions under which the “thermal” scaling survives (i.e. linear Arrhenius plots as a function of $`1/\sqrt{E_t}`$), we have traced the evolution of the “divergence energy” (the point at which $`m_{app}`$ and $`p_{app}`$ change sign) as a function of the two parameters which control the strength of the divergence: the slope parameter $`B`$ and the variance $`\sigma ^2`$ (hereafter characterized by its full width at half maximum, $`\mathrm{\Gamma }_{E_t}\approx 2.35\sqrt{\sigma ^2}`$).
A particular example for $`\mathrm{\Gamma }_{E_t}/E_t`$=0.3 is shown by the open circles in the top panel of Fig. 2. In addition, values of the divergence energy for 1$`\sigma `$ and 2$`\sigma `$ truncations of the Gaussian, as well as for a square distribution, are also plotted. For all practical purposes, divergences that occur below 100 MeV do not substantially alter the linear Arrhenius plots as they have been observed to date in the $`E_t`$ range of 150 to 1600 MeV.
In a similar manner, the dependence of the divergence energy can also be determined as a function of the relative width $`\mathrm{\Gamma }_{E_t}/E_t`$ (for a fixed value of $`B`$). This behavior is demonstrated in the bottom panel of Fig. 2.
A more global view of the parameter space is shown in Fig. 3, where the divergence energy is plotted (contour lines) as a function of the width $`\mathrm{\Gamma }_{E_t}/E_t`$ and the slope $`B`$. The shape of the contour lines reflects the $`\sigma ^2B`$ scaling deduced in Eqs. (3) and (4). The calculation in ref. sits near the upper right-hand corner of the graph. But, as is clearly demonstrated, large regions exist where binomial reducibility and thermal scaling survive (roughly the region with divergence energies below 100 MeV).
From the above exercises it is concluded that there is abundant room for the survival of binomiality and thermal scaling.
In this second part, we show that none of the symptoms of divergence are present in the available experimental data . Furthermore, the average fragment multiplicity $`\langle n\rangle `$ is expected to be “distortion free” . As such, it provides a baseline reference against which to compare the “distorted” variable $`p_{app}`$ (to verify whether the label “distorted” is appropriate). In addition, we can force the divergence to appear in the data by artificially broadening the $`E_t`$ bins, thus establishing that it is not present with ordinary (small) $`E_t`$ bins. Finally, we show that thermal scaling is present and persists in the data even when the divergence is forced.
First, we draw attention again to the two pathological features arising from excessive averaging: 1) the quantity $`m`$ diverges near $`E_t`$=0; 2) the quantity $`1/p`$ suffers a corresponding discontinuity at the same low energy.
Inspection of the published data shows that:
1) $`m`$ never diverges near $`E_t`$ = 0. To the contrary $`m`$ remains relatively constant or actually decreases with decreasing $`E_t`$. This is particularly true for all of the Xe induced reactions (see Fig. 6);
2) $`\mathrm{log}1/p`$ is nearly linear vs. $`1/\sqrt{E_t}`$ over the experimental $`E_t`$ range without the indications of trouble suggested by the calculations in the previous section.
Thus the experimental data do not show any signs of pathological features.
The quantity $`\langle n\rangle =m_{app}p_{app}`$ does not suffer from the distortions due to averaging. In fact, $`\langle n\rangle `$ is a suitable alternative for constructing an Arrhenius plot in those cases where $`m`$ depends only weakly on $`E_t`$ (as observed in many of the data sets we have studied). A comparison of the Arrhenius plots constructed from $`1/p_{app}`$ and $`1/\langle n\rangle `$ is shown in Fig. 4. The striking feature of this comparison is that the $`1/p_{app}`$ values have the same slope as the “distortion-free” case of $`1/\langle n\rangle `$. Similar observations can be made for all the other reactions studied so far. As a consequence, both the “fragile” $`p`$ and the “robust” $`mp`$ survive the physical transformation $`P(E,E_t)`$ unscathed.
When the probability becomes small, the binomial distribution reduces to a Poisson distribution. This can be achieved experimentally by limiting the selection to a single $`Z`$ . The observed average multiplicity is then experimentally equal to the variance. Thus we are in the Poisson reducibility regime and can check the thermal scaling directly on $`\langle n_Z\rangle `$. For a Poisson distribution, $`\mathrm{log}\langle n_Z\rangle `$ should scale linearly with $`1/\sqrt{E_t}`$. This can be seen experimentally in the average yields of individual elements of a given charge (see Fig. 5) for the reaction Ar+Au at $`E/A`$=110 MeV. For the case of a single species, the reducibility is Poissonian, and the thermal (linear) scaling with $`1/\sqrt{E_t}`$ is readily apparent. As pointed out at the outset of this paper, this evidence, together with that of Fig. 4, indicates that no significant averaging is occurring, even in the case of the binomial decomposition.
The data can be “encouraged” to demonstrate the sort of catastrophic failures described here. By widening the bins in transverse energy ($`\mathrm{\Delta }E_t`$), we can induce an artificial broadening that mimics a broad correlation between $`E`$ and $`E_t`$. For example, the behavior of $`p_{app}`$ and $`m_{app}`$ is shown in Fig. 6 for three different widths and two different reactions. The divergences of $`p_{app}`$ and $`m_{app}`$ are readily visible for large $`\mathrm{\Delta }E_t`$ values, but are noticeably absent for small values. The spectacularly large binning in $`E_t`$ (100 MeV!) necessary to force the anticipated pathologies to appear is reassuring indeed. Notice that here the absolute width, not the relative width, was kept fixed, even at the lowest energies! Furthermore, the stability of $`\langle n\rangle `$ is readily apparent from the complete overlap of the values of $`\langle n\rangle `$ extracted for different windows of $`E_t`$ (open symbols of Fig. 6).
In summary:
a) Binomial reducibility and the associated thermal scaling survive in a broad range of parameter space. The single case shown in is an extreme one based on unsupported assumptions about the averaging function.
b) The experimentally observed simultaneous survival of the linear Arrhenius plot for parameter $`p`$ and the robust average $`mp`$ suggests that no serious damage is generated by the physical transformation $`P(E,E_t)`$.
c) The multiplicity distributions for any given $`Z`$ value are Poissonian and the resulting average multiplicity $`n=mp`$ gives linear Arrhenius plots confirming the conclusion in b).
d) Finally, the data themselves do not show any indication of pathological behavior. This can be seen, for instance, by comparing the behavior of $`p`$ with $`n`$. The pathology can be forced upon the data by excessively widening the $`E_t`$ bins. Even then, the thermal scaling survives in the average multiplicity.
Acknowledgments
This work was supported by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Nuclear Physics Division of the US Department of Energy, under contract DE-AC03-76SF00098.
# The cloud-in-cloud problem for non-Gaussian density fields
## 1 Introduction
One of the distinctive features in the Universe is the presence of gravitationally collapsed structures, like galaxies, groups and clusters of galaxies. The distribution of masses of these structures, usually called the mass or multiplicity function, has been determined observationally \[Ashman, Salucci & Persic 1993, Henry & Arnaud 1991, Eke et al. 1998, Markevitch 1998\], and is one of the most important characteristics of the Universe that proposed cosmological models attempting to explain the formation of structure need to reproduce.
In order to proceed with a comparison between these observations and the theoretical expectations of different structure formation models, it is of fundamental importance to be able to predict with reasonable accuracy the mass function associated with the theoretical models. The most direct way of doing this is to perform N-body simulations (for a recent review see Bertschinger 1998), where a distribution of dark matter particles alone, or in conjunction with gas particles, is evolved under gravity. However, these simulations take a considerable time to complete, being impracticable if a large number of structure formation models are being studied simultaneously. Another method for estimating the theoretical mass function is to use analytical approximations. Among the several that have been proposed, the framework put forward by Press and Schechter (1974, hereafter PS) has proved the most successful in reproducing the mass function obtained through N-body simulations, despite the very simplified assumptions that are made.
Until recently the approach proposed by Press and Schechter to estimate the mass function was almost always (but see Lucchin & Matarrese 1988) applied in the context of structure formation models where the perturbations induced in the density field have a Gaussian random-phase distribution independently of the scale considered. This assumption is not only the simplest to take, but is also expected when perturbations are produced by an inflationary phase in the very early Universe (e.g. Liddle & Lyth 1993). However, there is at present a renewed interest in structure formation models which predict a non-Gaussian density distribution, either within the context of inflation \[Peebles 1999, Salopek 1999, Martin, Riazuelo & Sakellariadou 1999\], or as a result of the dynamics of topological defects \[Avelino et al. 1998a, Albrecht, Battye & Robinson 1999, Avelino, Caldwell & Martins 1999, Contaldi, Hindmarsh & Magueijo 1999\]. In an attempt to compare their predictions with the observed mass function, particularly at the scale of galaxy clusters (Chiu, Ostriker & Strauss 1998; Koyama, Soda & Taruya 1999; Robinson, Gawiser & Silk 1999a,b; Willick 1999) a particular generalization of the Press-Schechter framework has been used, so that the assumed density field no longer needs to be Gaussian. This so-called Extended Press-Schechter (EPS) approach has recently been proved to be quite successful in reproducing the results for the mass function obtained from N-body simulations with non-Gaussian conditions \[Robinson & Baker 1999\].
The EPS method was obtained by closely following the reasoning behind the original PS work, in the hope that it would end up as successful in predicting the mass function. The fact that this seems to be the case increases our perplexity as to why the general PS framework works at all, given all the simplifications it entails, like spherical collapse and that at a given smoothing scale all the structures that form have equal mass. The particular issue we will study here is the so-called cloud-in-cloud problem, with the others addressed in forthcoming papers. This aspect of PS-based approaches to the calculation of the mass function has been investigated previously (Epstein 1983, 1984; Schaeffer & Silk 1988; Peacock & Heavens 1990; Bond et al. 1991; Jedamzik 1996; Yano, Nagashima & Gouda 1996; Monaco 1998), although always in the context of Gaussian initial conditions.
Until now the cloud-in-cloud problem within the EPS framework has been dealt with in the same simplified manner as in the original PS derivation. In order to determine to what extent such a treatment of the problem is justified, we follow the numerical approach laid down by Monaco (1997a,b; 1998), simulating density fields with non-Gaussian one-point probability distributions in cubic boxes with periodic boundary conditions.
In the first section we describe succinctly the Press-Schechter approach to the derivation of the mass function, and how it can be extended to accommodate non-Gaussian initial conditions. In the following section we discuss the cloud-in-cloud problem and how several authors have tried to avoid it, describing in detail the numerical approach we have chosen to study it. Finally, in the last two sections we present the results of our analysis and discuss their importance to the proposed Extended Press-Schechter framework.
## 2 Extended Press-Schechter
The Press-Schechter theory was originally proposed \[Press & Schechter 1974\], in the context of initial Gaussian density perturbations, as a simple analytical tool for predicting the mass fraction associated with collapsed objects with mass larger than some given mass threshold $`M`$. This is obtained by measuring the fraction of space in which the evolved linear density field exceeds a given density contrast $`\delta _\mathrm{c}`$,
$$F(>M)=\frac{\mathrm{\Omega }_\mathrm{m}(>M(R))}{\mathrm{\Omega }_\mathrm{m}}=\int _{\delta _\mathrm{c}}^{\infty }𝒫(\delta )𝑑\delta .$$
(1)
For spherical collapse in an Einstein-de Sitter universe $`\delta _\mathrm{c}`$ equals $`1.7`$, being almost insensitive to a change in the assumed background cosmology (e.g. Eke, Cole & Frenk 1996). In the case of Gaussian initial conditions then,
$$F(>M)=\frac{1}{2}\mathrm{erfc}\left(\frac{\delta _\mathrm{c}}{\sqrt{2}\sigma (R)}\right),$$
(2)
where $`\mathrm{erfc}`$ is the complementary error function and $`\sigma (R)`$ is the dispersion of the density field at the scale $`R`$. For a top-hat window, $`M`$ is related to $`R`$ via $`M=4\pi R^3\rho _\mathrm{b}/3`$, with $`\rho _\mathrm{b}`$ being the background density. The right hand side in expression (2) is usually multiplied by a factor of $`2`$, as originally suggested by Press and Schechter, so that
$$\int _0^{\infty }F(>M)=1,$$
(3)
thus taking into account the accretion of material initially present in regions underdense at the smoothing scale $`R`$. The mass function of collapsed objects can be obtained simply by differentiating expression (2) (multiplied by the factor $`2`$) with respect to $`M`$, and then dividing by $`\rho _\mathrm{b}/M`$.
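The Gaussian recipe above can be sketched in a few lines. The normalization of $`\sigma (M)`$ and the background density below are arbitrary placeholders, and the power-law scaling $`\sigma \propto M^{-(n+3)/6}`$ with $`n=-2`$ is the spectrum used later in the paper:

```python
import math

delta_c = 1.7   # spherical-collapse threshold, as in the text

def sigma(M, n=-2, sigma_star=1.0, M_star=1.0):
    # power-law spectrum: sigma(M) ~ M^{-(n+3)/6}; normalization is arbitrary here
    return sigma_star * (M / M_star) ** (-(n + 3) / 6.0)

def F_PS(M):
    # collapsed mass fraction, with the Press-Schechter factor of 2 included
    return math.erfc(delta_c / (math.sqrt(2.0) * sigma(M)))

def mass_function(M, rho_b=1.0, h=1e-4):
    # dn/dM = (rho_b / M) |dF/dM|, with dF/dM from a central difference
    dFdM = (F_PS(M * (1 + h)) - F_PS(M * (1 - h))) / (2 * h * M)
    return rho_b / M * abs(dFdM)

print(F_PS(1.0), mass_function(1.0))
```

Note that $`F(>M)\rightarrow 1`$ as $`M\rightarrow 0`$, which is exactly what the factor of 2 buys.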
The original PS approach can be easily generalized for non-Gaussian density perturbations, simply by considering in expression (1) a non-Gaussian one-point probability distribution function (henceforth PDF) $`𝒫(\delta )`$. In order for all the mass in the Universe to be accounted for the expression then needs to be multiplied by $`f=1/\int _0^{\infty }𝒫(\delta )𝑑\delta `$. This re-normalization is equivalent to multiplying by the factor $`2`$ in the Gaussian case. Surprisingly, this very simple extension of the PS approach to non-Gaussian initial conditions was only taken seriously for the first time by Chiu, Ostriker and Strauss (1997). Since then it has been tested with some success for a few non-Gaussian structure formation models against N-body simulations \[Robinson & Baker 1999\].
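As a concrete sketch of this extension (illustrative, with an arbitrary normalization), take a chi-square PDF with one degree of freedom shifted to zero mean, for which the survival function has the closed form $`P(X>x)=\mathrm{erfc}(\sqrt{x/2})`$:

```python
import math

delta_c = 1.7

def chi2_1_sf(x):
    # survival function of a chi-square with 1 dof: P(X > x) = erfc(sqrt(x/2))
    return math.erfc(math.sqrt(x / 2.0))

def F_EPS(sig):
    """EPS collapsed fraction for a nu=1 chi-square PDF shifted to zero mean."""
    # delta = sig * (X - 1) / sqrt(2), so delta > delta_c  <=>  X > 1 + sqrt(2) delta_c / sig
    f = 1.0 / chi2_1_sf(1.0)          # re-normalization: 1 / P(delta > 0)
    return f * chi2_1_sf(1.0 + math.sqrt(2.0) * delta_c / sig)

print(F_EPS(1.0))
```

Here the re-normalization factor is $`f\approx 3.15`$ rather than the Gaussian factor of 2, and again $`F\rightarrow 1`$ as $`\sigma \rightarrow \infty `$.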
It should be noted that as long as the dispersion of the density field depends on the smoothing scale $`R`$ considered, $`\sigma (R)`$, the PDF for the matter density is necessarily scale-dependent, $`𝒫(R,\delta )`$. What is usually meant by saying that such a PDF is invariant with scale is that the reduced distribution, $`𝒫_R(\delta )=𝒫[R,\delta /\sigma (R)]/\sigma (R)`$, is always the same, i.e. the shape of the reduced PDF does not depend on the scale under consideration. For example, the simplest inflationary models predict a Gaussian PDF for the matter density at all scales (e.g. Liddle & Lyth 1993), in the sense that at any scale the reduced form of the PDF is always equal to a Gaussian with zero mean and dispersion unity.
In the case of alternative structure formation models one could have two other scenarios. In the more general one, the shape of the reduced PDF for the matter density depends on the smoothing scale being considered. This is indeed what is expected when the matter density distribution is generated, for example, through the dynamics of topological defects \[Avelino et al. 1998b, Avelino, Wu & Shellard 1999\]. The second possibility is that the reduced PDF for the matter density is scale-independent, but non-Gaussian, i.e. not equal to a Gaussian with zero mean and dispersion unity. Here, for simplicity, we will focus our study on this second scenario, with the knowledge that it can be easily generalized to the first one.
## 3 Solving the cloud-in-cloud problem
As we have previously shown, there is a fundamental difficulty with the normalization of expression (1). We have seen that in the Gaussian case a factor of $`2`$ had to be introduced to correctly take into account the fact that material in initially underdense regions at some smoothing scale R is eventually accreted and incorporated into the collapsed objects that form from the initially overdense regions at that scale. In other words, the factor $`2`$ accounts for the material that although not in the regions predicted to collapse at the smoothing scale $`R`$, will nevertheless become part of collapsed objects associated with scales larger than $`R`$. Only by smoothing the density field at these scales would this material count as collapsed, but clearly this should happen in the first place when $`F(>M(R))`$ is calculated. This is the so-called cloud-in-cloud problem in the PS approach.
Several authors have tried to find a more satisfactory solution for the cloud-in-cloud problem than just multiplying expression (1) by $`2`$, as proposed by Press and Schechter. The first to approach the problem were Epstein (1983, 1984) and Schaeffer & Silk (1988). But it was only with the work of Peacock and Heavens (1990) and Bond et al. (1991) that a comprehensive framework was put in place to study the cloud-in-cloud problem and its ramifications. The assumptions behind the two approaches are very similar, with the latter using a more formal line of reasoning based on the theory of excursion sets. The method we will use here to study the cloud-in-cloud problem within the EPS framework was first used by Monaco (1997a,b; 1998) for the case of Gaussian density fields. It is basically a numerical implementation of the approach pioneered by Peacock and Heavens (1990) and Bond et al. (1991). Density fields with an assumed PDF, and which only differ in the scale at which the smoothing is applied, are generated in cubic grids. Starting from the largest scale, trajectories for all the points in the density field are then constructed with the values for the density contrast recorded at each smoothing scale. In the Gaussian case, and if the smoothing is performed with a sharp-k window, the trajectories followed by the points are Brownian random walks. In this very particular example, the number of points which exceed some density contrast at any smoothing scale is equal to the number of points which, though not exceeding that density contrast at such a scale, do exceed it at some larger scale. In this case the re-normalization factor one needs to multiply expression (1) with is exactly $`2`$, as Press and Schechter initially proposed. This was formally proved by Bond et al. (1991) using excursion set theory. Unfortunately, it is the only instance when their approach can be used to solve the cloud-in-cloud problem.
For any other smoothing window or probability distribution the random walk characteristic disappears, and the problem becomes analytically intractable. In these cases, either one uses the method proposed by Peacock and Heavens (1990), and expanded in Monaco (1997a,b; 1998), which also has its limitations, or the numerical approach considered here.
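The sharp-k Gaussian special case is simple enough to check with a toy Monte Carlo: trajectories $`\delta (S)`$, with $`S=\sigma ^2`$ playing the role of time, are Brownian walks, and the fraction of walks that ever cross $`\delta _\mathrm{c}`$ should approach twice the fraction that end above it. The step size below is a crude discretization, so the measured ratio comes out slightly under 2:

```python
import math, random

random.seed(1)
delta_c = 1.7
S_max = 9.0            # final variance, i.e. sigma = 3
n_steps = 300
n_walks = 8000
dS = S_max / n_steps

exceed = 0    # delta > delta_c at the final smoothing scale only
crossed = 0   # delta exceeded delta_c at ANY smoothing scale (first passage)
for _ in range(n_walks):
    d = 0.0
    hit = False
    for _ in range(n_steps):
        d += random.gauss(0.0, math.sqrt(dS))
        if d > delta_c:
            hit = True
    crossed += hit
    exceed += d > delta_c

print(exceed / n_walks, crossed / exceed)
```

The end-point fraction agrees with $`\frac{1}{2}\mathrm{erfc}(\delta _\mathrm{c}/\sqrt{2}\sigma )`$, and the first-crossing fraction is close to twice that, as the excursion-set argument requires.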
## 4 Results
Our numerical realizations of density fields were performed in cubic grids with $`64^3`$ points. They were generated in the same manner as the density fields used to set initial conditions in N-body simulations (see e.g. Bertschinger & Gelb 1991; Klypin & Holtzman 1997). We considered one example of a scale-independent reduced non-Gaussian PDF for the density contrast, the Chi-square, the PDF having been shifted so that its mean is zero. Such a PDF is expected in certain models of structure formation, involving either isocurvature density perturbations generated during an inflationary period in the very early Universe \[Peebles 1999\] or cosmic string seeded perturbations \[Avelino et al. 1998b, Avelino, Wu & Shellard 1999\]. We also considered the Gaussian case for comparison.
The Chi-square has the added attractive feature that the reduced PDF remains approximately the same when smoothed on a variety of length scales using either gaussian (GAU) or top-hat (TH) windows (e.g. White 1999). The reduced PDF that results from the smoothing becomes increasingly different from the original as the number of degrees of freedom, $`\nu `$, gets smaller, with the most important departures relatively to the original reduced PDF shape being a decrease in the probability of $`\delta /\sigma `$ taking values around zero, i.e. the mean, and the appearance of a non-zero probability of $`\delta /\sigma `$ taking values just outside the cut-off, equal to $`-\nu /\sqrt{2\nu }`$. Fortunately, these differences only become noticeable for $`\nu `$ smaller than about 10. However, when the smoothing is performed using a sharp-k (SK) window, the departure of the smoothed PDF from the original increases considerably. Now, as soon as $`\nu `$ becomes less than about 100 the difference starts to show. Nevertheless, we will be only interested in the fraction of the reduced PDF that lies above $`\delta _\mathrm{c}/\sigma `$, and this part of the distribution is little changed by smoothing, even in the case of a SK window.
With the above in mind, for different $`\nu `$ values we assumed that the power spectrum associated with the density contrast was a power-law, $`𝒫_\delta (k)\propto k^n`$, where $`k`$ is the wavenumber and $`n=-2,-1,0`$. We generated several sets of density fields for each combination of $`\nu `$ and $`n`$ values, such that all realizations within each set only differed in the smoothing scale applied. The three smoothing windows mentioned above were considered: sharp-k, top-hat, and a gaussian. However, we will only show the results for SK and GAU smoothing, given that those for TH smoothing turn out very similar to the GAU ones. Also, we opted for showing just the results obtained for $`n=-2`$, which is closest to the slope of the matter power spectrum on the scale of galaxy clusters \[Markevitch 1998\], as they are basically indistinguishable from the results for the two other spectral indices, $`n=-1`$ and $`n=0`$.
On the left panel of Fig. 1 we show, for the reduced Gaussian PDF, the fraction of mass collapsed above a certain smoothing scale as a function of the value of the dispersion of the density field at that scale, $`F(\sigma )`$, calculated using the method presented here and the EPS approach. The same quantities are shown on the left panel of Fig. 2 for a reduced Chi-square PDF with $`\nu =1`$, $`\nu =10`$, $`\nu =100`$ (note that the $`\nu =\infty `$ case is equivalent to a reduced Gaussian PDF).
On the right panels of Fig. 1 and Fig. 2 we show $`dF(\sigma )/d\sigma `$ instead. The theoretical mass function can be easily obtained by multiplying $`dF(\sigma )/d\sigma `$ by $`d\sigma /dM`$ and dividing by $`\rho _\mathrm{b}/M`$. In the case of power-law spectra as the ones we consider here, $`\sigma (M)\propto M^{-(n+3)/6}`$.
In all cases the threshold for the density contrast above which we assume a field point to be collapsed is $`\delta _\mathrm{c}=1.7`$, which is equivalent to assuming spherical collapse in an Einstein-de Sitter cosmology. The results presented can be easily generalized for any other threshold, $`\delta _\mathrm{c}^{\prime }`$, by making the identification $`F^{\prime }(\sigma )=F(\sigma \delta _\mathrm{c}/\delta _\mathrm{c}^{\prime })`$.
In the case of SK smoothing and in the limit of a reduced Gaussian PDF, the EPS approach (which then simply reduces to PS) correctly predicts $`F(\sigma )`$, and thus $`dF(\sigma )/d\sigma `$, as expected. The famous factor of $`2`$ in the normalization is recovered. As the number of degrees of freedom decreases the assumed reduced PDF starts to differ from a reduced Gaussian, and the EPS prediction increasingly overestimates the numerically determined collapsed mass fraction and mass function. In the case of the GAU window (as mentioned before the results for TH smoothing are very similar), the numerical results deviate from those predicted through the EPS approach even when assuming a Gaussian PDF. For this particular PDF, we find that for small values of $`\sigma `$ (i.e. large mass scales), both the predicted collapsed mass fraction and the mass function seem to approach the result one would obtain if the PS approach was used without the normalization factor of 2. The same conclusion had already been reached analytically by Peacock and Heavens (1990), being numerically confirmed by Bond et al. (1991) (see also Monaco 1997b).
## 5 Conclusions
We have shown that the EPS approach does not correctly take into account, in the estimation of the mass function, the existence of regions which, though unable to pass a certain collapse threshold at some smoothing scale, are nevertheless able to do so at larger scales. This problem was already known to exist within the PS framework, except when the smoothing of the density field was performed using a sharp-k window. Now, we have found that even when this window is used, the EPS approach cannot adequately solve the cloud-in-cloud problem.
The mass function predicted by the EPS approach deviates most from the numerical results for small values of $`\sigma `$, which correspond to the large mass end in models where structure builds up hierarchically. The only N-body simulations that have been used to check whether the EPS approach provides a good fit to the mass function were limited in range to values of $`\sigma `$ larger than around 0.5, except for a couple of points with relatively large error bars, and at the limit of statistical significance \[Robinson & Baker 1999\]. For these $`\sigma `$ values, the EPS prediction for the mass function has not yet entered the regime where it differs significantly from the numerical results obtained in this paper, particularly when one considers sharp-k smoothing. It is therefore not surprising that the N-body results seem to vindicate the EPS approach. However, our analysis should sound a note of caution. In the regime where the mass function is defined essentially through the abundance of rare density peaks, the EPS approach may not be reliable when one is dealing with strongly non-Gaussian PDFs. This may affect some recent conclusions regarding the Gaussianity of the density field on large scales, drawn from the abundance of rich galaxy clusters at different redshifts and their present-day correlation length (Chiu, Ostriker & Strauss 1998; Koyama, Soda & Taruya 1999; Robinson, Gawiser & Silk 1999a,b; Willick 1999). It would be very interesting if larger N-body simulations could be carried out, able to extend the mass function further into the low-$`\sigma `$, rare events regime.
The mass functions determined in this paper can still be further improved. Here we focused our attention on the cloud-in-cloud problem. Other issues were left untouched, most importantly possible deviations from spherical collapse and the relation between the smoothing radius and the mass of the structures identified after each smoothing. We are presently looking at these issues.
## 6 Acknowledgments
P.P.A. and P.T.P.V. are supported through the PRAXIS XXI program of FCT.
# The Physics Opportunities and Technical Challenges of a Muon Collider
(Paper submitted to Columbia University of New York in partial fulfillment of the requirements for a Doctorate of Philosophy.)
## 1 Introduction
The continued success of the standard model (SM) of elementary particle physics has gradually but fundamentally altered the character of experimental high energy physics in the past decade or so. Ever more precise, expensive and time-consuming experiments continue to agree with the predictions of the SM, and the only really good chance for new discoveries appears to be by searching at energies higher than previously attained (in the TeV energy range).
The high energy frontier also has its problems, as emphasized by the cancellation of the SSC accelerator. Colliding beam facilities tend to be very large, technically challenging and expensive.
The SSC and the proposed Large Hadron Collider (LHC) at CERN were designed to collide protons. Proton collisions have two main drawbacks:
* Protons are complex composite particles. The hard scattering interactions that could produce new high mass particles actually occur between the quark and gluon constituents of the proton, and each constituent particle carries only a fraction of the proton momentum. This lowers the actual collision energy and means that interactions occur at a range of center of mass (CoM) energies and rest frames. The mass reach of hadron colliders for discovering new particles is diluted by this, by a factor of roughly 10 to 20.
* The strongly interacting protons produce enormous numbers of uninteresting background particles from soft collisions. This tends to obscure the rare interesting processes and causes serious radiation and event triggering problems for the particle detectors.
The problems of hadron colliders are avoided by colliding electrons (and positrons). However, electrons have severe problems with synchrotron radiation which are specifically related to their light mass ($`\mathrm{M}_\mathrm{e}=0.511`$ MeV):
* The energy loss per revolution from synchrotron radiation for a charged particle in a circular accelerator of radius R is given by
$$\mathrm{\Delta }\mathrm{E}(\mathrm{MeV})=8.85\times 10^{-2}\frac{[\mathrm{E}(\mathrm{GeV})]^4}{\mathrm{R}(\mathrm{meters})}$$
(1)
This loss must be compensated for by using radio-frequency cavities to accelerate the beam. This quickly becomes prohibitive as the electron energy is increased. The most powerful circular accelerator for electrons will probably be the LEP-II accelerator at the CERN laboratory in Switzerland, which will come on-line in the next few years. The 27 kilometer ring will provide $`e^+e^{-}`$ collisions at CoM energies of 170 GeV. The only practical way of colliding electrons at energies higher than this is using single-pass collisions from pairs of opposed linear accelerators.
* Even linear electron colliders have the serious problem of “beamstrahlung” at the collision point. In future planned $`e^+e^{-}`$ colliders the magnetic fields generated from the intersection of high density electron and positron beams will reach thousands of Teslas, inducing the particles to emit intense synchrotron radiation. This lowers and spreads out the CoM energies of the collisions, and also creates a serious background of photons in the detector. In addition, the photons can interact with either individual electrons or the macroscopic electromagnetic field of the oncoming beam to produce low energy electron pairs, which also form an experimental background. Pair production becomes a prohibitive background when the critical synchrotron radiation energy of the magnetic fields (equation 14.85 of Jackson) approaches the electron beam energy.
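To put numbers on the first point: using equation (1) with the standard coefficient $`8.85\times 10^{-2}`$, and a bending radius roughly estimated from the 27 km LEP circumference (an over-estimate, since it ignores the straight sections), the per-turn loss for an 85 GeV electron is about 1 GeV, while the $`(\mathrm{M}_\mathrm{e}/\mathrm{M}_\mu )^4`$ scaling makes the corresponding muon loss utterly negligible:

```python
def delta_E_MeV(E_GeV, R_m, mass_MeV=0.511):
    # Synchrotron loss per turn; the formula is for electrons, and the loss
    # scales as 1/m^4, so rescale for a lepton of a different mass.
    return 8.85e-2 * E_GeV**4 / R_m * (0.511 / mass_MeV) ** 4

R_LEP = 27000.0 / (2 * 3.141592653589793)      # crude bending radius from the 27 km ring

loss_e = delta_E_MeV(85.0, R_LEP)              # electron: ~1 GeV per turn
loss_mu = delta_E_MeV(85.0, R_LEP, mass_MeV=105.66)   # muon: sub-eV per turn
print(loss_e, loss_mu)
```

The $`(0.511/105.66)^4\sim 5\times 10^{-10}`$ suppression is the whole motivation for circular muon machines.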
The above problems and the multi-billion dollar expense of proposed $`e^+e^{}`$ and proton colliders have provoked a pessimism in the high energy physics community about the experimental future of the field. Nevertheless, the importance of further experimental progress to the advancement of the field cannot be overstated. To quote Harvard theorist Sidney R. Coleman “Experiment is the source of imagination. All the philosophers in the world thinking for thousands of years couldn’t come up with quantum mechanics”. This impasse underlines the importance of novel accelerator technologies. In the opinion of well known experimental physicist Samuel C. Ting “We need revolutionary ideas in accelerator design more than we need theory. Most universities do not have an accelerator course. Without such a course, and an infusion of new ideas, the field will die.”
One idea that shows promise is to avoid the synchrotron radiation problems of electrons by using muons instead. These “fat electrons” have 200 times the mass of electrons ($`\mathrm{M}_\mu =105.66`$ MeV, cf. 0.511 MeV for electrons) and, in keeping with the idea of lepton universality, have otherwise nearly identical physics properties. They can be produced copiously by impinging proton beams on a target to produce pions and then letting the pions decay to muons. The one very serious drawback of muons is that they are unstable, decaying with a rest-frame lifetime of 2.2 $`\mu \mathrm{s}`$ into electrons and neutrinos:
$$\mu ^{-}\rightarrow e^{-}+\overline{\nu _e}+\nu _\mu .$$
(2)
This fact means that muon colliders must do everything very fast. The muons must be collected, “cooled” into small dense bunches, accelerated and collided before a significant fraction of them decay.
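A quick sense of the time pressure (beam energy chosen purely for illustration): relativistic dilation stretches the 2.2 μs rest-frame lifetime by $`\gamma =E/\mathrm{M}_\mu `$, so a 250 GeV muon has a lab-frame decay length of order 1500 km.

```python
import math

M_MU_MEV = 105.66
TAU_MU_S = 2.2e-6          # rest-frame lifetime
C_M_S = 2.998e8

def surviving_fraction(E_GeV, t_lab_s):
    # fraction of a muon bunch left after a lab-frame time t
    gamma = E_GeV * 1000.0 / M_MU_MEV
    return math.exp(-t_lab_s / (gamma * TAU_MU_S))

gamma = 250.0 * 1000.0 / M_MU_MEV                 # a 250 GeV beam
decay_length_km = gamma * TAU_MU_S * C_M_S / 1000.0
print(gamma, decay_length_km)
```

Since a collider ring is a few kilometers around, this still leaves of order a thousand turns before the bunch is substantially depleted.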
## 2 Physics Opportunities at the High Energy Frontier
The top quark and the Higgs boson are the two undiscovered elementary particles required to complete the original (and simplest) version of the SM – sometimes called the Minimal Standard Model (MSM). Experiments have set lower limits on the masses of the top quark and the Higgs particle of $`\mathrm{M}_{\mathrm{top}}\stackrel{>}{\sim }130`$ GeV and $`\mathrm{M}_{\mathrm{Higgs}}=48`$ GeV, respectively, while the consistency of the MSM requires $`\mathrm{M}_{\mathrm{top}}`$ to be below about 250 GeV and $`\mathrm{M}_{\mathrm{Higgs}}`$ to be below $`1`$ TeV. This means that a muon collider could be used to discover and/or study the properties of either of these.
The Fermilab Tevatron $`p\overline{p}`$ collider, operating at either 900 GeV or 1 TeV, appears to have a reasonable chance of discovering the top quark in the next few years, and it will almost certainly be discovered if and when the LHC starts taking data around the turn of the century. However, hadron colliders will probably only be able to determine $`\mathrm{M}_{\mathrm{top}}`$ to within about 5 GeV. The cleaner experimental conditions in lepton colliders could improve this to better than 1 GeV, and provide better tests of QCD predictions for top quark decays.
The Higgs boson is a much more difficult experimental target because of its low production cross section. The dominant production modes for lepton colliders are shown in figures 1a–d and the production modes for hadron colliders are shown in figures 2a and 2b.
The cross section contributions at lepton colliders from figures 1a and 1b are shown in figure 3. Note that the higher order process of 1b actually rises with increasing CoM energy, and this is the main Higgs production mechanism for TeV scale lepton colliders. The cross section for figure 1c is smaller than 1b because of the smaller NC coupling and $`M_Z>M_W`$, and so it hasn’t been considered seriously in the lepton collider studies I have seen. (I am not sure how much smaller – it is reduced by a factor of about seven at the HERA ep collider and I would guess a similar or smaller reduction at a higher energy lepton collider.) However, it appears to give a much cleaner signature for the Higgs particle than the corresponding $`W`$-fusion process because $`\mathrm{M}_{\mathrm{Higgs}}`$ can be reconstructed from the outgoing leptons and the known beam energies. Figure 1d is enhanced for $`\mu ^+\mu ^{-}`$ colliders relative to $`e^+e^{-}`$ colliders by a factor of $`(\mathrm{M}_\mu /\mathrm{M}_\mathrm{e})^2\approx 40,000`$. It makes an insignificant contribution for electron colliders but for $`\mu ^+\mu ^{-}`$ colliders and $`\mathrm{M}_{\mathrm{Higgs}}\stackrel{<}{\sim }200`$ GeV there is a significant Higgs production resonance at $`E_{CM}=\mathrm{M}_{\mathrm{Higgs}}`$. Once the Higgs has been discovered a “Higgs factory” muon collider could be built to sit on this resonance.
The Higgs decays preferentially to the heaviest particle–antiparticle pair lighter than $`\mathrm{M}_{\mathrm{Higgs}}`$. At the lighter end of the expected mass range for $`\mathrm{M}_{\mathrm{Higgs}}`$ the decay to $`b\overline{b}`$ pairs is favored, while heavier Higgs can decay to $`t\overline{t}`$ or W and Z bosons. Hadron colliders have such enormous background problems for most of these decays that the Higgs must be searched for in less common decay modes.
Another topic in the MSM that lepton colliders will be particularly useful for studying is the triple and quartic gauge boson couplings: $`WW\gamma `$, $`WWZ`$, $`WWWW`$, $`WWZZ`$, $`WW\gamma \gamma `$ and $`WWZ\gamma `$. The anticipated observation of these couplings at LEP-II will provide the first experimental verification of the non-abelian nature of the standard model, and they can be studied with greater precision at higher energy lepton colliders.
The MSM is known to be only a good phenomenological theory that becomes inconsistent at experimentally inaccessible energy scales. The verification of the MSM at the next generation of colliders is only the most conservative scenario, and many physicists think that there is a good chance that exotic new processes will be revealed. This might take the form of extra Higgs particles, missing energy from the new particles predicted in various “supersymmetric” theories, or something even more unexpected. These exciting possibilities provide some of the main motivation for building new accelerators.
## 3 Luminosity, and Ionization Cooling of Muons
The production of high mass particles is expected to be a very rare process, requiring enormous collision rates – this is motivated by the observation that point-like cross sections fall as the inverse square of the center of momentum (CoM) energy. For example, the production of $`e^+e^{-}`$ pairs in muon collisions is given by
$$\sigma (\mu ^+\mu ^{-}\rightarrow e^+e^{-})\equiv 1\mathrm{R}=\frac{4\pi \alpha ^2}{3s}=\frac{87\,\mathrm{fbarn}}{E_{CM}^2(\mathrm{TeV}^2)}.$$
(3)
The number of events produced at an accelerator is given by the product of the cross section for that process, $`\sigma `$, and the luminosity of the accelerator, $``$, integrated over its running time
$$\mathrm{number}\mathrm{of}\mathrm{events}=\sigma \int \mathcal{L}\phantom{\rule{0.2em}{0ex}}𝑑t.$$
(4)
Design luminosities for the next generation of planned accelerators are typically $`\mathcal{L}=10^{33}`$–$`10^{34}\mathrm{cm}^{-2}\mathrm{sec}^{-1}`$. For a canonical year of $`10^7`$ seconds this corresponds to an integrated luminosity of $`\int \mathcal{L}\phantom{\rule{0.2em}{0ex}}𝑑t=10`$–$`100`$ inverse $`\mathrm{fbarn}`$. (So equation 3 predicts that a muon collider with 1 TeV CoM energy and $`\mathcal{L}=10^{34}\mathrm{cm}^{-2}\mathrm{sec}^{-1}`$ would produce around 10,000 electron pairs in a year’s running.)
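As a quick sanity check, the arithmetic in this paragraph can be reproduced in a few lines; the 87 fb point cross section, the $`10^7`$-second year, and the luminosity are the values quoted above:

```python
# Event-rate arithmetic from the text: sigma = 87 fb / E^2 (E in TeV),
# integrated over a "canonical year" of 1e7 seconds.
FB_TO_CM2 = 1e-39  # one femtobarn in cm^2

def events_per_year(e_cm_tev, lumi_cm2_per_s, seconds=1e7):
    sigma_fb = 87.0 / e_cm_tev**2                              # equation (3)
    integrated_lumi_fb_inv = lumi_cm2_per_s * seconds * FB_TO_CM2
    return sigma_fb * integrated_lumi_fb_inv

# 1 TeV CoM, L = 1e34 cm^-2 s^-1  ->  100 fb^-1 of integrated luminosity
print(events_per_year(1.0, 1e34))
```

With these inputs the integrated luminosity is 100 fb⁻¹ and the yield is about 8,700 pairs, consistent with the "around 10,000" quoted above.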
The luminosity of an accelerator is given by
$$\mathcal{L}=\frac{N^2f}{A},$$
(5)
where $`N`$ is the number of $`\mu ^+`$ or $`\mu ^{-}`$ in a bunch (assumed equal), $`f`$ is the frequency of collisions and $`A`$ is the (effective) cross-sectional area of the beams at the collision point. The primary goal of accelerator design is to deliver as large an $`\mathcal{L}`$ as possible at the specified energy.
The cross-sectional area, $`A`$, is minimized by designing a magnet lattice to focus strongly at the collision point and by minimizing the phase space volume of the particle bunches so that they will come to a good focus at the collision point. The phase space volume, $`PS`$, of the beam can be written as a 6-dimensional product of the beam spread in coordinate and momentum space
$$PS=\mathrm{\Delta }x\mathrm{\Delta }p_x\mathrm{\Delta }y\mathrm{\Delta }p_y\mathrm{\Delta }z\mathrm{\Delta }p_z.$$
(6)
The $`PS`$ of the particle bunch is conserved in any interactions with macroscopic external electromagnetic fields, including the time-dependent fields applied during the acceleration and storage of the bunch in the accelerator. The product of the momentum spread and the spatial spread in each dimension is usually also separately conserved (with a few caveats), but momentum spread is easily traded for spatial spread by focusing or defocusing the bunch. However, $`PS`$ does tend to increase due to the following effects
1. The bunch tends to be pushed apart by its own charge – the “space-charge” effect. This tendency must be opposed by longitudinal and transverse focusing in the accelerator.
2. Disruptions of the bunches can be induced by (e.g.) interaction of the beam charge with accelerator elements (particularly r.f. cavities). While in principle this may not increase the true phase space volume, the practical effect is to cause “filamentation” of the bunch so that it acts as though it is occupying a larger phase space volume.
Since producing muons from pion decays gives very large values of $`PS`$ it is necessary to cool the muons considerably before acceleration.
Muons can be cooled by a very simple method known as ionization cooling. The concept is illustrated in figure 4a. A bunch of muons is passed through a slab of material to reduce the muon energies. This reduces the transverse momentum spread by a factor equal to the fractional energy loss. The momentum in the direction of the beam is also reduced, but this can then be restored by accelerating the bunch in r.f. cavities. The net effect is that the bunch ends up with the same energy but a lower transverse momentum spread. A variation is shown in figure 4b. A wedge of matter is placed in a dispersive region of the magnet lattice where the high energy muons are displaced from lower energy muons. The higher energy muons pass through more material than the lower energy ones and lose more energy. The original mean energy is then restored with an r.f. cavity, and this time the longitudinal momentum spread of the beam has been reduced.
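The scheme of figure 4a can be caricatured in a few lines. The numbers here (5% energy loss per stage, the initial momentum spreads) are illustrative assumptions, not design values:

```python
def cooling_stage(px, py, pz, frac_loss):
    """One idealized ionization-cooling cell: energy loss scales every
    momentum component down by (1 - frac_loss); the r.f. cavity then
    restores only the longitudinal component.  MCS heating is ignored."""
    f = 1.0 - frac_loss
    return px * f, py * f, pz  # pz is put back to its original value

px, py, pz = 30.0, 30.0, 300.0  # MeV/c; illustrative bunch spread
for _ in range(20):
    px, py, pz = cooling_stage(px, py, pz, 0.05)

# After 20 stages the transverse spread has shrunk by 0.95**20 ~ 0.36,
# while the longitudinal momentum is unchanged.
print(px, py, pz)
```

The point of the sketch is that the transverse momentum spread falls geometrically with the accumulated fractional energy loss, at fixed beam energy.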
This cooling mechanism is unique to muons. Electrons and hadrons such as protons would interact in the cooling material, and the only other heavy lepton – the tau – decays far too quickly for cooling or acceleration.
There are two heating mechanisms that compete with the cooling process
* The transverse momentum spread of the beam is increased by multiple coulomb scattering (MCS)
$$\frac{d(\mathrm{\Delta }p_{x,y})^2}{dz}=\frac{1}{L_R}(13.6MeV/c)^2,$$
(7)
where $`L_R`$ is the radiation length of the material.
* The longitudinal momentum spread is increased by energy straggling
$$\frac{d(\mathrm{\Delta }p_z)^2}{dz}=\frac{dE}{dz}I,$$
(8)
where $`I`$ is the mean energy exchange (approximately $`12\mathrm{Z}`$ eV), the additional energy losses from hard single scatters have been neglected and the approximation $`p_z\approx E`$ is used.
Cooling is optimized by
1. Using a low Z material such as beryllium to maximize the energy loss per radiation length and reduce the energy straggling. (Beryllium has an energy loss of 105 MeV per radiation length, compared with only 7.2 MeV for lead.)
2. Focusing the muons into a tight bunch at the material to blow up the longitudinal and transverse momentum spreads to large values which can be effectively reduced by cooling.
3. Using low energy beams so that the fractional energy loss per radiation length is maximized. The energy cannot be below about 0.3 GeV because below this the muons are no longer relativistic minimum-ionizing particles and the energy spread of the bunch increases quickly when passed through material.
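Setting the cooling rate implied by the fractional-energy-loss argument against the MCS heating of equation (7) gives a rough equilibrium transverse momentum spread. The 105 MeV-per-radiation-length figure for beryllium is the one quoted above; this is a back-of-envelope estimate, not a design number:

```python
import math

def equilibrium_pt(e_mev, de_per_rad_length_mev):
    """Transverse momentum spread at which cooling, d<pt^2>/dz =
    -2 (dE/dz) <pt^2> / E, balances MCS heating, +(13.6 MeV/c)^2 / L_R."""
    return 13.6 * math.sqrt(e_mev / (2.0 * de_per_rad_length_mev))

# Beryllium (105 MeV per radiation length) at the ~0.3 GeV cooling energy:
print(equilibrium_pt(300.0, 105.0))  # roughly 16 MeV/c
```

The linear growth of the equilibrium with beam energy is another way of seeing why low-energy beams (item 3) cool best.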
An interesting idea that unfortunately probably won’t work is to use crystals to cool the beam even further. Certain axes of crystals tend to channel charged particles and hold them while they lose energy – giving cooling without MCS. Large, high quality crystals of silicon, germanium and tungsten have been grown and used for extensive studies of particle channeling, and bent crystals have been used to steer particle beams. Unfortunately, the solid angle for capturing particles is very small (of order milliradians at 50 MeV, falling as $`1/\sqrt{E}`$), and the particles dechannel over characteristic lengths of centimeters at 10 GeV, rising in proportion to the beam energy. This appears to be too small by about two orders of magnitude for net cooling.
Beam cooling at a muon accelerator would be expected to consist of some tens of slabs of beryllium or some other low Z material inside a lattice of magnets and accelerating structures to transport the beam and manipulate its distribution in phase space.
## 4 Conceptual Design of a Muon Collider
The idea of muon storage rings has probably been around since the 1960’s or earlier, and muon colliders have been seriously discussed at least as early as 1980. A conceptual design of a muon collider is shown in figure 5 . This section discusses each of the components of the accelerator.
The requirement of colliding bunches containing $`10^{11}`$–$`10^{12}`$ muons means that the hadron accelerator must deliver $`10^{13}`$–$`10^{14}`$ protons into the target at a rate of 10 Hz or higher. This is more than any existing accelerator, but this technology has been studied in detail for the planned meson factories KAON and PILAC. The KAON design calls for bunches of $`6\times 10^{13}`$ 30 GeV protons at a rate of 10 Hz.
Possible modifications to the KAON design that might be improvements for a muon collider are
* The muon collider needs both charges of muons, while protons produce predominantly $`\mu ^+`$ (from $`\pi ^+`$). This could be solved by using deuterium ions instead of protons.
* There is no need to be above the energy threshold for kaon production, and nucleon (proton or neutron) kinetic energies as low as 700 MeV produce pions copiously. This would be cheaper, would decrease the decay length of the pions and would decrease the energy flux onto the production target. It would also open up the speculative possibility of using an induction linac instead of a storage ring for accelerating the protons/deuterium ions. (Induction linacs can produce accelerating gradients in excess of 1 MeV/m and reach good efficiencies of better than 50% for short, intense particle bunches – which sounds ideal for a muon collider.)
The thermal shock on the target is a difficult design problem. A bunch of $`10^{14}`$ 1 GeV protons delivers 6000 joules onto the target spot in a nanosecond timescale, some fraction of which will go into shock heating of the target. This load is repeated 10 times or more every second. This must be handled by maintaining a large spot size and intensive cooling of the target. A more exotic option which has already been tested at accelerators is using a liquid jet target of either water or a molten metal.
A schematic diagram of the pion collection and decay channel is shown in figure 6. One speculative alternative is to use a long (50–100 m) solenoidal magnet with a large aperture. The transverse momenta of the pions coming off the production target range up to around 300 MeV/c. Almost all of these pions would be confined in spiral orbits by an iron solenoidal magnet with a 2 Tesla field and 50 cm aperture radius, or by a superconducting magnet with a 6 Tesla field and a 20 cm aperture radius. The pions would decay to muons inside the magnet, and the positive and negative muons could be separated by including an additional transverse magnetic field. This idea would be much more practical if r.f. acceleration could be provided inside the magnet (I have no idea whether this is possible). In this case the acceptance could be a large fraction of unity for both $`\mu ^+`$ and $`\mu ^{-}`$.
The acceleration of the muons must proceed relatively quickly to avoid losing too big a fraction to decays. The average accelerating gradient required is several MeV/m, which is easily within today’s technology since the SLC electron linac currently operates with an average gradient of 20 MeV/m. A simple numerical integration finds that when muons are accelerated from 300 MeV to 2 TeV at a constant gradient of 5 (or 10, or 20) MeV/m the fraction surviving is 74% (or 85%, or 93%).
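The quoted survival fractions can be reproduced by integrating the decay rate along the accelerator, $`dN/N=-dE\,m_\mu /(g\beta E\,c\tau _\mu )`$ for a constant gradient $`g`$. Treating the 300 MeV as total energy is an assumption of this sketch:

```python
import math

M_MU = 0.10566   # muon mass in GeV
C_TAU = 658.6    # c times the muon lifetime, in metres

def survival(grad_gev_per_m, e0=0.3, e1=2000.0, steps=100_000):
    """Fraction of muons left after acceleration from e0 to e1 (GeV,
    taken as total energy) at a constant gradient (GeV/m).
    Midpoint-rule integration of dN/N = -dE m/(g beta E c tau)."""
    de = (e1 - e0) / steps
    loss, e = 0.0, e0 + 0.5 * de
    for _ in range(steps):
        beta = math.sqrt(1.0 - (M_MU / e) ** 2)
        loss += de * M_MU / (grad_gev_per_m * beta * e * C_TAU)
        e += de
    return math.exp(-loss)

for g_mev in (5, 10, 20):
    print(g_mev, "MeV/m ->", round(survival(g_mev / 1000.0), 2))
```

This gives about 75%, 87% and 93%, in reasonable agreement with the 74%, 85% and 93% quoted above; the small differences come from how $`\beta `$ and the starting energy are treated.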
Figure 5 uses a linac to accelerate the muon beams. This is likely to be a very expensive option – almost half the cost of a $`e^+e^{}`$ linear collider just for acceleration. Bob Palmer suggests using instead a recirculation linac, as shown in figure 7. The particles pass through each of the superconducting linacs several times over, and are transported between the linacs by the bending magnets in the recirculation loops. The motivation for this design is that r.f. accelerating cavities are very expensive, so it is cheaper to use the same cavities several times per bunch. This design is basically a higher energy copy of the existing CEBAF $`e^+e^{}`$ accelerator, which also uses superconducting r.f. cavities.
After acceleration the $`\mu ^+`$ and $`\mu ^{-}`$ bunches are injected into the collider rings in opposing directions. Since muons are heavy enough that synchrotron radiation is not a problem their beam transport properties are similar to protons. For example, 1 TeV muons would require a ring of radius about 1 km, being the same energy as the protons in the Fermilab Tevatron accelerator. The decay length of the muons in the ring is given by
$$\mathrm{decay}\mathrm{\hspace{0.25em}length}=6233\mathrm{\hspace{0.25em}km}\times \mathrm{E}_\mu (\mathrm{TeV}).$$
(9)
This means that the number of muons in a bunch decays by a factor of 1/e in about 1000 turns – independent of energy.
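The "about 1000 turns" figure is just the ratio of the decay length to the ring circumference; since both scale linearly with energy, the energy cancels. The 1 km-per-TeV radius is the rough figure quoted above:

```python
import math

def turns_per_lifetime(e_tev, radius_km_per_tev=1.0):
    """Revolutions completed in one 1/e decay length (equation (9))."""
    decay_length_km = 6233.0 * e_tev
    circumference_km = 2.0 * math.pi * radius_km_per_tev * e_tev
    return decay_length_km / circumference_km

print(round(turns_per_lifetime(1.0)))  # ~992 turns
print(round(turns_per_lifetime(2.0)))  # the same: the energy drops out
```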
One advantage for muon colliders over hadron colliders is that the storage time required is only milliseconds rather than hours, so the requirements on beam stability are much less demanding. Palmer suggests using an “isochronous” ring, with few r.f. cavities to compress the bunch length.
## 5 Detector Design Issues
The particle detectors at the interaction point would be expected to be similar to those at other high energy colliders, with particle tracking in a magnetized space surrounding the interaction point and with calorimeters enclosing this region. (One difference might be a greater emphasis on the precise determination of muon momenta.)
The backgrounds emanating from the vertex itself would be expected to be much smaller than for hadron colliders, and probably smaller than at TeV energy electron colliders. However, the decay of the muons to electrons will still lead to serious backgrounds at the detectors. For 2 TeV muons approximately one in $`10^7`$ will decay per meter, so a bunch of $`10^{12}`$ muons will produce about $`10^5`$ electrons per meter with an average energy of about 2/3 TeV. All of these electrons will eventually hit the beam pipe somewhere in the ring, initiating electromagnetic showers. This leads to two types of backgrounds
1. The electromagnetic showers from electrons striking the final focus magnets close to the interaction point can leak into the detector.
2. Electromagnetic showers anywhere along the straight sections before the interaction point will occasionally produce a muon pair. This is suppressed relative to $`e^+e^{}`$ pair production by a factor of $`(\mathrm{M}_\mu /\mathrm{M}_\mathrm{e})^2=40,000`$, but the muons can pass through any shielding placed in front of the detector.
These backgrounds must be suppressed by a combination of shielding and design of the final focus magnets, and the detector must have enough electronic channels of tracking and calorimetry to be able to correct for the remaining background.
A reasonable design for the beam-line might include a final focus region consisting of iron quadrupole magnets many meters long with a conical aperture decreasing from several cm at the entrance to about 1 mm at the end closest to the interaction point. Much of the remaining 1–2 meters distance to the interaction point might have a small aperture surrounded by a tungsten shield. The thickness of the tungsten would be determined by a compromise between the background suppression and the loss of angular acceptance into the detector. Such tungsten shields have also been discussed for TeV scale $`e^+e^{}`$ colliders, blocking up to 10 degrees of angular acceptance about the beam-pipe.
## 6 Spin-off Physics Opportunities at a Muon Collider Facility
A muon collider facility would provide for much useful physics research apart from muon collisions. Further physics topics include
* spallation neutron experiments
* neutrino physics
* muon fixed target physics.
The short intense bunches of deuterium ions used for creating the pions are also ideal for producing neutrons, and designs for spallation neutron sources include just such a beam. The neutrons could either be collected from the primary proton target or from the beam dump downstream of the target. Neutrons are somewhat complementary to x-rays as important probes for condensed matter experiments, and the interest in neutron sources is illustrated by the plans to build the Advanced Neutron Source in the U.S.A. at a cost of over 1 billion dollars.
Muon decays in the accelerator straight sections around the interaction points would provide a neutrino source unique in its intensity and composition. Each cycle of the muon bunch would produce sub-nanosecond bursts of roughly $`10^7`$ $`\nu _\mu `$’s and $`\overline{\nu }_\mathrm{e}`$’s (or $`\overline{\nu }_\mu `$’s and $`\nu _\mathrm{e}`$’s for the $`\mu ^+`$ bunch traveling in the opposite direction). These would have an average energy of around 1/3 the muon beam energy, and would have an angular divergence of only about $`1/\gamma _\mu \approx 0.1`$ mr or the angular spread in the muon directions along the straight section (whichever is larger). This would allow substantial improvements in both precise measurements and searches for exotic physics processes in neutrino-nucleon scattering. For example, the large neutrino-induced event samples could substantially improve current measurements of nucleon structure functions and weak mixing angle measurements from neutrino-nucleon scattering, and the purity of the beam and the 50% component of electron neutrinos would allow unprecedented sensitivities in detector-based searches for neutrino oscillations (a topic which is currently popular). In fact, the neutrino beam would be strong enough to be a radiation hazard, and it is likely that human habitation would have to be forbidden along a line extending out from the accelerator straight sections.
## 7 Feasibility and Cost
The parameters of two conceptual designs for a muon collider by Palmer are given in table 1. Achieving the design luminosities given by Palmer would make such muon colliders extremely attractive for exploring the TeV energy scale. It should be stressed that a lot of work will be required before one can estimate with any confidence what are reasonable design parameters for a muon collider.
Palmer also provided an “order of magnitude” cost estimate for a 4 TeV CoM muon collider, with the caveat that it was an extremely crude estimate which should not be taken seriously. He obtained the proton source cost (0.5 billion) using the KAON cost estimates, the linac cost (1.0 billion) using estimates for the Next Linear Collider $`e^+e^{}`$ machine and the tunnel and magnet cost (0.2 billion + 0.9 billion) by scaling to the SSC. Adding 0.5 billion dollars for the facility and 0.3 billion for the muon cooling gives a very tentative estimate for a total cost of 3.4 billion dollars. This is certainly a very hefty price tag, but it is competitive with and probably cheaper than the competing technologies, and the price would be less for a lower CoM energy.
## 8 Summary
Muon colliders show great promise for exploring the high energy frontier in elementary particle physics. However, it will take a lot of detailed study to determine whether they are actually feasible or are just another good idea that won’t quite work.
# Flag vectors of Eulerian partially ordered sets
## 1 Introduction
The study of Eulerian partially ordered sets (posets) originated with Stanley (). Examples of Eulerian posets are the posets of faces of regular CW spheres. These include face lattices of convex polytopes, the Bruhat order on finite Coxeter groups, and the lattices of regions of oriented matroids. (See and .)
The flag $`f`$-vector (or simply flag vector) of a poset is a standard parameter counting chains in the partially ordered set by ranks. In the last twenty years there has grown a body of work on numerical conditions on flag vectors of posets and complexes, especially those arising in geometric contexts. Early contributions are from Stanley on balanced Cohen-Macaulay complexes () and Bayer and Billera on the linear equations on flag vectors of Eulerian posets (). A major recent contribution is the determination of the closed cone of flag vectors of all graded posets by Billera and Hetyei (). Results on flag vectors and other invariants of Eulerian posets and special classes of them are surveyed in .
Our goal has been to describe the closed cone $`𝒞_{}^{n+1}`$ of flag $`f`$-vectors of Eulerian partially ordered sets. This problem was posed explicitly in . The ideal description would give explicitly both the facets (i.e., crucial inequalities on flag vectors) and posets that generate the extreme rays. We have a complete solution only for rank at most seven. For arbitrary ranks we give some of the facets and extreme rays. The extreme rays of the general graded cone () play an important role. We introduce half-Eulerian partially ordered sets in order to incorporate these limit posets in this work.
The remainder of this section provides definitions and other background, and the definition of the flag $`L`$-vector, which simplifies the calculations. Section 2 describes the extreme rays of the general graded cone, defines half-Eulerian posets, identifies which limit posets are half-Eulerian, and computes the corresponding $`cd`$-indices. Section 3 gives two general classes of inequalities on Eulerian flag vectors. Section 4 shows that the half-Eulerian limit posets all give extremes of the Eulerian cone, identifies some inequalities in all ranks as facet-inducing, and describes completely the cone for rank at most 7.
### 1.1 Background
A graded poset $`P`$ is a finite partially ordered set with a unique minimum element $`\widehat{0}`$, a unique maximum element $`\widehat{1}`$, and a rank function $`\rho :P\to \text{N}`$ satisfying $`\rho (\widehat{0})=0`$, and $`\rho (y)-\rho (x)=1`$ whenever $`y\in P`$ covers $`x\in P`$. The rank $`\rho (P)`$ of a graded poset $`P`$ is the rank of its maximum element. Given a graded poset $`P`$ of rank $`n+1`$ and a subset $`S`$ of $`\{1,2,\mathrm{\dots },n\}`$ (which we abbreviate as $`[1,n]`$), define the $`S`$–rank–selected subposet of $`P`$ to be the poset
$$P_S:=\{x\in P:\rho (x)\in S\}\cup \{\widehat{0},\widehat{1}\}.$$
Denote by $`f_S(P)`$ the number of maximal chains of $`P_S`$. Equivalently, $`f_S(P)`$ is the number of chains $`x_1<\mathrm{\cdots }<x_{|S|}`$ in $`P`$ such that $`\{\rho (x_1),\mathrm{\dots },\rho (x_{|S|})\}=S`$. The vector $`\left(f_S(P):S\subseteq [1,n]\right)`$ is called the flag $`f`$-vector of $`P`$. Whenever it does not cause confusion, we write $`f_{s_1\mathrm{\dots }s_k}`$ rather than $`f_{\{s_1,\mathrm{\dots },s_k\}}`$; in particular, $`f_{\{m\}}`$ is always denoted $`f_m`$.
Various properties of the flag $`f`$-vector are more easily seen in different bases. An often used equivalent encoding is the flag $`h`$-vector $`(h_S(P):S\subseteq [1,n])`$ given by the formula
$$h_S(P):=\sum _{T\subseteq S}(-1)^{|S\setminus T|}f_T(P),$$
or, equivalently,
$$f_S(P)=\sum _{T\subseteq S}h_T(P).$$
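These definitions are easy to check by brute force on a small example. The following sketch computes the flag $`f`$- and $`h`$-vectors of the Boolean lattice $`B_3`$ (the face lattice of a triangle, rank 3, so $`S`$ ranges over subsets of $`[1,2]`$):

```python
from itertools import combinations

# Elements of B_3: all subsets of {0,1,2}, ranked by cardinality.
elements = [frozenset(c) for r in range(4) for c in combinations(range(3), r)]

def flag_f(S):
    """Number of chains in B_3 whose rank set is exactly S."""
    ranks, count = sorted(S), 0
    def extend(prev, i):
        nonlocal count
        if i == len(ranks):
            count += 1
        else:
            for x in elements:
                if len(x) == ranks[i] and prev < x:  # proper containment
                    extend(x, i + 1)
    extend(frozenset(), 0)
    return count

def subsets(S):
    S = sorted(S)
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def flag_h(S):
    # inclusion-exclusion over subsets T of S, as in the formula above
    return sum((-1) ** (len(S) - len(T)) * flag_f(T) for T in subsets(S))

print({tuple(sorted(S)): (flag_f(S), flag_h(S)) for S in subsets({1, 2})})
```

For $`B_3`$ this gives $`f=(1,3,3,6)`$ and $`h=(1,2,2,1)`$, and summing the $`h`$'s over subsets recovers the $`f`$'s.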
The $`ab`$-index $`\mathrm{\Psi }_P(a,b)`$ of $`P`$ is a generating function for the flag $`h`$-vector. It is the following polynomial in the noncommuting variables $`a`$ and $`b`$:
$$\mathrm{\Psi }_P(a,b)=\sum _{S\subseteq [1,n]}h_S(P)u_S,$$
(1)
where $`u_S`$ is the monomial $`u_1u_2\mathrm{\cdots }u_n`$ with $`u_i=a`$ if $`i\notin S`$, and $`u_i=b`$ if $`i\in S`$.
The Möbius function of a graded poset $`P`$ is defined recursively for any subinterval of $`P`$ by the formula
$$\mu ([x,y])=\{\begin{array}{cc}1& \text{if }x=y,\hfill \\ -\sum _{x\le z<y}\mu ([x,z])& \text{otherwise}.\hfill \end{array}$$
Equivalently, by Philip Hall’s theorem, the Möbius function of a graded poset $`P`$ of rank $`n+1`$ is the reduced Euler characteristic of the order complex, i.e., it is given by the formula
$$\mu (P)=\sum _{S\subseteq [1,n]}(-1)^{|S|+1}f_S(P).$$
(2)
(See \[14, Proposition 3.8.5\].)
A graded poset $`P`$ is Eulerian if the Möbius function of every interval $`[x,y]`$ is given by $`\mu ([x,y])=(-1)^{\rho (x,y)}`$. (Here $`\rho (x,y)=\rho ([x,y])=\rho (y)-\rho (x)`$.)
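A direct check of this definition on the Boolean lattice $`B_3`$ (subsets of a 3-element set ordered by inclusion), using the recursive definition of $`\mu `$:

```python
from itertools import combinations
from functools import lru_cache

elements = [frozenset(c) for r in range(4) for c in combinations(range(3), r)]

@lru_cache(maxsize=None)
def mu(x, y):
    """Mobius function of the interval [x, y], by the recursion above."""
    if x == y:
        return 1
    return -sum(mu(x, z) for z in elements if x <= z < y)

# Every interval of B_3 satisfies mu([x,y]) = (-1)^{rho(x,y)}: B_3 is Eulerian.
assert all(mu(x, y) == (-1) ** (len(y) - len(x))
           for x in elements for y in elements if x <= y)
print(mu(frozenset(), frozenset(range(3))))  # -1 = (-1)^3
```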
The first characterization of all linear equalities holding for the flag $`f`$-vectors of all Eulerian posets was given by Bayer and Billera in . The equations of the theorem are called the generalized Dehn-Sommerville equations. Call the subspace of $`𝐑^{2^n}`$ they determine the Eulerian subspace; its dimension is the Fibonacci number $`e_n`$ ($`e_0=e_1=1`$, $`e_n=e_{n1}+e_{n2}`$).
###### Theorem 1.1 (Bayer and Billera)
Every linear equality holding for the flag $`f`$-vector of all Eulerian posets of rank $`n+1`$ is a consequence of the equalities
$$\left((-1)^{i-1}+(-1)^{k+1}\right)f_S+\sum _{j=i}^{k}(-1)^jf_{S\cup \{j\}}=0$$
for $`S\subseteq [1,n]`$ and $`[i,k]`$ a maximal interval of $`[1,n]\setminus S`$.
Fine discovered that the $`ab`$-index of a polytope can be written as a polynomial in the noncommuting variables $`c:=a+b`$ and $`d:=ab+ba`$. Bayer and Klapper proved that for a graded poset $`P`$, the equations of Theorem 1.1 hold if and only if the $`ab`$-index is a polynomial with integer coefficients in $`c`$ and $`d`$. This polynomial is called the $`cd`$-index of $`P`$. Stanley () gives an explicit recursion for the $`cd`$-index in terms of intervals of $`P`$ for Eulerian posets. (He thus gives another proof of the existence of the $`cd`$-index for Eulerian posets.)
### 1.2 The flag $`\ell `$-vector and the flag $`L`$-vector
The introduction of another vector equivalent to the flag $`f`$-vector simplifies calculations.
###### Definition 1
The flag $`\ell `$-vector of a graded partially ordered set $`P`$ of rank $`n+1`$ is the vector $`(\ell _S(P):S\subseteq [1,n])`$, where
$$\ell _S(P):=(-1)^{n-|S|}\sum _{T\supseteq [1,n]\setminus S}(-1)^{|T|}f_T(P).$$
As a consequence,
$$f_S(P)=\sum _{T\subseteq [1,n]\setminus S}\ell _T(P).$$
(3)
The flag $`\ell `$-vector was first considered by Billera and Hetyei () while describing all linear inequalities holding for the flag $`f`$-vectors of all graded partially ordered sets. It turned out to give a sparse representation of the cone of flag $`f`$-vectors described in that paper.
A variant significant for Eulerian posets is the flag $`L`$-vector.
###### Definition 2
The flag $`L`$-vector of a graded partially ordered set $`P`$ of rank $`n+1`$ is the vector $`(L_S(P):S\subseteq [1,n])`$, where
$$L_S(P):=(-1)^{n-|S|}\sum _{T\supseteq [1,n]\setminus S}\left(-\frac{1}{2}\right)^{|T|}f_T(P).$$
Inverting the relation of the definition gives
$$f_S(P)=2^{|S|}\sum _{T\subseteq [1,n]\setminus S}L_T(P).$$
When the poset $`P`$ is Eulerian, the parameters $`L_S(P)`$ are actually the coefficients of the $`ce`$-index of the poset $`P`$. The $`ce`$-index was introduced by Stanley () as an alternative way of viewing the $`cd`$-index. The letter $`c`$ continues to stand for $`a+b`$; now let $`e:=a-b`$. The $`ab`$-index of a poset can be written in terms of $`c`$ and $`d`$ if and only if it can be written in terms of $`c`$ and $`ee`$. It is easy to verify that $`L_S(P)`$ is exactly the coefficient in the $`ce`$-index of $`P`$ of the word $`u_S=u_1u_2\mathrm{\cdots }u_n`$ where $`u_i=c`$ if $`i\notin S`$, and $`u_i=e`$ if $`i\in S`$. Since the existence of the $`cd`$-index is equivalent to the validity of the generalized Dehn-Sommerville equations, we get the following proposition. (It can be proved directly from the definition of the flag $`L`$-vector, yielding an alternative way to prove the existence of the $`cd`$-index for Eulerian posets.) A subset $`S\subseteq [1,n]`$ is even if all the maximal intervals contained in $`S`$ are of even length.
###### Proposition 1.2
The generalized Dehn-Sommerville relations hold for a poset $`P`$ if and only if $`L_S(P)=0`$ whenever $`S`$ is not an even set.
The generalized Dehn-Sommerville relations hold (by chance) for some non-Eulerian posets. A poset is Eulerian, however, if these relations hold for all intervals of the poset.
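Proposition 1.2 can be spot-checked on $`B_3`$ (rank 3, so $`n=2`$), whose $`cd`$-index is $`c^2+d`$, i.e. $`ce`$-index $`\frac{3}{2}cc-\frac{1}{2}ee`$. The sketch below uses exact arithmetic and the flag $`f`$-vector $`f_{\mathrm{}}=1`$, $`f_1=f_2=3`$, $`f_{12}=6`$, with the sign conventions of Definition 2:

```python
from fractions import Fraction
from itertools import combinations

n = 2
f = {frozenset(): 1, frozenset({1}): 3, frozenset({2}): 3, frozenset({1, 2}): 6}
all_S = [frozenset(c) for r in range(n + 1) for c in combinations(range(1, n + 1), r)]

def L(S):
    """Flag L-vector entry: sum over T containing the complement of S."""
    comp = frozenset(range(1, n + 1)) - S
    return ((-1) ** (n - len(S)) *
            sum(Fraction(-1, 2) ** len(T) * f[T] for T in all_S if comp <= T))

print({tuple(sorted(S)): L(S) for S in all_S})
# L vanishes exactly on the non-even sets {1} and {2};
# L_{} = 3/2 and L_{1,2} = -1/2 are the ce-index coefficients of cc and ee.
```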
###### Corollary 1.3
A graded partially ordered set is Eulerian if and only if $`L_S([x,y])=0`$ for every interval $`[x,y]\subseteq P`$ and every subset $`S`$ of $`[1,\rho (x,y)-1]`$ that is not an even set.
## 2 Half-Eulerian posets
In this section we find special points in the closed cone of flag vectors of Eulerian posets. First consider the extremes of the closed cone of flag vectors of all graded posets, found by Billera and Hetyei ().
###### Definition 3
Given a graded poset $`P`$ of rank $`n+1`$, an interval $`I\subseteq [1,n]`$, and a positive integer $`k`$, $`D_I^k(P)`$ is the graded poset obtained from $`P`$ by replacing every $`x\in P`$ with rank in $`I`$ by $`k`$ elements $`x_1,\mathrm{\dots },x_k`$ and by imposing the following relations.
1. If for $`x,y\in P`$, $`\rho (x)\in I`$ and $`\rho (y)\notin I`$, then $`x_i<y`$ in $`D_I^k(P)`$ if and only if $`x<y`$ in $`P`$, and $`y<x_i`$ in $`D_I^k(P)`$ if and only if $`y<x`$ in $`P`$.
2. If $`\{\rho (x),\rho (y)\}\subseteq I`$, then $`x_i<y_j`$ in $`D_I^k(P)`$ if and only if $`i=j`$ and $`x<y`$ in $`P`$.
Clearly $`D_I^kP`$ is a graded poset of the same rank as $`P`$. Its flag $`f`$-vector can be computed from that of $`P`$ in a straightforward manner.
An interval system on $`[1,n]`$ is any set of subintervals of $`[1,n]`$ that form an antichain (that is, no interval is contained in another). (Much of what follows holds even if the intervals do not form an antichain, but the assumption simplifies the statements of some theorems.) For any interval system $`\mathcal{I}`$ on $`[1,n]`$, and any positive integer $`N`$, the poset $`P(n,\mathcal{I},N)`$ is defined to be the poset obtained from a chain of rank $`n+1`$ by applying $`D_I^N`$ for all $`I\in \mathcal{I}`$. It does not matter in which order these operators are applied. (Different values of $`N`$ can be used for each interval $`I`$, but we do not need that generality here.) Consider the sequence of posets for a fixed interval system $`\mathcal{I}`$ as $`N`$ goes to infinity. Billera and Hetyei () showed that the normalized flag vectors of such a sequence converge to a vector on an extreme ray of the cone of flag vectors of all graded posets. More precisely,
###### Theorem 2.1 (Billera and Hetyei)
Suppose $`\mathcal{I}`$ is an interval system of $`k`$ intervals on $`[1,n]`$. Then the vector
$$\left(\underset{N\to \mathrm{\infty }}{\mathrm{lim}}\frac{1}{N^k}f_S(P(n,\mathcal{I},N)):S\subseteq [1,n]\right)$$
generates an extreme ray of the cone of flag vectors of all graded posets. Moreover, all extreme rays are generated in this way.
Unfortunately, none of the posets $`P(n,,N)`$ are Eulerian, and none of these extreme rays are contained in the closed cone of flag vectors of Eulerian posets. However some of the posets are “half-Eulerian”, and lead us to extreme rays of the Eulerian cone.
For the interval system $`\mathcal{I}=\{[1,1],[2,2],\mathrm{\dots },[n,n]\}`$, abbreviate $`D_{\mathcal{I}}^2(P)`$ as $`DP`$, and call this the horizontal double of $`P`$. Thus the horizontal double of $`P`$ is the poset obtained from $`P`$ by replacing every $`x\in P\setminus \{\widehat{0},\widehat{1}\}`$ with two elements $`x_1,x_2`$ such that $`\widehat{0}`$ and $`\widehat{1}`$ remain the minimum and maximum elements of the partially ordered set, and $`x_i<y_j`$ if and only if $`x<y`$ in $`P`$. (In the Hasse diagram of $`P`$, every edge is replaced by the four edges of a complete bipartite graph $`K_{2,2}`$.)
###### Definition 4
A half-Eulerian poset is a graded partially ordered set whose horizontal double is Eulerian.
For more information on half-Eulerian posets, see .
The flag $`f`$-vectors of $`P`$ and its horizontal double are connected by the formula $`f_S(DP)=2^{|S|}f_S(P)`$. Thus,
$$L_S(DP)=\ell _S(P).$$
(4)
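Equation (4) is a formal consequence of $`f_S(DP)=2^{|S|}f_S(P)`$: the factors $`2^{|T|}`$ exactly cancel the $`(-1/2)^{|T|}`$ weights in the definition of $`L`$. A brute-force check over an arbitrary (randomly chosen) vector, using the sign conventions of Definitions 1 and 2:

```python
import random
from fractions import Fraction
from itertools import combinations

n = 3
all_S = [frozenset(c) for r in range(n + 1) for c in combinations(range(1, n + 1), r)]
random.seed(0)
f = {S: (1 if not S else random.randint(1, 50)) for S in all_S}  # arbitrary values
fD = {S: 2 ** len(S) * f[S] for S in all_S}                      # horizontal double

def ell(g, S):  # flag ell-vector of a "flag vector" g
    comp = frozenset(range(1, n + 1)) - S
    return (-1) ** (n - len(S)) * sum((-1) ** len(T) * g[T] for T in all_S if comp <= T)

def L(g, S):    # flag L-vector of g
    comp = frozenset(range(1, n + 1)) - S
    return ((-1) ** (n - len(S)) *
            sum(Fraction(-1, 2) ** len(T) * g[T] for T in all_S if comp <= T))

assert all(L(fD, S) == ell(f, S) for S in all_S)  # equation (4), identically in f
print("equation (4) verified for all S")
```

Because the identity holds term by term, it does not depend on the chosen values actually being the flag vector of a poset.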
Applying the definition of Eulerian to the horizontal double of a poset we get
###### Proposition 2.2
A graded partially ordered set $`P`$ is half-Eulerian if and only if for every interval $`[x,y]`$ of $`P`$,
$$\sum _{i=1}^{\rho (x,y)-1}(-1)^{i-1}f_i([x,y])=\left(1+(-1)^{\rho (x,y)}\right)/2.$$
Corollary 1.3 can now be restated for half-Eulerian posets.
###### Proposition 2.3
A graded partially ordered set is half-Eulerian if and only if $`\ell _S([x,y])=0`$ for every interval $`[x,y]\subseteq P`$ and every subset $`S`$ of $`[1,\rho (x,y)-1]`$ that is not an even set.
The flag vectors of the horizontal doubles of half-Eulerian posets span the Eulerian subspace, the subspace defined by the generalized Dehn-Sommerville equations. But the cones they determine may be different. Recall $`𝒞_{}^{n+1}`$ is the closed cone of flag vectors of Eulerian posets. Now write $`𝒞_𝒟^{n+1}`$ for the closed cone of flag vectors of horizontal doubles of half-Eulerian posets. We do not know if the inclusion $`𝒞_𝒟^{n+1}\subseteq 𝒞_{}^{n+1}`$ is actually equality.
For which interval systems $`\mathcal{I}`$ is $`P(n,\mathcal{I},N)`$ half-Eulerian?
###### Definition 5
An interval system $`\mathcal{I}`$ on $`[1,n]`$ is even if for every pair of intervals $`I,J\in \mathcal{I}`$ the intersection $`I\cap J`$ has an even number of elements. (In particular, $`|I|`$ must be even for every $`I\in \mathcal{I}`$.)
Our goal is to show that the posets $`P(n,\mathcal{I},N)`$ are half-Eulerian if and only if $`\mathcal{I}`$ is an even interval system. For this we need to understand the intervals of the posets $`P(n,\mathcal{I},N)`$.
###### Proposition 2.4
The interval $`[x,y]\subseteq P(n,\mathcal{I},N)`$ is isomorphic to
$`P(\rho (x,y)-1,𝒥,N)`$, where $`𝒥=\{I-\rho (x):I\in \mathcal{I},I\subseteq [\rho (x)+1,\rho (y)-1]\}`$.
Proof: Let $`\rho (x)=r`$ and $`\rho (y)=s`$. Construct $`P(n,\mathcal{I},N)`$ by applying the operators $`D_I^N`$ for all $`I\in \mathcal{I}`$ to a chain. Since the order of applying these operators is arbitrary, we may choose to apply first those for which $`I`$ is not a subset of $`[r+1,s-1]`$. At this point for every $`x^{}`$ of rank $`r`$ and $`y^{}`$ of rank $`s`$ with $`y^{}>x^{}`$, the interval $`[x^{},y^{}]`$ is isomorphic to a chain of rank $`\rho (x^{},y^{})`$. Applying the remaining operators $`D_I^N`$ leaves the elements of rank at most $`r`$ or of rank at least $`s`$ unchanged, and has the same effect on $`[x^{},y^{}]`$ as applying the operators $`D_{I-r}^N`$ to a chain of rank $`\rho (x^{},y^{})`$. $`\mathrm{}`$
The effect on the flag $`f`$-vector of applying the operator $`D_I^N`$ to a poset of rank $`n+1`$ is given by the formula
$$f_S(D_I^N(P))=\{\begin{array}{cc}Nf_S(P)\hfill & \text{if }I\cap S\ne \mathrm{\varnothing }\text{,}\hfill \\ f_S(P)\hfill & \text{otherwise.}\hfill \end{array}$$
(5)
This enables us to write an $`\mathrm{}`$-vector formula.
###### Lemma 2.5
For $`P`$ a graded poset of rank $`n+1`$, $`S⊆[1,n]`$, and $`N`$ a positive integer,
$$ℓ_S(D_I^N(P))=Nℓ_S(P)−(N−1)\underset{T∪I=S}{∑}ℓ_T(P).$$
(6)
Proof: From the definition of $`ℓ_S`$ and equation (5),
$`ℓ_S(D_I^N(P))`$ $`=`$ $`(−1)^{n−|S|}{\displaystyle \underset{R⊇[1,n]∖S}{∑}}(−1)^{|R|}f_R(D_I^N(P))`$
$`=`$ $`(−1)^{n−|S|}{\displaystyle \underset{R⊇[1,n]∖S}{∑}}(−1)^{|R|}Nf_R(P)`$
$`−(−1)^{n−|S|}{\displaystyle \underset{\genfrac{}{}{0pt}{}{R⊇[1,n]∖S}{R⊆[1,n]∖I}}{∑}}(−1)^{|R|}(N−1)f_R(P)`$
$`=`$ $`Nℓ_S(P)−(−1)^{n−|S|}{\displaystyle \underset{\genfrac{}{}{0pt}{}{R⊇[1,n]∖S}{R⊆[1,n]∖I}}{∑}}(−1)^{|R|}(N−1)f_R(P)`$
By (3), the coefficient in $`(−1)^{n−|S|}∑_{\genfrac{}{}{0pt}{}{R⊇[1,n]∖S}{R⊆[1,n]∖I}}(−1)^{|R|}(N−1)f_R(P)`$ of $`ℓ_T(P)`$ is
$$(N−1)(−1)^{n−|S|}\underset{\genfrac{}{}{0pt}{}{R⊇[1,n]∖S}{R⊆[1,n]∖(T∪I)}}{∑}(−1)^{|R|},$$
which is an empty sum if $`T∪I`$ is not contained in $`S`$, zero if $`T∪I`$ is properly contained in $`S`$, and $`(N−1)(−1)^{n−|S|}(−1)^{|[1,n]∖S|}=(N−1)`$ if $`T∪I=S`$. This gives the recursion of the lemma. ∎
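The recursion can be sanity-checked numerically. The sketch below is our own, not part of the paper; it assumes the $`ℓ`$-vector definition $`ℓ_S(P)=(−1)^{n−|S|}∑_{R⊇[1,n]∖S}(−1)^{|R|}f_R(P)`$ used in the proof, and the flag numbers of $`D_I^N`$ applied to a chain from equation (5), and verifies equation (6) on a small example.

```python
from itertools import combinations

def subsets(universe):
    universe = sorted(universe)
    for r in range(len(universe) + 1):
        yield from (frozenset(c) for c in combinations(universe, r))

def ell_vector(n, f):
    # ell_S = (-1)^(n-|S|) * sum over R containing [1,n]\S of (-1)^|R| f(R)
    full = frozenset(range(1, n + 1))
    ell = {}
    for S in subsets(full):
        comp = full - S
        total = sum((-1) ** len(Q | comp) * f[Q | comp] for Q in subsets(S))
        ell[S] = (-1) ** (n - len(S)) * total
    return ell

n, I, N = 4, frozenset({1, 2}), 3
full = frozenset(range(1, n + 1))
# flag f-vectors of the chain and of D_I^N(chain), by equation (5)
f_chain = {S: 1 for S in subsets(full)}
f_doubled = {S: (N if S & I else 1) for S in subsets(full)}
lc, ld = ell_vector(n, f_chain), ell_vector(n, f_doubled)
# check the recursion of Lemma 2.5 for every S
for S in subsets(full):
    rhs = N * lc[S] - (N - 1) * sum(lc[T] for T in subsets(full) if T | I == S)
    assert ld[S] == rhs
```

For the chain itself, all coordinates of the $`ℓ`$-vector vanish except $`ℓ_∅=1`$, which is what the loop above reduces to on the right-hand side.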
From this we can determine which of the posets $`P(n,ℐ,N)`$ are half-Eulerian.
###### Proposition 2.6
Let $`ℐ`$ be an interval system on $`[1,n]`$.
1. If $`ℐ`$ is an even system of intervals, then for all $`N`$ the partially ordered set $`P(n,ℐ,N)`$ is half-Eulerian.
2. If for some $`N>1`$, $`P(n,ℐ,N)`$ is half-Eulerian, then $`ℐ`$ is an even system of intervals.
Proof: Using Lemma 2.5 we can show by induction on $`|ℐ|`$ that for every $`N`$, $`ℓ_S^{n+1}\left(P(n,ℐ,N)\right)`$ is zero unless $`S`$ is the union of some intervals of $`ℐ`$. In particular, if $`ℐ`$ is an even system of intervals, then $`ℓ_S\left(P(n,ℐ,N)\right)=0`$ whenever $`S`$ is not an even set. The same observation holds for every interval $`[x,y]⊆P(n,ℐ,N)`$ as well, since by Proposition 2.4 $`[x,y]`$ is isomorphic to $`P(m,𝒥,N)`$ for some $`m≤n`$ and some even system of intervals $`𝒥`$. Therefore the conditions of Proposition 2.3 are satisfied by $`P(n,ℐ,N)`$ for every $`N`$, if $`ℐ`$ is an even system of intervals.
Now assume $`ℐ`$ is a system of intervals that is not even. First consider the case where $`ℐ`$ contains an interval $`I_m=[a,b]`$ with $`b−a`$ even (hence $`|I_m|`$ is odd). Let $`𝒥=\{I_m−a+1\}=\{[1,b−a+1]\}`$. For $`S`$ nonempty, $`f_S(P(b−a+1,𝒥,N))=N`$, so
$`ℓ_{[1,b−a+1]}(P(b−a+1,𝒥,N))`$
$`=`$ $`{\displaystyle \underset{T⊆[1,b−a+1]}{∑}}(−1)^{|T|}f_T(P(b−a+1,𝒥,N))`$
$`=`$ $`1+{\displaystyle \underset{\genfrac{}{}{0pt}{}{T⊆[1,b−a+1]}{T≠∅}}{∑}}(−1)^{|T|}N=1−N.`$
So $`ℓ_{[1,b−a+1]}(P(b−a+1,𝒥,N))≠0`$ for $`N>1`$. Fix $`N>1`$, and choose $`x`$ and $`y`$ in $`P(n,ℐ,N)`$ with $`\rho (x)=a−1`$, $`\rho (y)=b+1`$, and $`x<y`$. Then by Proposition 2.4, $`ℓ_{[1,\rho (x,y)−1]}([x,y])=ℓ_{[1,b−a+1]}(P(b−a+1,𝒥,N))≠0`$, with $`|[1,b−a+1]|`$ odd. So $`P(n,ℐ,N)`$ is not half-Eulerian.
Now suppose $`ℐ`$ contains only even intervals, but some two intervals have an odd overlap. Let $`I_p=[a,d]`$ and $`I_q=[c,b]`$, where $`a<c≤d<b`$ and $`d−a`$ and $`b−c`$ are odd, but $`d−c`$ is even. Then $`b−a`$ is also even. We show that we may assume no other interval of $`ℐ`$ is in the union $`I_p∪I_q`$. Suppose $`I_r=[e,f]`$ is another interval of $`ℐ`$ with $`[e,f]⊆[a,b]`$ (and $`f−e`$ is odd). Since $`ℐ`$ is an antichain, $`a<e<c≤d<f<b`$. If $`e−a`$ is even, then $`|I_q∩I_r|=|[c,f]|=f−c+1=(f−e)+(e−a)−(d−a)+(d−c)+1`$, which is odd, because it is the sum of three odds and two evens. If $`e−a`$ is odd, then $`|I_p∩I_r|=|[e,d]|=d−e+1=(d−a)−(e−a)+1`$, which is odd because it is the sum of three odds. Thus, if two intervals of $`ℐ`$ have odd intersection and their union contains a third interval of $`ℐ`$, then two intervals of $`ℐ`$ with smaller union have odd intersection.
So we may assume $`I_p=[a,d]`$ and $`I_q=[c,b]`$ have odd intersection, and their union $`[a,b]`$ contains no other interval of $`ℐ`$. Let $`𝒥=\{I_p−a+1,I_q−a+1\}=\{[1,d−a+1],[c−a+1,b−a+1]\}`$. Then
$`f_S(P(b−a+1,𝒥,N))`$
$`=`$ $`\{\begin{array}{cc}1\hfill & \text{if }S=∅\hfill \\ N^2\hfill & \text{if }S∩(I_p−a+1)≠∅\text{ and }S∩(I_q−a+1)≠∅\hfill \\ N\hfill & \text{otherwise.}\hfill \end{array}`$
So
$`ℓ_{[1,b−a+1]}(P(b−a+1,𝒥,N))`$
$`=`$ $`{\displaystyle \underset{T⊆[1,b−a+1]}{∑}}(−1)^{|T|}f_T(P(b−a+1,𝒥,N))`$
$`=`$ $`{\displaystyle \underset{T⊆[1,b−a+1]}{∑}}(−1)^{|T|}N^2+{\displaystyle \underset{T⊆[1,c−a]}{∑}}(−1)^{|T|}(N−N^2)`$
$`+`$ $`{\displaystyle \underset{T⊆[d−a+2,b−a+1]}{∑}}(−1)^{|T|}(N−N^2)+(1−2N+N^2)=(1−N)^2.`$
So $`ℓ_{[1,b−a+1]}(P(b−a+1,𝒥,N))≠0`$ for $`N>1`$. Fix $`N>1`$, and choose $`x`$ and $`y`$ in $`P(n,ℐ,N)`$ with $`\rho (x)=a−1`$, $`\rho (y)=b+1`$, and $`x<y`$. Then by Proposition 2.4, $`ℓ_{[1,\rho (x,y)−1]}([x,y])=ℓ_{[1,b−a+1]}(P(b−a+1,𝒥,N))≠0`$, with $`|[1,b−a+1]|`$ odd. So $`P(n,ℐ,N)`$ is not half-Eulerian. ∎
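Both displayed computations can be checked mechanically from the flag numbers given above. The following sketch is our own illustration, not part of the paper; the helper names are ours, and the flag numbers are taken directly from the two case descriptions in the proof.

```python
from itertools import combinations

def subsets(universe):
    for r in range(len(universe) + 1):
        yield from (frozenset(c) for c in combinations(universe, r))

def ell_top(m, f):
    # ell over the full rank set: sum over T in [1,m] of (-1)^|T| * f(T)
    return sum((-1) ** len(T) * f(T) for T in subsets(range(1, m + 1)))

N = 5
# one odd interval: f_S = N for S nonempty, so ell_[1,m] = 1 - N
m = 3
assert ell_top(m, lambda T: 1 if not T else N) == 1 - N

# two even intervals with odd overlap, shifted to [1, b-a+1]
a, c, d, b = 1, 2, 4, 5                      # d-a, b-c odd; d-c even
Ip = frozenset(range(1, d - a + 2))          # [1, d-a+1]
Iq = frozenset(range(c - a + 1, b - a + 2))  # [c-a+1, b-a+1]
def f(T):
    if not T:
        return 1
    return N * N if (T & Ip and T & Iq) else N
assert ell_top(b - a + 1, f) == (1 - N) ** 2
```

Both assertions reproduce the values $`1−N`$ and $`(1−N)^2`$ computed in the proof, nonzero whenever $`N>1`$.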
As will be seen later, even interval systems give rise to extreme rays of the cone of flag vectors of Eulerian posets. It is of interest, therefore, to count them.
###### Proposition 2.7
The number of even interval systems on $`[1,n]`$ is $`\binom{n}{\lfloor n/2\rfloor }`$.
Proof: We define a one-to-one correspondence between even interval systems on $`[1,n]`$ and sequences $`\lambda =(\lambda _1,\lambda _2,…,\lambda _n)∈\{−1,1\}^n`$ satisfying $`∑_i\lambda _i=0`$ if $`n`$ is even and $`∑_i\lambda _i=1`$ if $`n`$ is odd. Clearly there are $`\binom{n}{\lfloor n/2\rfloor }`$ such sequences.
For $`ℐ`$ an even interval system, define $`\lambda (ℐ)=(\lambda _1,\lambda _2,…,\lambda _n)∈\{−1,1\}^n`$, where $`\lambda _i=(−1)^i`$ if $`i`$ is an endpoint of an interval of $`ℐ`$, and $`\lambda _i=(−1)^{i−1}`$ otherwise. (Note that for an even interval system, no number can be an endpoint of more than one interval.) For $`ℐ`$ an even interval system, summing $`(−1)^i`$ over the endpoints of intervals gives 0. So
$`{\displaystyle \underset{i=1}{\overset{n}{∑}}}\lambda _i`$ $`=`$ $`{\displaystyle \underset{i=1}{\overset{n}{∑}}}(−1)^{i−1}+{\displaystyle \underset{\genfrac{}{}{0pt}{}{i\text{ endpoint}}{\text{of interval}}}{∑}}2(−1)^i`$
$`=`$ $`{\displaystyle \underset{i=1}{\overset{n}{∑}}}(−1)^{i−1}=\{\begin{array}{cc}\hfill 0& \text{if }n\text{ is even}\hfill \\ \hfill 1& \text{if }n\text{ is odd}\hfill \end{array}.`$
On the other hand, given a sequence $`\lambda =(\lambda _1,\lambda _2,…,\lambda _n)∈\{−1,1\}^n`$ satisfying $`∑_i\lambda _i=0`$ if $`n`$ is even and $`∑_i\lambda _i=1`$ if $`n`$ is odd, construct an even interval system as follows. Let $`s_1<s_2<\mathrm{\cdots }<s_k`$ be the sequence of indices $`s`$ for which $`\lambda _s=(−1)^s`$. Then $`∑_{i=1}^n(−1)^{i−1}=∑_{i=1}^n\lambda _i=∑_{i=1}^n(−1)^{i−1}+∑_{j=1}^k2(−1)^{s_j}`$, so $`∑_{j=1}^k(−1)^{s_j}=0`$. Thus the sequence of $`s_j`$’s contains the same number of even numbers as odd. Construct an interval system $`ℐ=\{[a_1,b_1],[a_2,b_2],…,[a_m,b_m]\}`$ ($`2m=k`$) recursively as follows. Let $`a_1=s_1`$ and let $`b_1=s_j`$ where $`j`$ is the least index such that $`s_1`$ and $`s_j`$ are of opposite parity. Then $`ℐ=\{[a_1,b_1]\}∪ℐ^{}`$, where $`ℐ^{}`$ is the interval system associated with $`s_2<s_3<s_4<\mathrm{\cdots }<s_k`$ with $`b_1=s_j`$ removed. Clearly $`[a_1,b_1]`$ is of even length. If $`[a_1,b_1]∩[a_i,b_i]≠∅`$ for some interval $`[a_i,b_i]`$ of $`ℐ^{}`$, then $`a_i<b_1`$, so by the choice of $`b_1`$, $`a_i`$ has the same parity as $`a_1`$. Thus $`[a_1,b_1]∩[a_i,b_i]=[a_i,b_1]`$ is of even length. Furthermore, $`b_i`$ and $`b_1`$ are of the same parity, since $`a_i`$ and $`a_1`$ are, so again by the choice of $`b_1`$, $`b_i>b_1`$. So the interval $`[a_i,b_i]`$ is not contained in the interval $`[a_1,b_1]`$. The interval system $`\{[a_m,b_m]\}`$ is even, so by induction $`ℐ`$ is an even interval system.
These constructions are inverses, giving the desired bijection. ∎
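Proposition 2.7 is easy to confirm by brute force for small $`n`$. The sketch below is ours, not the paper's; it assumes, as in the text, that an interval system is an antichain of intervals, and enumerates all even interval systems directly.

```python
from itertools import combinations
from math import comb

def count_even_interval_systems(n):
    # even-cardinality intervals [a, b] inside [1, n]
    intervals = [(a, b) for a in range(1, n + 1)
                 for b in range(a + 1, n + 1) if (b - a + 1) % 2 == 0]

    def is_even_system(system):
        for (a, b), (c, d) in combinations(system, 2):
            # antichain: neither interval may contain the other
            if (a <= c and d <= b) or (c <= a and b <= d):
                return False
            # every pairwise intersection must have even cardinality
            overlap = min(b, d) - max(a, c) + 1
            if overlap > 0 and overlap % 2 == 1:
                return False
        return True

    return sum(1 for r in range(len(intervals) + 1)
               for system in combinations(intervals, r)
               if is_even_system(system))

# matches the binomial count of Proposition 2.7 (empty system included)
for n in range(1, 7):
    assert count_even_interval_systems(n) == comb(n, n // 2)
```

For example, on $`[1,4]`$ the six even interval systems are the empty system, the four singletons $`\{[1,2]\}`$, $`\{[2,3]\}`$, $`\{[3,4]\}`$, $`\{[1,4]\}`$, and $`\{[1,2],[3,4]\}`$.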
Recall that Billera and Hetyei () found extremes of the cone of flag vectors of graded posets as limits of the normalized flag vectors of the posets $`P(n,ℐ,N)`$. The next proposition follows easily by induction from Lemma 2.5.
###### Proposition 2.8
Let $`ℐ=\{I_1,I_2,…,I_k\}`$ be a system of $`k≥0`$ intervals on $`[1,n]`$. Then
$`\underset{N→\mathrm{\infty }}{lim}{\displaystyle \frac{1}{N^k}}ℓ_S\left(P(n,ℐ,N)\right)`$
$`=`$ $`{\displaystyle \underset{j=0}{\overset{k}{∑}}}(−1)^j\left|\{1≤i_1<\mathrm{\cdots }<i_j≤k:I_{i_1}∪\mathrm{\cdots }∪I_{i_j}=S\}\right|.`$
Write $`f_S(P(n,ℐ))=lim_{N→\mathrm{\infty }}f_S(P(n,ℐ,N))/N^{|ℐ|}`$. The vector these form (as $`S`$ ranges over all subsets of $`[1,n]`$) is not the flag $`f`$-vector of an actual poset, but it is in the closed cone of flag $`f`$-vectors of all graded posets. We call the symbol $`P(n,ℐ)`$ a “limit poset” and refer to the flag vector of the limit poset. If $`ℐ`$ is an even interval system, then $`(f_S(P(n,ℐ)):S⊆[1,n])`$ is in the closed cone of flag vectors of half-Eulerian posets. To get Eulerian posets the horizontal double operator is applied to $`P(n,ℐ,N)`$. The vector $`(f_S(DP(n,ℐ)):S⊆[1,n])`$ is defined as a limit of the resulting normalized flag $`f`$-vectors, and satisfies $`f_S(DP(n,ℐ))=2^{|S|}f_S(P(n,ℐ))`$. It lies in the cone $`𝒞_𝒟^{n+1}`$ of flag vectors of doubles of half-Eulerian posets, a subcone of the Eulerian cone.
Recall (equation (4)) that the $`ℓ`$-vector of a poset $`P`$ equals the $`L`$-vector of its horizontal double $`DP`$. The same holds after passing to the limit posets. Thus, Proposition 2.8 gives
$$L_S(DP(n,ℐ))=\underset{j=0}{\overset{k}{∑}}(−1)^j\left|\{1≤i_1<\mathrm{\cdots }<i_j≤k:I_{i_1}∪\mathrm{\cdots }∪I_{i_j}=S\}\right|,$$
where $`ℐ=\{I_1,I_2,…,I_k\}`$.
We look at the associated $`cd`$-indices of the “doubled limit posets.” Think of a word in $`c`$ and $`d`$ as a string with each $`c`$ occupying one position and each $`d`$ occupying two positions. The weight of a $`cd`$-word $`w`$ is then the number of positions of the string. Associated to each $`cd`$-word $`w`$ is the even set $`S(w)`$ consisting of the positions occupied by the $`d`$’s.
###### Proposition 2.9
For each $`cd`$-word $`w`$ with $`k`$ $`d`$’s and weight $`n`$, there exists an even interval system $`ℐ_w`$ for which the $`cd`$-index of $`DP(n,ℐ_w)`$ is $`2^kw`$.
Proof: Fix a $`cd`$-word $`w`$ with $`k`$ $`d`$’s and weight $`n`$. Write the elements of $`S(w)`$ in increasing order as $`i_1`$, $`i_1+1`$, $`i_2`$, $`i_2+1`$, …, $`i_k`$, $`i_k+1`$, and let $`ℐ_w`$ be the interval system $`\{[i_1,i_1+1],[i_2,i_2+1],…,[i_k,i_k+1]\}`$. Let $`\mathrm{\Phi }=2^kw`$. Rewrite the $`cd`$-polynomial $`\mathrm{\Phi }`$ as a $`ce`$-polynomial. Recall from Sections 1.1 and 1.2 that $`c=a+b`$, $`d=ab+ba`$, and $`e=a−b`$, so $`d=(cc−ee)/2`$. Thus, $`\mathrm{\Phi }`$ is rewritten as a sum of $`2^k`$ terms. Each is the result of replacing some subset of the $`d`$’s by $`cc`$, and the rest by $`ee`$; the coefficient is $`\pm 1`$, depending on whether the number of $`d`$’s replaced by $`ee`$ is even or odd. Thus
$$2^kw=\underset{J⊆[1,k]}{∑}(−1)^{|J|}w_J,$$
where $`w_J=w_1w_2\mathrm{\cdots }w_n`$, with $`w_{i_j}=w_{i_j+1}=e`$ if $`j∈J`$ and the remaining $`w_i`$’s are $`c`$. By the $`L`$-vector version of Proposition 2.8, this is precisely the $`ce`$-index of $`DP(n,ℐ_w)`$. ∎
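The bookkeeping of positions in a $`cd`$-word is easy to mechanize. This small helper is our illustration, not from the paper; it computes the weight, the even set $`S(w)`$, and the interval system $`ℐ_w`$ used in the proof.

```python
def cd_word_data(w):
    """Weight of a cd-word, the even set S(w) of positions occupied by
    d's, and the interval system I_w (one interval [i, i+1] per d)."""
    pos, intervals = 0, []
    for letter in w:
        if letter == 'c':
            pos += 1                              # a c occupies one position
        elif letter == 'd':
            intervals.append((pos + 1, pos + 2))  # a d occupies two positions
            pos += 2
        else:
            raise ValueError("cd-words use only the letters c and d")
    S = [i for ab in intervals for i in ab]
    return pos, S, intervals

# w = cdcd: weight 6, with the two d's occupying {2,3} and {5,6}
assert cd_word_data("cdcd") == (6, [2, 3, 5, 6], [(2, 3), (5, 6)])
```

By Proposition 2.9, the word `"cdcd"` thus corresponds to the even interval system $`\{[2,3],[5,6]\}`$ on $`[1,6]`$, whose doubled limit poset has $`cd`$-index $`4\,cdcd`$.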
Stanley () first found, for each $`cd`$-word $`w`$, a sequence of Eulerian posets whose normalized $`cd`$-indices converge to $`w`$. Our limit posets are closely related to Stanley’s, but this particular construction highlights the important link between the half-Eulerian and Eulerian cones.
Before turning to inequalities satisfied by the flag vectors of Eulerian posets, we consider the question of whether the two cones $`𝒞_𝒟^{n+1}`$ and $`𝒞_{ℰ}^{n+1}`$ are equal. For low ranks the two cones are the same, as seen below. We know of no example in any rank of an Eulerian poset whose flag vector is not contained in the cone $`𝒞_𝒟^{n+1}`$ of doubled half-Eulerian posets. To look for such an example we turn to the best known examples of Eulerian posets, the face lattices of polytopes. Stanley () proved the nonnegativity of the $`cd`$-index for “$`S`$-shellable regular CW-spheres”, a class of Eulerian posets that includes all polytopes. By a result of Billera, Ehrenborg, and Readdy (), the lattice of regions of any oriented matroid also has a nonnegative $`cd`$-index. Proposition 2.9 implies that nonnegative $`cd`$-indices (and the associated flag vectors) are in the cone generated by the $`cd`$-indices (flag vectors) of the doubles of limit posets associated with even interval systems.
###### Corollary 2.10
$`𝒞_𝒟^{n+1}`$ contains the flag vectors of all Eulerian posets with nonnegative $`cd`$-indices. This includes the face lattices of polytopes and the lattices of regions of oriented matroids.
###### Conjecture 2.11
The closed cone $`𝒞_{ℰ}^{n+1}`$ of flag vectors of Eulerian posets is the same as the closed cone $`𝒞_𝒟^{n+1}`$ of flag vectors of horizontal doubles of half-Eulerian posets.
## 3 Inequalities
Throughout this section we use the following notation.
###### Definition 6
The interval system $`[S]`$ of a set $`S⊆[1,n]`$ is the family of intervals $`[S]=\{[a_1,b_1],…,[a_k,b_k]\}`$, where $`S=[a_1,b_1]∪\mathrm{\cdots }∪[a_k,b_k]`$ and $`b_{i−1}<a_i−1`$ for $`i≥2`$. In other words, $`[S]`$ is the collection of the maximal intervals contained in $`S`$.
Note that $`S`$ is an even set if and only if $`[S]`$ is an even interval system.
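In code, the decomposition $`[S]`$ is just a scan for maximal runs of consecutive integers; the following sketch (ours, with hypothetical helper names) also checks evenness of a set through its maximal intervals.

```python
def maximal_intervals(S):
    """[S]: the maximal intervals (runs of consecutive integers) of S."""
    out, a, b = [], None, None
    for x in sorted(S):
        if a is None:
            a = b = x            # start the first run
        elif x == b + 1:
            b = x                # extend the current run
        else:
            out.append((a, b))   # close the run, start another
            a = b = x
    if a is not None:
        out.append((a, b))
    return out

def is_even_set(S):
    # S is even iff every maximal interval of [S] has even cardinality
    return all((b - a + 1) % 2 == 0 for a, b in maximal_intervals(S))

assert maximal_intervals({1, 2, 4, 5, 6}) == [(1, 2), (4, 6)]
assert is_even_set({1, 2, 5, 6}) and not is_even_set({1, 2, 3})
```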
The following flag vector forms can be proved nonnegative by writing them as convolutions of basic nonnegative forms (see Appendix B). The issue of whether they give all linear inequalities on flag vectors of Eulerian posets was raised by Billera and Liu (see the discussion after Proposition 1.3 in ). We give here a simple direct argument for their nonnegativity that avoids convolutions.
###### Proposition 3.1 (Inequality Lemma)
Let $`T`$ and $`V`$ be subsets of $`[1,n]`$ such that for every $`I∈[V]`$, $`|I∩T|≤1`$. Write $`S=[1,n]∖V`$. For $`P`$ any rank $`n+1`$ Eulerian poset,
$$\underset{R⊆T}{∑}(−2)^{|T∖R|}f_{S∪R}(P)≥0.$$
Equivalently,
$$(−1)^{|T|}\underset{T⊆Q⊆V}{∑}L_Q(P)≥0.$$
Proof: The idea is that since no two elements of $`T`$ are in the same gap of $`S`$, elements with ranks in $`T`$ can be inserted independently in chains with rank set $`S`$. For $`C`$ an $`S`$-chain (i.e., a chain with rank set $`S`$) and $`t∈T`$, let $`n_t(C)`$ be the number of rank $`t`$ elements $`x∈P`$ such that $`C∪\{x\}`$ is a chain of $`P`$. Since every interval of an Eulerian poset is Eulerian, $`n_t(C)≥2`$ for all $`C`$ and $`t`$. So
$`{\displaystyle \underset{R⊆T}{∑}}(−2)^{|T∖R|}f_{S∪R}(P)`$ $`=`$ $`{\displaystyle \underset{R⊆T}{∑}}(−2)^{|T∖R|}{\displaystyle \underset{C\text{ an }S\text{-chain}}{∑}}{\displaystyle \underset{t∈R}{∏}}n_t(C)`$
$`=`$ $`{\displaystyle \underset{C\text{ an }S\text{-chain}}{∑}}{\displaystyle \underset{R⊆T}{∑}}(−2)^{|T∖R|}{\displaystyle \underset{t∈R}{∏}}n_t(C)`$
$`=`$ $`{\displaystyle \underset{C\text{ an }S\text{-chain}}{∑}}{\displaystyle \underset{t∈T}{∏}}(n_t(C)−2)≥0.`$
So the flag vector inequality is proved. The second inequality is simply the translation into $`L`$-vector form. ∎
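Since Boolean algebras (face lattices of simplices) are Eulerian, the inequality can be spot-checked on them; the flag number $`f_S`$ of the rank $`m`$ Boolean algebra is a product of binomial coefficients counting the chains with rank set $`S`$. A sketch, our own illustration with hypothetical function names:

```python
from itertools import combinations
from math import comb

def flag_f_boolean(m, S):
    # number of chains in the rank-m Boolean algebra with rank set S
    val, prev = 1, 0
    for s in sorted(S):
        val *= comb(m - prev, s - prev)
        prev = s
    return val

def inequality_form(m, S, T):
    # sum over R in T of (-2)^|T\R| * f_{S union R}, for the rank-m Boolean algebra
    return sum((-2) ** (len(T) - r) * flag_f_boolean(m, set(S) | set(R))
               for r in range(len(T) + 1)
               for R in combinations(sorted(T), r))

# rank 5, V = {2,3}, T = {2}, S = {1,4}: f_{124} - 2 f_{14} = 60 - 40 = 20 >= 0
assert inequality_form(5, {1, 4}, {2}) == 20
# every rank-2 interval of an Eulerian poset has exactly two middle elements,
# so with S = {1,3,4} the form f_{1234} - 2 f_{134} vanishes
assert inequality_form(5, {1, 3, 4}, {2}) == 0
```

The second assertion illustrates the equality case of the proof: whenever $`n_t(C)=2`$ for every chain $`C`$, the product $`\prod _t(n_t(C)−2)`$ is zero.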
Here are some new inequalities.
###### Theorem 3.2
Let $`1≤i<j<k≤n`$. For $`P`$ any rank $`n+1`$ Eulerian poset,
$$f_{ik}(P)−2f_i(P)−2f_k(P)+2f_j(P)≥0.$$
Proof: First order the rank $`j`$ elements of $`P`$ in the following way. Choose any order, $`G_1`$, $`G_2`$, …, $`G_m`$ for the components of the Hasse diagram of the rank-selected poset $`P_{\{i,j,k\}}`$. For each rank $`j`$ element $`y`$ of $`P`$, identify the component containing $`y`$ by $`y∈G_{g(y)}`$. Order the rank $`j`$ elements of $`P`$ in any way consistent with the ordering of components. That is, choose an order $`y_1`$, $`y_2`$, …, $`y_r`$ such that $`y_s<y_t`$ implies $`g(y_s)≤g(y_t)`$.
A rank $`i`$ element $`x`$ belongs to $`y_q`$ if $`q`$ is the least index such that $`x<y_q`$ in $`P`$. Write $`I_q`$ for the number of rank $`i`$ elements belonging to $`y_q`$, and $`I_q^{}`$ for the number of rank $`i`$ elements $`x`$ such that $`x<y_q`$, but $`x`$ does not belong to $`y_q`$. Similarly, a rank $`k`$ element $`z`$ belongs to $`y_q`$ if $`q`$ is the least index such that $`y_q<z`$ in $`P`$. Write $`K_q`$ for the number of rank $`k`$ elements belonging to $`y_q`$, and $`K_q^{}`$ for the number of rank $`k`$ elements $`z`$ such that $`y_q<z`$, but $`z`$ does not belong to $`y_q`$. Note that $`I_q+I_q^{}≥2`$ and $`K_q+K_q^{}≥2`$, since $`P`$ is Eulerian. A flag $`x<z`$ belongs to $`y_q`$ if $`x<y_q<z`$ and $`q`$ is the least index such that either $`x<y_q`$ or $`y_q<z`$.
Let $`F=f_{ik}(P)−2f_i(P)−2f_k(P)+2f_j(P)`$. Let $`F_q`$ be the contribution to $`F`$ by elements and flags belonging to $`y_q`$. Thus,
$$F_q=I_qK_q+I_q^{}K_q+I_qK_q^{}−2I_q−2K_q+2.$$
If $`I_q^{}≥2`$, then $`F_q=I_q(K_q+K_q^{}−2)+(I_q^{}−2)K_q+2≥2`$.
If $`I_q^{}=K_q^{}=0`$, then $`F_q=(I_q−2)(K_q−2)−2≥−2`$.
In all other cases it is easy to check that $`F_q≥0`$.
Suppose that the rank $`j`$ elements in component $`G_{ℓ}`$ are $`y_s`$, $`y_{s+1}`$, …, $`y_t`$. Then $`I_s^{}=K_s^{}=0`$, so $`F_s≥−2`$. Furthermore, $`I_t=K_t=0`$, because any rank $`i`$ element $`x`$ related to $`y_t`$ must also be related to at least one other rank $`j`$ element, and it is in the same component. That rank $`j`$ element has index less than $`t`$, so $`x`$ does not belong to $`y_t`$. This in turn implies $`I_t^{}≥2`$, so $`F_t≥2`$. For all $`q`$, $`s<q<t`$, either $`I_q^{}>0`$ or $`K_q^{}>0`$, by the connectivity of the component, so $`F_q≥0`$. Thus $`∑_{q=s}^tF_q≥0`$. This is true for each component $`G_{ℓ}`$, so $`F=∑_{q=1}^rF_q≥0`$. ∎
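As a quick consistency check (ours, not the paper's), the form of Theorem 3.2 can be evaluated on Boolean algebras, which are Eulerian; their flag numbers are products of binomial coefficients.

```python
from math import comb

def flag_f_boolean(m, S):
    # number of chains in the rank-m Boolean algebra with rank set S
    val, prev = 1, 0
    for s in sorted(S):
        val *= comb(m - prev, s - prev)
        prev = s
    return val

def theorem_form(m, i, j, k):
    # f_{ik} - 2 f_i - 2 f_k + 2 f_j for the rank-m Boolean algebra
    return (flag_f_boolean(m, {i, k}) - 2 * flag_f_boolean(m, {i})
            - 2 * flag_f_boolean(m, {k}) + 2 * flag_f_boolean(m, {j}))

# nonnegative for every 1 <= i < j < k <= m-1, in ranks up to 8
for m in range(4, 9):
    for i in range(1, m - 2):
        for j in range(i + 1, m - 1):
            for k in range(j + 1, m):
                assert theorem_form(m, i, j, k) >= 0
```

For instance, in rank 5 with $`(i,j,k)=(1,2,3)`$ the form evaluates to $`30−10−20+20=20`$.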
These inequalities can be used to generate others by convolution (see Appendix B).
Evaluating the flag vector inequalities of Proposition 3.1 for the horizontal double $`DP`$ of a half-Eulerian poset $`P`$ gives the inequalities, for $`S`$ and $`T`$ satisfying the hypotheses of Proposition 3.1,
$$\underset{R⊆T}{∑}(−1)^{|T∖R|}f_{S∪R}(P)≥0.$$
(9)
These inequalities are valid not just for half-Eulerian posets but for all graded posets. The proof of Proposition 3.1 uses only the fact that in every open interval of an Eulerian poset there are at least two elements of each rank. If the proof is rewritten using the assumption that in every open interval there is at least one element of each rank, the inequalities (9) are proved for all graded posets.
Similarly, the flag vector inequalities of Theorem 3.2 give inequalities for half-Eulerian posets,
$$f_{ik}(P)−f_i(P)−f_k(P)+f_j(P)≥0.$$
The proof of Theorem 3.2 can be modified in the same way to show these inequalities are valid for all graded posets. The first instance of this class of inequalities was found by Billera and Liu ().
We conjecture that all inequalities valid for half-Eulerian posets come from inequalities valid for all graded posets. Inequalities for half-Eulerian posets are to be interpreted as conditions in the subspace of $`𝐑^{2^n}`$ spanned by flag vectors of half-Eulerian posets, but we are describing them in $`𝐑^{2^n}`$. Giving inequalities using linear forms in the flag numbers $`f_S`$ over $`𝐑^{2^n}`$, the statement is as follows.
###### Conjecture 3.3
Every linear form that is nonnegative for the flag vectors of all half-Eulerian posets is the sum of a linear form that is nonnegative for all graded posets and a linear form that is zero for all half-Eulerian posets.
## 4 Extreme Rays and Facets of the Cone
We have described some points in the Eulerian cone $`𝒞_{}^{n+1}`$ and some inequalities satisfied by all points in the cone. We turn now to identifying which of these give extreme rays and facets.
If $`ℐ`$ is an even interval system, then $`(f_S(P(n,ℐ)):S⊆[1,n])`$ is on an extreme ray in the closed cone of flag vectors of all graded posets, and is in the subcone of flag $`f`$-vectors of half-Eulerian posets. Therefore it is on an extreme ray of the subcone.
###### Proposition 4.1
For every even interval system $`ℐ`$, the flag vector of the limit poset $`P(n,ℐ)`$ generates an extreme ray of the cone of flag vectors of half-Eulerian posets.
What does this say about the extreme rays of the cone of flag vectors of Eulerian posets? For every even interval system $`ℐ`$, the flag vector of $`DP(n,ℐ)`$ lies on an extreme ray of the subcone $`𝒞_𝒟^{n+1}`$, but we cannot conclude directly that it lies on an extreme ray of the cone $`𝒞_{ℰ}^{n+1}`$. A separate proof is needed.
For the following proofs, we use the computation of $`ℓ_Q(P(n,ℐ))`$ (and $`L_Q(DP(n,ℐ))`$) from the decompositions of $`Q`$ as the union of intervals of $`ℐ`$ (Proposition 2.8).
###### Theorem 4.2
For every even interval system $`ℐ`$, the flag vector of the doubled limit poset $`DP(n,ℐ)`$ generates an extreme ray of the cone of flag vectors of Eulerian posets.
Proof: We work in the closed cone of $`L`$-vectors of Eulerian posets. The cone of $`L`$-vectors of Eulerian posets is contained in the subspace of $`𝐑^{2^n}`$ determined by the equations $`L_S=0`$ for $`S`$ not an even set. To prove that the $`L`$-vector of $`DP(n,ℐ)`$ generates an extreme ray, we show that it lies on linearly independent supporting hyperplanes, one for each nonempty even set $`V`$ in $`[1,n]`$. Fix an even interval system $`ℐ`$. For each nonempty even set $`V⊆[1,n]`$, we find a set $`T`$ such that $`T`$ and $`V`$ satisfy the hypothesis of Proposition 3.1 and $`∑_{T⊆Q⊆V}L_Q(DP(n,ℐ))=0`$.
Case 1. Suppose $`V`$ is the union of some intervals in $`ℐ`$. Let $`I_1`$, $`I_2`$, …, $`I_k`$ be all the intervals of $`ℐ`$ contained in $`V`$. Set $`T=∅`$. Then for each subset $`J⊆[1,k]`$, the corresponding union of intervals contributes $`(−1)^{|J|}`$ to $`L_Q(DP(n,ℐ))`$, for $`Q=⋃_{j∈J}I_j`$. Thus $`∑_{T⊆Q⊆V}L_Q(DP(n,ℐ))=∑_{J⊆[1,k]}(−1)^{|J|}=0`$.
Case 2. If $`V`$ is not the union of some intervals in $`ℐ`$, let $`W`$ be the union of all those intervals of $`ℐ`$ contained in $`V`$. Choose $`t∈V∖W`$, and set $`T=\{t\}`$. For $`Q⊆V`$, $`L_Q(DP(n,ℐ))=0`$ unless $`Q⊆W`$. But if $`Q⊆W`$ then $`t`$ cannot be in $`Q`$. So $`∑_{\{t\}⊆Q⊆V}L_Q(DP(n,ℐ))=0`$.
Now $`∑_{T⊆Q⊆V}L_Q(P)=0`$ determines a supporting hyperplane of the closed cone of $`L`$-vectors of Eulerian posets, because the inequality of Proposition 3.1 is valid, and the poset $`DP(n,ℐ)`$ lies on the hyperplane. The hyperplane equations each involve a distinct maximal set $`V`$, which is even, so they are linearly independent on the subspace determined by the equations $`L_S=0`$ for $`S`$ not an even set. So the doubled limit poset $`DP(n,ℐ)`$ is on an extreme ray of the cone. ∎
Note how far we are, however, from a complete description of the extreme rays.
###### Conjecture 4.3
For every positive integer $`n`$, the closed cone of flag $`f`$-vectors of Eulerian posets of rank $`n+1`$ is finitely generated.
###### Lemma 4.4 (Facet Lemma)
Assume $`∑_{Q⊆[1,n]}a_QL_Q(P)≥0`$ for all Eulerian posets $`P`$ of rank $`n+1`$. Let $`M⊆[1,n]`$ be a fixed even set. Suppose for all even sets $`R⊆[1,n]`$, $`R≠M`$, there exists an interval system $`ℐ(R)`$ consisting of disjoint even intervals whose union is $`R`$ and such that $`∑_{Q⊆[1,n]}a_Qℓ_Q(P(n,ℐ(R)))=0`$. Then $`∑_{Q⊆[1,n]}a_QL_Q(P)=0`$ determines a facet of the closed cone of $`L`$-vectors of Eulerian posets.
(Note that $`ℐ(R)`$ need not be $`[R]`$.)
Proof: The dimension of the cone $`𝒞_{ℰ}^{n+1}`$ equals the number of even subsets (a Fibonacci number). So it suffices to show that the vectors $`(ℓ_Q(P(n,ℐ(R))))`$ $`=(L_Q(DP(n,ℐ(R))))`$ are linearly independent. To see this, note that for every set $`Q`$ not contained in $`R`$, $`ℓ_Q(P(n,ℐ(R)))=0`$. By the disjointness of the intervals in $`ℐ(R)`$, there is a unique way to write $`R`$ as the union of intervals in $`ℐ(R)`$. So by Proposition 2.8, $`ℓ_R(P(n,ℐ(R)))=(−1)^{|ℐ(R)|}`$. Thus, $`R`$ is the unique maximal set $`Q`$ for which $`ℓ_Q(P(n,ℐ(R)))≠0`$. So the $`L`$-vectors of the posets $`DP(n,ℐ(R))`$, as $`R`$ ranges over sets different from $`M`$, are linearly independent. ∎
###### Proposition 4.5
The inequality $`∑_{Q⊆[1,n]}L_Q(P)≥0`$ (or, equivalently, $`f_{∅}(P)≥0`$) determines a facet of the closed cone of $`L`$-vectors of Eulerian posets of rank $`n+1`$.
Proof: Apply the Facet Lemma 4.4 with $`M=∅`$. For a nonempty even set $`R`$, the interval system $`[R]`$ of $`R`$ is nonempty, so $`∑_{Q⊆[1,n]}ℓ_Q(P(n,[R]))=∑_{𝒥⊆[R]}(−1)^{|𝒥|}=0`$. ∎
###### Theorem 4.6
Let $`V`$ be a subset of $`[1,n]`$ such that every $`I∈[V]`$ has cardinality at least $`2`$, and every $`I∈[[0,n+1]∖V]`$ has cardinality at most $`3`$. Assume that $`M`$ is a subset of $`V`$ such that every $`[a,b]∈[V]`$ satisfies the following:
1. $`M∩[a,b]=∅,`$ $`[a,a+1]`$, or $`[b−1,b]`$.
2. If $`a∉M`$ then $`a−2∈\{−1\}∪M`$.
3. If $`b∉M`$ then $`b+2∈\{n+2\}∪M`$.
Then
$`(−1)^{|M|/2}{\displaystyle \underset{M⊆Q⊆V}{∑}}L_Q(P)≥0`$ (10)
determines a facet of $`𝒞_{ℰ}^{n+1}`$. Furthermore, if we strengthen ($`i`$) by also requiring $`M∩[a,a+2]=∅`$ for every $`[a,a+2]∈[V]`$, then distinct pairs $`(M,V)`$ give distinct facets.
Proof: If $`M=∅`$, then conditions ($`ii`$) and ($`iii`$) force $`V=[1,n]`$ (or $`V=∅`$ if $`n≤1`$). The resulting inequality, $`∑_{Q⊆[1,n]}L_Q(P)≥0`$, gives a facet, as shown in Proposition 4.5. Now assume that $`M≠∅`$.
Step 1 is to prove that inequality (10) holds for all Eulerian posets. Note that $`[M]`$ is a nonempty collection of intervals of length two. From each such interval choose one endpoint adjacent to an element of $`[0,n+1]∖V`$. Let $`T`$ be the set of these chosen elements. The Inequality Lemma 3.1 applies to these $`T`$ and $`V`$ because each interval of $`V`$ contains at most one interval of $`[M]`$, and hence at most one element of $`T`$. The resulting inequality is $`(−1)^{|T|}∑_{T⊆Q⊆V}L_Q≥0`$. Now $`L_Q(P)=0`$ for all $`P`$ if $`[Q]`$ contains an odd interval. So we can restrict the sum to even sets $`Q`$. Since $`Q`$ must be contained in $`V`$, such a $`Q`$ must contain the intervals of $`M`$. Thus, $`(−1)^{|M|/2}∑_{M⊆Q⊆V}L_Q(P)≥0`$.
Step 2 is to prove that if $`I⊆[1,n]`$ is an interval of cardinality at least 2 and $`I`$ contains an element $`i`$ not in $`V`$, then $`I`$ contains an element adjacent to an interval of $`M`$. If an interval from $`[V]`$ ends at $`i−1`$, then either $`i−1∈M`$ or $`i+1∈M`$ by ($`iii`$) (since $`i+1<n+2`$). Similarly, if an interval from $`[V]`$ begins at $`i+1`$, then either $`i−1∈M`$ or $`i+1∈M`$. So assume no interval from $`[V]`$ ends at $`i−1`$ or begins at $`i+1`$. The hypothesis of the theorem states that every interval from $`[[0,n+1]∖V]`$ has cardinality at most three. Thus the interval $`[i−1,i+1]`$ belongs to $`[[0,n+1]∖V]`$. Hence $`i−2∈\{−1\}∪V`$ and $`i+2∈\{n+2\}∪V`$. If $`i−2=−1`$ then $`I⊇[i,i+1]=[1,2]`$, condition ($`ii`$) applied to $`a=3`$ yields $`3∈M`$, and $`2∈I`$ is adjacent to $`3`$. The case when $`i+2=n+2`$ is dealt with similarly. Finally, if $`i−2`$ and $`i+2`$ are both endpoints of intervals from $`[V]`$, then, since $`i∉M∪\{−1,n+2\}`$, condition ($`ii`$) applied to $`a=i+2`$ and condition ($`iii`$) applied to $`b=i−2`$ yield $`i+2∈M`$ and $`i−2∈M`$. Either $`i−1`$ or $`i+1`$ belongs to $`I`$ and each of them is adjacent to an element of $`M`$.
Recall that for $`ℐ`$ an even interval system, the vector $`(ℓ_Q(P(n,ℐ)):Q⊆[1,n])`$ is in the closed cone of $`ℓ`$-vectors of half-Eulerian posets. Step 3 is to show that for each even set $`R≠M`$, there exists an even interval system $`ℐ`$ with $`⋃_{I∈ℐ}I=R`$ such that $`(−1)^{|M|/2}∑_{M⊆Q⊆V}ℓ_Q(P(n,ℐ))=0`$.
Let $`R`$ be an even set not equal to $`M`$. If $`M⊈R`$, then for every $`Q`$ containing $`M`$, $`ℓ_Q(P(n,[R]))=0`$. Now suppose $`M⊆R`$, but $`R⊈V`$. Let $`I`$ be an interval of $`[R]`$ such that $`I⊈V`$. Then $`I`$ contains an element adjacent to an interval of $`M`$. Since $`M⊆R`$ and $`I`$ is a maximal interval in $`R`$, $`I∩M≠∅`$. Thus every union of intervals of $`[R]`$ containing $`M`$ must contain $`I`$ and thus an element not in $`V`$. So $`∑_{M⊆Q⊆V}ℓ_Q(P(n,[R]))=0`$, because all terms are zero.
Finally, suppose $`M⊆R⊆V`$ and $`R≠M`$. Let $`ℐ`$ be the interval system of $`R`$ consisting only of intervals of length 2. Then every interval of $`M`$ is in $`ℐ`$. This is because every interval of $`M`$ is of length 2, with at least one of its endpoints adjacent to an element not in $`V`$. So $`∑_{M⊆Q⊆V}ℓ_Q(P(n,ℐ))=∑_{[M]⊆𝒥⊆ℐ}(−1)^{|𝒥|}=0`$, since $`R≠M`$ implies $`[M]≠ℐ`$.
By the Facet Lemma 4.4, the inequality $`(−1)^{|M|/2}∑_{M⊆Q⊆V}L_Q(P)≥0`$ gives a facet of $`𝒞_{ℰ}^{n+1}`$.
Now we show that under the added condition $`M∩[a,a+2]=∅`$ for every $`[a,a+2]∈[V]`$, the facets obtained are distinct.
Note that two $`(M,V)`$ pairs can give the same inequality only if they have the same $`M`$, because $`L_M`$ is included in the linear form for $`(M,V)`$, and $`M`$ is the minimal (by set inclusion) set for which $`L_M`$ is in the form. Now for fixed $`M`$, we show that $`(M,V_1)`$ and $`(M,V_2)`$ give distinct linear inequalities when $`V_1≠V_2`$. Since the sets $`V_1`$ and $`V_2`$ are different, there is an interval $`[a,b]`$ such that $`[a,b]`$ occurs in exactly one of $`[V_1]`$ or $`[V_2]`$. Let $`[a,b]`$ be a maximal interval with this property. Without loss of generality assume $`[a,b]∈[V_1]`$. Then $`[a,b]`$ is contained in no interval of $`[V_2]`$.
Case 1. $`M∩[a,b]=∅`$. Then for every $`i`$, $`a≤i≤b−1`$, the term $`L_{[i,i+1]∪M}`$ occurs in the inequality for $`(M,V_1)`$. At least one of these terms does not occur in the inequality for $`(M,V_2)`$, because $`[a,b]⊈V_2`$.
Case 2. $`M∩[a,b]=[a,a+1]`$. Since $`M⊆V_2`$ and $`[a,b]⊈V_2`$, $`b>a+1`$. By the strengthened hypothesis on $`M`$, $`b≥a+3`$. Then for every $`i`$, $`a+2≤i≤b−1`$, the term $`L_{[i,i+1]∪M}`$ occurs in the inequality for $`(M,V_1)`$. At least one of these terms does not occur in the inequality for $`(M,V_2)`$, because $`[a,b]⊈V_2`$.
Case 3. $`M∩[a,b]=[b−1,b]`$. The proof is similar to Case 2.
Thus, with the condition $`M∩[a,a+2]=∅`$ for every $`[a,a+2]∈[V]`$, the facets given by the theorem are all distinct. ∎
Theorem 4.6 may be restated and interpreted in terms of the convolution of chain operators. We refer the interested reader to Appendix B for that approach.
With the aid of PORTA (), we verified that the theorems above give all the extremes and facets of the Eulerian cone for rank at most 6.
###### Theorem 4.7
For rank $`n+1≤6`$, the closed cone $`𝒞_{ℰ}^{n+1}`$ of flag vectors of Eulerian posets is finitely generated. It has $`\binom{n}{\lfloor n/2\rfloor }`$ extreme rays, all generated by the flag vectors of the limit posets $`DP(n,ℐ)`$ for $`ℐ`$ even interval systems on $`[1,n]`$. It has $`\binom{n}{\lfloor n/2\rfloor }`$ facets, all given by Proposition 4.5 and Theorem 4.6.
At rank 7 the situation changes for both extreme rays and facets.
###### Theorem 4.8
($`i`$) The cone $`𝒞_{ℰ}^7`$ is finitely generated, with 24 extreme rays. Twenty of the extreme rays are generated by the flag vectors of the limit posets $`DP(n,ℐ)`$ for $`ℐ`$ even interval systems on $`[1,6]`$.
($`ii`$) The cone $`𝒞_{ℰ}^7`$ has 23 facets. Fifteen of the facets are given by the inequalities of Theorem 4.6. Four additional facets come from the Inequality Lemma 3.1. The remaining four come from Theorem 3.2.
The four special extreme rays of the rank 7 Eulerian cone have corresponding rays in the half-Eulerian cone. The generators for the half-Eulerian cone are all obtained by adding the flag vectors of limit posets associated with noneven interval systems. The summands do not satisfy the conditions of Proposition 2.3 for half-Eulerian posets, but the sum does. The calculations are easily done in terms of the $`\mathrm{}`$-vector, using Proposition 2.8. Specific sequences of half-Eulerian posets have been constructed whose flag vectors converge to these four extremes. The half-Eulerian posets are obtained by “gluing together” posets for each summand. These are then converted to Eulerian posets by the horizontal doubling operation. Below are the sums of limit posets used. Descriptions of the half-Eulerian posets are found in Appendix A.
Extreme 1: $`P(6,\{[1,2],[2,6]\})+P(6,\{[2,5],[5,6]\})`$
Extreme 2: $`P(6,\{[1,3],[3,4],[4,6]\})+P(6,\{[1,2],[2,3]\})+P(6,\{[4,5],[5,6]\})`$
Extreme 3: $`P(6,\{[1,2],[3,4],[4,5]\})+P(6,\{[3,5],[5,6]\})+P(6,\{[1,2],[2,5]\})`$
Extreme 4: $`P(6,\{[1,2],[2,4]\})+P(6,\{[2,5],[5,6]\})+P(6,\{[2,3],[3,4],[5,6]\})`$
Note that for rank at most 7, the two cones $`𝒞_𝒟^{n+1}`$ and $`𝒞_{}^{n+1}`$ are equal, because the generators of extreme rays specified in Theorems 4.7 and 4.8 are horizontal doubles of half-Eulerian limit posets.
Perhaps all the extreme rays of the half-Eulerian cone (if not the Eulerian cone) can be obtained by gluing together Billera-Hetyei limit posets.
A complete description of the closed cone of flag vectors of Eulerian posets remains open, and, as mentioned before, the cone is not even known to be finitely generated. We do not know if convolutions of the inequalities of Proposition 3.1 and Theorem 3.2 completely determine the cone. A better understanding of the construction of extreme rays as sums of Billera-Hetyei limit posets would be valuable.
The study of Eulerian posets is motivated in part by questions about convex polytopes. Is the cone of flag vectors of all Eulerian posets the same as or close to the cone of flag vectors of polytopes? The answer is no. The inequalities of Proposition 3.1 can be strengthened considerably for polytopes. The proof of Proposition 3.1 uses only the fact that in an Eulerian poset each interval has at least two elements of each rank. For convex polytopes, each interval is at least the size of a Boolean algebra of the same rank. Thus, for example, where Proposition 3.1 gives that $`f_{1479}(P)-2f_{179}(P)\geq 0`$ for Eulerian posets, for convex polytopes the inequality $`f_{1479}(P)-20f_{179}(P)\geq 0`$ holds, because the rank 6 Boolean algebra has $`\left(\genfrac{}{}{0pt}{}{6}{3}\right)=20`$ elements of rank 3. For ranks 4 through 7, we have verified that none of the extreme rays of the Eulerian cone is in the closed cone of flag vectors of convex polytopes.
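The coefficient 20 here is a simple subset count: the elements of rank $`k`$ in a rank-$`n`$ Boolean algebra are exactly the $`k`$-subsets of an $`n`$-set. A quick illustrative check in Python (ours, not part of the paper):

```python
from itertools import combinations

def rank_k_elements(n, k):
    """Elements of rank k in the rank-n Boolean algebra: the k-subsets of {1,...,n}."""
    return [frozenset(c) for c in combinations(range(1, n + 1), k)]

# The rank 6 Boolean algebra has C(6,3) = 20 elements of rank 3 -- the
# coefficient in the strengthened inequality for convex polytopes.
assert len(rank_k_elements(6, 3)) == 20
```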
## Appendix Appendix A Some half-Eulerian limit posets of rank $`7`$
Here are the constructions of half-Eulerian posets whose doubles give Extremes 1, 2 and 3 of $`𝒞_{}^7`$. Extreme 4 is the dual of Extreme 3.
In the following, $`C^7`$ denotes a chain of rank $`7`$.
### A.1 $`P(6,\{[1,2],[2,6]\}+\{[2,5],[5,6]\})`$
Take $`D_{[1,2]}^ND_{[2,6]}^N\left(C^7\right)`$ and $`D_{[1,5]}^ND_{[5,6]}^N\left(C^7\right)`$. Identify the elements of both posets at rank $`1`$ and at rank $`6`$. Figure 1 represents the resulting poset for $`N=2`$.
### A.2 $`P(6,\{[1,3],[3,4],[4,6]\}+\{[1,2],[2,3]\}+\{[4,5],[5,6]\})`$
Take
$`P^I(N)`$ $`=`$ $`D_{[1,3]}^ND_{[3,4]}^ND_{[4,6]}^ND_{[4,5]}^{N+1}(C^7)`$
$`P^{II}(N)`$ $`=`$ $`D_{[1,2]}^{N+1}D_{[1,6]}^ND_{[2,4]}^N(C^7),\text{and}`$
$`P^{III}(N)`$ $`=`$ $`D_{[1,5]}^ND_{[3,5]}^ND_{[5,6]}^N(C^7).`$
Identify the elements of $`P^I(N)`$ with the elements of $`P^{II}(N)`$ at ranks $`1,4,5`$, and $`6`$. Identify the elements of $`P^I(N)`$ with the elements of $`P^{III}(N)`$ at ranks $`1,2,3`$, and $`6`$. Figure 2 represents the resulting poset for $`N=2`$.
### A.3 $`P(6,\{[1,2],[3,4],[4,5]\}+\{[3,5],[5,6]\}+\{[1,2],[2,5]\})`$
Take
$$\begin{array}{cccc}\hfill P^I(N)& =& D_{[1,2]}^{N+1}D_{[3,4]}^{N+1}D_{[3,6]}^ND_{[4,5]}^{N+1}(C^7)\hfill & \text{(Figure }\text{3}\text{)}\hfill \\ \hfill P^{II}(N)& =& D_{[1,5]}^{N+1}D_{[3,5]}^{N^2}D_{[5,6]}^N(C^7)\hfill & \text{(Figure }\text{4}\text{), and}\hfill \\ \hfill P^{III}(N)& =& D_{[1,2]}^{N+2}D_{[2,5]}^{N^2N+2}D_{[1,6]}^N(C^7)\hfill & \text{(Figure }\text{5}\text{).}\hfill \end{array}$$
Identify the elements of $`P^I(N)`$ with the elements of $`P^{II}(N)`$ at ranks $`1,2`$, and $`6`$. Identify the elements of $`P^I(N)`$ with the elements of $`P^{III}(N)`$ at rank $`6`$. Figure 6 represents the resulting poset for $`N=2`$.
## Appendix Appendix B The Billera-Liu ring of chain operators
As in Billera and Liu () we view the flag $`f`$-vector as a vector of chain operators $`\left(f_S^{n+1}:S[1,n]\right)`$; here $`f_S^{n+1}(P)=f_S(P)`$ if $`P`$ is a graded poset of rank $`n+1`$ and $`0`$ otherwise. The following multiplication of chain operators $`f_S^n`$ ($`n1`$, $`S[1,n1]`$) was introduced by Kalai in and studied for Eulerian posets by Billera and Liu in :
$$f_S^mf_T^n:=f_{S\cup \{m\}\cup (T+m)}^{m+n}.$$
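As an illustrative aside, the convolution $`f_S^mf_T^n=f_{S\cup \{m\}\cup (T+m)}^{m+n}`$ can be carried out mechanically on index pairs $`(m,S)`$. The Python encoding below (tuples and frozensets) is our own bookkeeping, not from the paper:

```python
def convolve(op1, op2):
    """Kalai convolution of chain operators: f_S^m * f_T^n = f_{S ∪ {m} ∪ (T+m)}^{m+n}.

    An operator f_S^m is encoded as the pair (m, S), with S a frozenset of
    ranks contained in [1, m-1].  (Encoding is ours, for illustration only.)
    """
    m, S = op1
    n, T = op2
    return (m + n, frozenset(S) | {m} | {t + m for t in T})

# f_1^3 * f_∅^2 = f_{1,3}^5: the rank m = 3 of the first factor becomes a new
# index, and the indices of the second factor are shifted up by m.
assert convolve((3, {1}), (2, set())) == (5, frozenset({1, 3}))
```

Associativity of the convolution is immediate from this index arithmetic.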
It is straightforward that given a pair of valid linear inequalities
$$F=\underset{S\subseteq [1,m-1]}{\sum }a_Sf_S^m\geq 0\text{ and }G=\underset{T\subseteq [1,n-1]}{\sum }b_Tf_T^n\geq 0$$
that hold for a class of graded posets, the linear inequality $`FG\geq 0`$ is also valid for the same class. It was observed by Billera and Liu in \[6, Proposition 1.3\] that for the class of all graded posets the converse holds as well: if $`FG\geq 0`$ is a valid inequality, then either both $`F\geq 0`$ and $`G\geq 0`$ are valid inequalities, or both $`F\leq 0`$ and $`G\leq 0`$ are valid inequalities. According to \[6, Theorem 2.1\] the associative algebra generated by all chain operators (whose domain is taken to be the class of all graded posets) is the free polynomial ring in variables $`\{f_{\mathrm{}}^i:i\geq 1\}`$. If we take the degree of the variable $`f_{\mathrm{}}^i`$ to be $`i`$, then linear combinations of the form $`F=\sum _{S\subseteq [1,m-1]}a_Sf_S^m`$ become homogeneous polynomials. Hence, as noted by Billera and Hetyei in , one can use a result of Cohn in \[9, Theorem 3\] that the semigroup of homogeneous polynomials of a free graded associative algebra has unique factorization. Thus an inequality can be checked factor-by-factor. Billera and Hetyei also showed in that for the class of all graded posets the product of two facet inequalities is almost always a facet inequality, every exception being a consequence of the equalities
$$f_{\mathrm{}}^mf_{\mathrm{}}^n=f_m^{m+n}=\left(f_m^{m+n}-f_{\mathrm{}}^{m+n}\right)+f_{\mathrm{}}^{m+n}.$$
For Eulerian and half-Eulerian posets, it is advisable to convert our expressions into the flag-$`\mathrm{}`$ or flag-$`L`$ forms, respectively. Straightforward substitution into the definition shows
$$\mathrm{}_S^m\mathrm{}_T^n=\mathrm{}_{S\cup (T+m)}^{m+n}\text{ and }L_S^mL_T^n=2L_{S\cup (T+m)}^{m+n}.$$
This means that when we write $`[u_S]=L_S^n`$ for the coefficient of the $`ce`$-word $`u_S`$, the convolution of the forms $`\sum _{S\subseteq [1,m-1]}a_S[u_S]`$ and $`\sum _{T\subseteq [1,n-1]}b_T[u_T]`$ is a constant multiple of the form $`\sum _{S\subseteq [1,m-1]}\sum _{T\subseteq [1,n-1]}a_Sb_T[u_Scu_T]`$. In particular, if only monomials of $`c`$ and $`ee`$ occur in each factor, the same holds for the convolution. Hence the same result of Cohn \[9, Theorem 3\] on unique homogeneous factorization proves the following.
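The bookkeeping behind this correspondence — convolution of $`L`$-forms versus concatenation $`u_Scu_T`$ of $`ce`$-words, via $`L_S^mL_T^n=2L_{S\cup (T+m)}^{m+n}`$ — can be sketched numerically. Encoding $`u_S`$ as a length-$`(n-1)`$ string with the letter e at the positions of $`S`$ is our assumption, chosen to match the rule above:

```python
def ce_word(S, length):
    """ce-word u_S: the letter 'e' at positions in S, 'c' elsewhere (our encoding)."""
    return ''.join('e' if i in S else 'c' for i in range(1, length + 1))

def convolve_L(m, S, n, T):
    """L_S^m * L_T^n = 2 L_{S ∪ (T+m)}^{m+n}: returns (scalar, rank, index set)."""
    return (2, m + n, set(S) | {t + m for t in T})

# The index set of the product corresponds to the concatenated word u_S c u_T:
m, S = 4, {2, 3}   # u_S = 'cee' has length m - 1 = 3
n, T = 3, {1, 2}   # u_T = 'ee' has length n - 1 = 2
scalar, rank, U = convolve_L(m, S, n, T)
assert ce_word(S, m - 1) + 'c' + ce_word(T, n - 1) == ce_word(U, rank - 1)
```

In particular, if both factors involve only c's and adjacent ee-pairs (even sets), the concatenated word again consists of c's and ee-pairs.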
###### Proposition B.1
Every homogeneous linear form $`\sum _{S\subseteq [1,n]}a_S\mathrm{}_S^{n+1}`$ or $`\sum _{S\subseteq [1,n]}a_SL_S^{n+1}`$, where $`S`$ ranges over only even sets, can be uniquely written as a product of irreducible expressions of the same kind.
Let us call such expressions even $`\mathrm{}`$-forms and even $`L`$-forms, respectively. The interest in this factorization stems from the following observation.
###### Proposition B.2
Let $`F`$ and $`G`$ both be even $`\mathrm{}`$-forms. Then $`FG0`$ holds for all half-Eulerian posets if and only if either both $`F0`$ and $`G0`$ or both $`F0`$ and $`G0`$ hold for all half-Eulerian posets. The analogous statement is true for even $`L`$-forms and Eulerian posets.
Only the “only if” implication is not completely trivial. In the half-Eulerian case, all we need to observe is that for a pair $`(P,Q)`$ of half-Eulerian posets the poset $`PQ`$ obtained by putting all elements of $`Q`$ above all elements of $`P`$, and identifying the top element of $`P`$ with the bottom element of $`Q`$, is half-Eulerian. Moreover, if for posets $`P_1,P_2`$, and $`Q`$ and forms $`F`$ and $`G`$, $`F(P_1)>0`$, $`F(P_2)<0`$, and $`G(Q)>0`$, then $`FG(P_1Q)=F(P_1)G(Q)>0`$ and $`FG(P_2Q)=F(P_2)G(Q)<0`$. The same argument works for Eulerian posets using $`D_{\{\rho (P)\}}^2(PQ)`$ instead of $`PQ`$.
In terms of convolutions, Proposition 3.1 states that the product of valid inequalities of the form $`f_{\mathrm{}}^n\geq 0`$ and $`f_i^n-2f_{\mathrm{}}^n\geq 0`$ is a valid inequality for all Eulerian posets. Theorem 4.6 describes a subclass of these products that yield facet inequalities. Using ideas extracted from the proof, one can show the following, somewhat strengthened statements.
###### Proposition B.3
If $`F\geq 0`$ defines a facet of $`𝒞_{}^{n+1}`$, then $`F(f_1^{k+1}-2f_{\mathrm{}}^{k+1})\geq 0`$ defines a facet of $`𝒞_{}^{n+k+2}`$.
###### Proposition B.4
If $`F\geq 0`$ defines a facet of $`𝒞_{}^{n+1}`$, and $`F`$ can be written as
$$F=\underset{S\subseteq [1,n]}{\sum }a_SL_S^{n+1}$$
where $`S`$ ranges over only even sets that contain $`n`$, then $`Ff_{\mathrm{}}^{k+1}\geq 0`$ and $`Ff_{\mathrm{}}^1f_{\mathrm{}}^1\geq 0`$ define facets of $`𝒞_{}^{n+k+2}`$ and $`𝒞_{}^{n+3}`$, respectively.
It seems to be difficult, however, even in the case of these simple factors, to predict which products yield facet inequalities. For example $`(f_1^5-2f_{\mathrm{}}^5)f_{\mathrm{}}^1=(f_1^6-2f_{\mathrm{}}^6)+\frac{1}{2}(f_1^3-2f_{\mathrm{}}^3)(f_1^3-2f_{\mathrm{}}^3)\geq 0`$ does not define a facet of $`𝒞_{}^6`$, while it can be shown that $`(f_1^5-2f_{\mathrm{}}^5)f_{\mathrm{}}^3\geq 0`$ defines a facet of $`𝒞_{}^8`$.
# A Kinematic Link between Boxy Bulges, Stellar Bars, and Nuclear Activity in NGC 3079 & NGC 4388
## 1 Introduction
The origin of galaxy nuclear activity is of fundamental astrophysical importance. (In this paper, ‘nuclear activity’ refers to either starburst or black-hole driven activity in the nuclei of galaxies. Similarly, we refer to ‘active galaxies’ as galaxies powered by star formation (starburst galaxies) or through accretion onto a massive black hole (active galactic nuclei or quasars).) Recent surveys suggest that a nonaxisymmetric component to the gravitational potential is necessary to start nuclear activity (e.g., Moles, Márquez, & Pérez 1995). Evidence has mounted that galaxy interactions trigger activity in high-luminosity galaxies (e.g., Sanders & Mirabel 1996; Bahcall et al. 1997; Stockton 1998). However, the evidence is less convincing in lower luminosity, interacting spiral galaxies. These are known to have higher star formation rates on the average than isolated galaxies (e.g., Kennicutt & Keel 1987; Keel & van Soest 1992), but the role of galaxy interactions in triggering nuclear activity in Seyfert galaxies is debated vigorously (e.g., Dahari 1984; Fuentes-Williams & Stocke 1988; Dultzin-Hacyan 1998; De Robertis, Yee, & Hayhoe 1998).
In low-luminosity active galaxies stellar bars may funnel gas down to the scale of the central engine. Observations (e.g., Quillen et al. 1995; Benedict, Smith, & Kenney 1996; Regan, Vogel, & Teuben 1997) and numerical simulations (e.g., Athanassoula 1992; Friedli & Benz 1993, 1995; Piner, Stone, & Teuben 1995) have shown that stellar bars can induce mass inflow at rates sufficient to fuel kpc-scale starbursts ($``$ 1 M yr<sup>-1</sup>). These results are consistent with the statistical excess of barred galaxies among starbursts (Hawarden et al. 1986; Dressel 1988; Arsenault 1989; Martin 1995; Ho 1996; Huang et al. 1996; see Pompea & Rieke 1990 and Isobe & Feigelson 1992, however). It is unclear how these significant inflow rates can be sustained down to the nuclear scale to feed the AGN, but simulations of “bars within bars” (Norman & Silk 1983; Shlosman, Frank, & Begelman 1989; Wada & Habe 1992; Friedli & Martinet 1993; Heller & Shlosman 1994; Maciejewski & Sparke 1997) and the detections of nested bars (Shaw et al. 1995; Wozniak et al. 1995; Friedli et al. 1996; Erwin & Sparke 1998) and “nuclear mini-spirals” (e.g., Ford et al. 1994; Regan & Mulchaey 1999; Martini & Pogge 1999) are promising developments.
Despite these efforts, it is surprising to find little or no observational evidence for Seyfert nuclei to occur preferentially in barred systems (e.g., McLeod & Rieke 1995; Heraudeau et al. 1996; Mulchaey & Regan 1997) or for emission-line strengths of AGNs to depend on the presence of a bar (Ho, Filippenko, & Sargent 1997). Perhaps nuclear activity in unbarred galaxies was triggered by short-lived bars that have since disappeared. Indeed, evolutionary models of disk galaxies suggest that stellar bars are transient features that form and dissolve over only a few orbital periods (e.g., Hohl & Zang 1979; Miller & Smith 1979; Combes & Sanders 1981; Combes et al. 1990; Pfenniger & Friedli 1991; Raha et al. 1991; Merritt & Sellwood 1994; Norman, Sellwood, & Hasan 1996). In these scenarios, a vertical instability in the bar kicks stars above the disk of the galaxy to produce boxy peanut-shaped bulges that eventually settle to become stellar spheroids. Recent kinematic evidence for stellar bars in galaxies with boxy bulges has provided observational support for this scenario (e.g., Bettoni & Galletta 1994; Kuijken & Merrifield 1995; Merrifield & Kuijken 1999; Bureau & Freeman 1999). Secular dynamical evolution has also been invoked to explain correlations between disk and bulge properties (e.g., Courteau, de Jong, & Broeils 1996).
However, these evolutionary scenarios remain virtually untested for active galaxies. Morphological evidence for a barred Seyfert galaxy with a boxy peanut-shaped bulge has recently been presented by Quillen et al. (1997). The present paper will discuss the first unambiguous kinematic evidence for a bar potential in two active galaxies with boxy bulges. The objects we discuss – the edge-on Sc galaxy NGC 3079 and Sb galaxy NGC 4388 – show clear signs of nuclear activity at most wavelengths. NGC 3079 is host to the most powerful windblown superbubble known (Filippenko & Sargent 1992; Veilleux et al. 1994, hereafter VCBTFS). Infrared and radio measurements of this galaxy suggest a nuclear starburst that coexists with an AGN (e.g., Lawrence et al. 1985; Irwin & Seaquist 1988; Haschick et al. 1990; Irwin & Sofue 1992; Baan & Irwin 1995). The radio morphology and optical/X-ray spectral properties of the nucleus of NGC 4388 clearly point to a powerful AGN (e.g., Stone, Wilson, & Ward 1988; Hummel & Saikia 1991; Kukula et al. 1995; Hanson et al. 1990; Iwasawa et al. 1997; Falcke et al. 1998). A complex of highly ionized line-emitting gas clouds extends several kpc above the galactic plane (e.g., Pogge 1988). The origin of this extraplanar material has been discussed in detail by Veilleux et al. (1999; hereafter VBCTM).
The proximity of NGC 3079 (17.3 Mpc or 84 pc arcsec<sup>-1</sup> based on Tully, Shaya, & Pierce 1992) and NGC 4388 (16.7 Mpc or 81 pc arcsec<sup>-1</sup> based on Yasuda, Fukugita, & Okamura 1997) allows for detailed structural studies. The stellar bulge of NGC 3079 presents a striking box/peanut shape at optical wavelengths (Shaw, Wilkinson, & Carter 1993 and references therein). A stellar bar in NGC 3079 has been posited (e.g., de Vaucouleurs et al. 1991), but such suggestions were often based on the morphology of the central region rather than on the kinematics. To date, the most detailed study of the stellar kinematics in the bulge indicates cylindrical rotation that can be explained without recourse to a non-axisymmetric distribution function (Shaw et al. 1993). However, Merrifield & Kuijken (1999) have recently argued for a bar based on the peculiar \[N II\] emission-line profiles along the major axis of NGC 3079. The situation for NGC 4388 is equally ambiguous. Recent K-band imaging by McLeod & Rieke (1995) revealed a boxy bulge, but the nearly edge-on aspect of this galaxy prevented them from detecting the morphological signature of a bar.
The two-dimensional velocity fields that we will present in this paper reveal unambiguously a bar potential in both NGC 3079 and NGC 4388. Contrary to previous work based on long-slit spectra (e.g., Kuijken & Merrifield 1995; Merrifield & Kuijken 1999; Bureau & Freeman 1999), our detection of the bar potential in these two galaxies does not rely on an inner Lindblad resonance ($`x_2`$-like orbits) or strong bar-induced shocks that split emission-line profiles when the bar is seen edge-on.
Our paper is organized as follows. In §2, we briefly describe the methods used to obtain and reduce the near-infrared images and optical Fabry-Perot observations. Results on NGC 3079 and NGC 4388 are discussed in §3 and §4, respectively. For each galaxy, we first analyze the morphology of the stellar bulge and its degree of boxiness. Next, we use the observed velocity field of the ionized gas to argue for a bar potential. Self-consistent dynamical interpretation of the disk velocity field requires analysis of the surface photometry to constrain the mass distribution, a task beyond the scope of this paper. The models presented here are purely kinematical in nature, but they show the clear kinematic signature of bar streaming motions in both objects. In §5, we discuss the implications of our results using the predictions of bar evolutionary models, and attempt to quantify how bars fuel the nuclear activity. We summarize our results in §6 along with future avenues of research.
## 2 Observations and Data Reduction
### 2.1 Fabry-Perot Spectroscopy
During the course of our spectroscopic survey of active galaxies with extended line-emitting regions, we obtained a large Fabry-Perot datacube (x, y, $`\lambda `$) of NGC 3079 that spans the H$`\alpha `$ \+ \[N II\] $`\lambda \lambda `$6548, 6583 emission-line complex, and two datacubes of NGC 4388 centered on the H$`\alpha `$ and \[O III\] $`\lambda `$5007 emission lines. Observational parameters of the Fabry-Perot spectra are listed in Table 1. The observational setup and reductions used to obtain and reduce these data have been detailed elsewhere (VCBTFS, VBCTM, and Veilleux, Cecil, & Bland-Hawthorn 1995; hereafter VCB). In both NGC 3079 and NGC 4388, the observed emission-line profiles were parameterized by simple Gaussian functions. Deviations from Gaussians will be discussed in §5.1. The best fits were determined by the least-$`\chi ^2`$ method on spectrally smoothed emission line profiles using a 1/4 – 1/2 – 1/4 spectral filter (Hanning smoothing). Spatial Gaussian smoothing with $`\sigma `$ = 1 pixel was used to improve the sensitivity to fainter features. For the H$`\alpha `$ \+ \[N II\] $`\lambda \lambda `$6548, 6583 complex in NGC 3079, an iterative fitting method was used whereby all parameters of the three Gaussian profiles were first left unconstrained (except for the \[N II\] $`\lambda `$6583/$`\lambda `$6548 ratio which was fixed to its quantum value, 2.98; Osterbrock 1989). The continuum levels and centroids of the H$`\alpha `$ and \[N II\] profiles determined from this first iteration were then used as input parameters for a second iteration.
### 2.2 Infrared Imaging
Infrared images of NGC 3079 and NGC 4388 were obtained in the course of imaging surveys of nearby galaxies by R. B. Tully and by S. Courteau and J. Holtzman, respectively. We thank our colleagues for providing us with their data. On June 23 1993, the University of Hawaii 2.2-meter telescope on Mauna Kea equipped with the NICMOS-3 camera (Hodapp, Rayner, & Irwin 1992) was used to obtain a K-band image of NGC 3079. This imaging system uses a 256 $`\times `$ 256 pixel NICMOS-3 HgCdTe detector array and interchangeable reimaging lenses to provide two spatial scales. The large angular size of the program galaxies required the 2:1 reducing optics, resulting in a field of $``$ 3 square at a scale of 0$`\stackrel{}{\mathrm{.}}`$75 pixel<sup>-1</sup>. The K filter described in Wainscoat & Cowie (1992) minimized the thermal background. The integration time for each frame was 1 minute, and the total exposure on NGC 3079 was 6 minutes.
The H-band image of NGC 4388 was obtained on April 28, 1994 using the Cryogenic Optical Bench (COB) on the KPNO 2.1-meter telescope. This system uses a 256 $`\times `$ 256 pixel InSb detector array, and provides a field of $``$ 2 square at a scale of 0$`\stackrel{}{\mathrm{.}}`$50 pixel<sup>-1</sup>. The total exposure on NGC 4388 was 1,000 seconds.
Both image sets were reduced using standard techniques (e.g., Hodapp et al. 1992; McCaughrean 1989), and were not flux calibrated because we are only interested in the bulge/disk structure.
## 3 Results on NGC 3079
### 3.1 Stellar Morphology
The K-band image of NGC 3079 is shown in Figure 1$`a`$, rotated to place the photometric major axis of the galaxy vertically. This axis was found to be along PA<sub>maj</sub> = 169$`\mathrm{°}`$ $`\pm `$ 4$`\mathrm{°}`$, in good agreement with the results of Irwin & Seaquist (1991; PA<sub>maj</sub> = 166$`\mathrm{°}`$). Also shown are two simple photometric models of the galaxy. In Figure 1$`c`$, the isophotes representing the sum of an exponential disk and a spherically symmetric bulge are shown, emphasizing the boxiness of the observed bulge. An attempt is made in Figure 1$`b`$ to model the observed isophotes more accurately with a box-shaped structure of the form $`I=I_{\mathrm{box}}+I_{\mathrm{disk}}`$ where
$`I_{\mathrm{box}}=I_{b0}(1+|{\displaystyle \frac{R}{a}}|^p+|{\displaystyle \frac{z}{b}}|^p)^{-1/p},`$ (1)
$`I_{\mathrm{disk}}=I_{d0}\mathrm{exp}(-{\displaystyle \frac{R}{R_s}})\mathrm{exp}(-{\displaystyle \frac{|z|}{z_s}}).`$ (2)
This analytic formulation of the boxy structure was first introduced by Pfenniger & Friedli (1991). The parameters of the best fit model are $`I_{b0}/I_{d0}`$ = 4.6, $`a:b=1:0.43`$, $`p`$ = 3.5, $`R_s`$ = 3.0 kpc, and $`z_s`$ = 380 pc (for an inclination of 82$`\mathrm{°}`$, §3.2.2). For comparison, the spherically symmetric bulge in Figure 1$`c`$ has $`I_{b0}/I_{d0}`$ = 1.8, $`b:a=1:1`$, and $`p`$ = 2.
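As a minimal numerical sketch, the two-component model of equations (1) and (2) can be evaluated directly. We assume the intensity declines outward (a $`-1/p`$ exponent on the bulge term and negative exponential arguments for the disk), normalize $`I_{d0}`$ = 1 with lengths in kpc, and take the best-fit NGC 3079 values quoted above as defaults; the absolute scale $`a`$ = 1 kpc is our own illustrative choice:

```python
import numpy as np

def boxy_bulge_plus_disk(R, z, Ib0=4.6, Id0=1.0, a=1.0, b=0.43, p=3.5,
                         Rs=3.0, zs=0.38):
    """Box-shaped bulge (Pfenniger & Friedli 1991 form) plus exponential disk.

    Defaults follow the best-fit NGC 3079 parameters quoted in the text
    (I_b0/I_d0 = 4.6, a:b = 1:0.43, p = 3.5, R_s = 3.0 kpc, z_s = 380 pc).
    """
    I_box = Ib0 * (1.0 + np.abs(R / a)**p + np.abs(z / b)**p)**(-1.0 / p)
    I_disk = Id0 * np.exp(-np.abs(R) / Rs) * np.exp(-np.abs(z) / zs)
    return I_box + I_disk

# Sanity check: intensity declines monotonically away from the center.
R = np.linspace(0.0, 6.0, 7)
I = boxy_bulge_plus_disk(R, 0.0)
assert np.all(np.diff(I) < 0)
```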
The residual image after subtracting the boxy model from the data is shown in Figure 1$`d`$. The striking X-structure in the central region indicates a peanut-shaped bulge. Although the boxiness of the bulge in this galaxy has been known for some time (e.g., Shaw 1987; Young, Claussen, & Scoville 1988; Shaw et al. 1993), our infrared image emphasizes its importance by minimizing emission from the young stellar disk and the effects of dust. A detailed comparison of Figure 1$`a`$ and Figure 3 from VCB indicates that the lower portions of the X-shaped line-emitting filaments reported by VCB do not coincide spatially with the peanut-shaped residual observed at infrared wavelengths. The boxiness of the infrared isophotes is therefore unlikely to be due to hot dust or line emission from the X-shaped filaments.
Also visible in the residual image is a well-defined warp to the north-west. A southern warp may also be present, but broader spatial coverage is needed to see if it bends east (“integral sign” warp) or west (warp with mirror symmetry with respect to the minor axis). The distributions of line-emitting (VCB) and H I gas (Irwin & Seaquist 1991) strongly favor the former orientation.
Finally, note the large residual nuclear core and disk in Figure 1$`d`$. This nuclear disk coincides roughly with the molecular disk and nuclear starburst in this galaxy (Lawrence et al. 1985; Young, Claussen, & Scoville 1988; Irwin & Sofue 1992; Sofue & Irwin 1992; Baan & Irwin 1995).
### 3.2 Kinematics of the Gaseous Galactic Disk
#### 3.2.1 General Description
Figure 2$`a`$ reproduces the distribution of line-emitting gas derived from our Gaussian fits that was presented in VCB. The velocity fields from these data are shown in Figures 2$`b`$ and 2$`c`$. The uncertainty on these velocities is $``$ 35 km s<sup>-1</sup> in the brighter disk H II regions, but may be 2-3 times larger in the fainter material outside of the disk. The H$`\alpha `$ and \[N II\] $`\lambda `$6583 velocity fields shown in Figures 2$`b`$ and 2$`c`$ generally agree within these errors.
In these figures a string of black dots traces the steepest gradient through the observed velocity field (using the method described in Bland 1986); this is the kinematic line of nodes. Figure 3 shows the H$`\alpha `$ rotation curve derived along this locus. It rises linearly in the inner region and flattens to $``$ 245 $`\pm `$ 25 km s<sup>-1</sup> (or a deprojected value of $`250\pm 25`$ km s<sup>-1</sup> if $`i`$ = 82$`\mathrm{°}`$; §3.2.2) beyond a radius of $``$ 1 kpc. A systemic velocity of $``$ 1150 $`\pm `$ 25 km s<sup>-1</sup> is derived from this rotation curve. This value agrees well with estimates from other optical datasets (e.g., 1177 km s<sup>-1</sup> from Humason et al. 1956; 1150 km s<sup>-1</sup> from Carozzi 1977) and CO spectra (e.g., 1150 km s<sup>-1</sup> from Sofue & Irwin 1992), but slightly exceeds the value derived from HI data (e.g., 1120 km s<sup>-1</sup> from Rots 1980; 1125 km s<sup>-1</sup> from Fisher & Tully 1981; 1118 $`\pm `$ 3 km s<sup>-1</sup> from Staveley-Smith & Davies 1988; 1124 $`\pm `$ 10 km s<sup>-1</sup> from Irwin & Seaquist 1991). This apparent discrepancy is probably due to a slight asymmetry between the inner optical/CO rotation curve ($`R`$ 8 kpc) and the outer H I rotation curve. First noted by Sofue (1996), this asymmetry may arise from a misaligned dark halo or tidal interaction with nearby companions such as NGC 3073. The systemic velocity derived from optical spectra of the stellar bulge of NGC 3079 is closer to the HI value (1114 $`\pm `$ 9 km s<sup>-1</sup> from Shaw et al. 1993).
Well-defined ‘tongues’ of highly redshifted (blueshifted) gas extend immediately south (north) of the nucleus. Interestingly, these two ‘tongues’ are slightly misaligned at the nucleus. The kinematic line of nodes also jumps 10$`\mathrm{°}`$ clockwise on either side of the nucleus immediately outside the ‘tongues’ ($`R`$ 2.5 kpc; Fig. 2), then twists slightly anti-clockwise. This complex behavior may arise from a combination of (1) a warp in the galactic disk, (2) patchy dust obscuration, (3) streaming motion along spiral arms, and (4) eccentric orbits aligned with a bar. Although dust may contribute to the slight large-scale differences between the present optical data and the H I data of Irwin & Seaquist (1991), the near-perfect bisymmetry of the velocity field near the center of the galaxy strongly suggests that dust insignificantly affects the observed velocity field. Similarly, the photometric warp detected on the outskirts of the galaxy at optical, infrared, and radio wavelengths, does not affect the kinematics of the gas inside a radius of $``$ 8 kpc. Finally, the coincidence between the clockwise shift in the kinematic line of nodes and the anti-clockwise “spiral arms” seen on the K-band image of NGC 3079 (Fig. 1$`a`$) in both the northern and southern sections of the disk is good evidence for elliptic streaming through the spiral arms. However, this type of streaming motion cannot explain the peculiarities of the velocity field near the central region.
‘Twisting’ of the isovelocity contours in the central portion of galaxies often signals a bar potential (e.g., Kalnajs 1978; Roberts, Huntley, & van Albada 1979; Sanders & Tubbs 1980; Schwarz 1981). The nuclear offset between the two ‘tongues’ in NGC 3079 can arise in this manner, as described next.
#### 3.2.2 Kinematic Models
We began our analysis by exploring the parameter space for an axisymmetric disk with inclination 50$`\mathrm{°}`$ $``$ $`i`$ $``$ 90$`\mathrm{°}`$, kinematic major axis 160$`\mathrm{°}`$ $``$ PA $``$ 200$`\mathrm{°}`$, and systemic velocity 1,000 $``$ $`V_{\mathrm{sys}}`$ $``$ 1,300 km s<sup>-1</sup>. This region of parameter space was selected based on the results of previous optical and HI kinematic studies of NGC 3079 (see references in the previous section). A smoothed (flux-weighted) version of the rotation curve derived along the kinematic line of nodes (Fig. 3) was used for the analysis. When searching for the best-fitting model, the whole disk out to R $``$ 12 kpc was considered. Deviations from Gaussian profiles were not considered in the following analysis (see §5.1). Dust obscuration, one likely source of profile asymmetry in this highly inclined galaxy, is also not included in the models.
Figure 4 shows the best-fitting axisymmetric model ($`i`$ = 82$`\mathrm{°}`$, PA = 169$`\mathrm{°}`$, and $`V_{\mathrm{sys}}`$ = 1,150 km s<sup>-1</sup>). Figure 4$`c`$ shows the residuals after subtracting the model from our measured H$`\alpha `$ velocity field. We conclude that an axisymmetric model does not fit the observed velocity field.
To look for the kinematic signature of a bar in NGC 3079, we constructed more elaborate models of gas motions in the inner disk that are similar to those described in Staveley-Smith et al. (1990). We model the intrinsic gas orbits as nonintersecting, inclined elliptical annuli that conserve angular momentum (so there are no hydrodynamic shocks or gas flows near the corotation radius or in the bar; Athanassoula 1992; Piner, Stone, & Teuben 1995). The ellipticity was held constant within a radius $`R_b`$, then was assumed to decrease linearly out to a radius $`R_d`$ where the orbits are circular.
The velocity field for the best-fitting model is shown in Figure 4$`b`$; model parameters are listed in Table 2. Our model invokes a bar with moderately eccentric orbits ($`e`$ = $`b/a`$ = 0.7) with $`R_b`$ = 3.6 kpc and $`R_d`$ = 6.0 kpc aligned along PA = 130$`\mathrm{°}`$ $`\pm `$ 10$`\mathrm{°}`$ intrinsic to the disk. This projects to PA = 97$`\mathrm{°}`$ on the sky \[using $`\mathrm{tan}(PA_{\mathrm{obs}})=\mathrm{tan}(PA_{\mathrm{intrinsic}})\mathrm{sec}i`$\]. The position angle of the bar is well constrained by our models. When the bar is close to the major or minor axis, one does not get the twist of the line of nodes in the center, nor the NW-SE bisymmetric structure at large radius. In this respect, the intermediate-angle bar models are a great success. Compared to the model with purely circular motions (Fig. 4$`c`$), the model that incorporates elliptical streaming has significantly smaller residuals in the central portion of the disk (Fig. 4$`d`$). The dispersion in the residuals within the inner disk (R $``$ 6 kpc) is only 34 km s<sup>-1</sup> compared to 40 km s<sup>-1</sup> for models without the bar.
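The quoted projection from intrinsic to sky position angle can be reproduced directly from the bracketed relation; folding the result into the range \[0°, 180°) is our own convention:

```python
import math

def project_pa(pa_intrinsic_deg, incl_deg):
    """Project a position angle in the disk plane onto the sky using
    tan(PA_obs) = tan(PA_intrinsic) * sec(i); result folded into [0, 180)."""
    t = math.tan(math.radians(pa_intrinsic_deg)) / math.cos(math.radians(incl_deg))
    return math.degrees(math.atan(t)) % 180.0

# A bar at PA = 130 deg intrinsic to a disk inclined at i = 82 deg projects
# to ~97 deg on the sky, as quoted in the text.
assert round(project_pa(130.0, 82.0)) == 97
```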
## 4 Results on NGC 4388
### 4.1 Stellar Morphology
The top panel of Figure 5 shows the H-band image of NGC 4388. The position of the photometric major axis of NGC 4388 derived from this image is PA<sub>maj</sub> = 90$`\mathrm{°}`$ $`\pm `$ 4$`\mathrm{°}`$. As for NGC 3079, we attempted to model the morphology of NGC 4388 using either the sum of an exponential disk and a spherically symmetric bulge (Fig. 5$`c`$) or the sum of an exponential disk and a box-shaped bulge (eqns 1 & 2; Fig. 5$`b`$). The second solution clearly fits the data better. The parameters of this model are $`I_{b0}/I_{d0}`$ = 5.0, $`a:b`$ = 1:0.35, $`p`$ = 3.5, $`R_s`$ = 1.8 kpc, and $`z_s`$ = 0.32 kpc (for inclination –78$`\mathrm{°}`$, derived from our kinematic data; §4.2.2).
The residuals after subtracting this model from the data are shown in Figure 5$`d`$. Residual spiral arms run east-west in the galactic plane and coincide spatially with strings of HII regions in the H$`\alpha `$ emission-line image (see Fig. 1$`a`$ of VBCTM). The excess H-band emission at these star-forming complexes is probably due to young supergiant stars. A bright fan-like residual is also visible slightly south of the nucleus, coincident with bright H$`\alpha `$ and \[O III\] emission. Some of this residual emission comes from the AGN in NGC 4388 (VBCTM).
### 4.2 Kinematics of the Gaseous Galactic Disk
#### 4.2.1 General Description
The emission-line maps and velocity fields derived from the H$`\alpha `$ and \[O III\] $`\lambda `$5007 data cubes were presented by VBCTM; the velocity fields are reproduced in Figure 6. Uncertainties range from $``$20 km s<sup>-1</sup> in the bright line-emitting regions to $``$100 km s<sup>-1</sup> in the fainter areas. The ellipse superposed on this figure differentiates between material in the disk and beyond. A clear kinematic dichotomy is evident. The models described in the next section seek to reproduce the velocity field of the disk material. The kinematics of the extraplanar gas are discussed in detail in VBCTM.
The velocity field of the \[O III\]-emitting gas in the disk resembles that of the H$`\alpha `$-emitting gas. It is characterized by a large-scale east-west gradient that indicates rotation in the galactic disk. Figure 7 shows the rotation curve derived along the line of nodes of the H$`\alpha `$ velocity field. The rotation curve derived from our data is consistent with earlier results (cf. Rubin, Kenney, & Young 1997 and references therein). The systemic velocity derived from this rotation curve (2,525 $`\pm `$ 25 km s<sup>-1</sup>) also agrees with published values (2515 $`\pm `$ 7 km s<sup>-1</sup> from Helou et al. 1981; 2,529 $`\pm `$ 3 km s<sup>-1</sup> from Corbin, Baldwin, & Wilson 1988; 2,554 $`\pm `$ 39 km s<sup>-1</sup> from Ayani & Iye 1989; 2,525 $`\pm `$ 15 km s<sup>-1</sup> from Petitjean & Durret 1993; 2,538 $`\pm `$ 26 km s<sup>-1</sup> from de Vaucouleurs et al. 1991; 2,502 $`\pm `$ 10 km s<sup>-1</sup> from Rubin, Kenney, & Young 1997). ‘Twisting’ of the isovelocity contours is clearly visible in the central 1$`\mathrm{}`$ diameter of NGC 4388. As for NGC 3079, the near-perfect bisymmetry of the velocity field in the central region of the galaxy strongly suggests that dust does not significantly affect the observed velocity field there. The kinematic models described in the next section indicate that the velocity field is best generated by a bar.
#### 4.2.2 Kinematic Models
Our kinematic models seek to reproduce the H$`\alpha `$ velocity field. In choosing the H$`\alpha `$ data for this analysis we have tried to minimize effects associated with nuclear activity. Dynamical processes such as entrainment by AGN-powered radio jets generally have a stronger effect on the kinematics of the highly ionized \[O III\]-emitting gas than on those of the low-ionization H$`\alpha `$-emitting material (e.g., Whittle et al. 1988).
The procedure used to find the best-fitting model for NGC 4388 followed the same steps as for NGC 3079. First, we explored the parameter space for an axisymmetric disk with inclination –90$`\mathrm{°}`$ $`\le `$ $`i`$ $`\le `$ –50$`\mathrm{°}`$, kinematic major axis along 70$`\mathrm{°}`$ $`\le `$ PA $`\le `$ 110$`\mathrm{°}`$, and a systemic velocity of 2,400 $`\le `$ $`V_{\mathrm{sys}}`$ $`\le `$ 2,600 km s<sup>-1</sup> (this range in the parameters brackets the results of previous kinematic studies; see references in §4.2.1). Note that the negative inclination means that the north rim of the disk is the near side. This is consistent with the morphology of the high-ionization gas as explained in VBCTM. Under these conditions, the two spiral arms of bright HII regions trail the rotation, as is generally the case in spiral galaxies. To simplify the analysis, we used a smooth (projected) rotation curve of the form:
$`V=V_0{\displaystyle \frac{[1-\mathrm{exp}(-\alpha R)]}{[1-\mathrm{exp}(-\alpha )]}}+V_{\mathrm{sys}},`$ (3)
where $`R`$ is the galactocentric radius in arcseconds. Deviations from Gaussians were not considered in our analysis (see §5.1).
Figure 8$`a`$ shows the best-fitting axisymmetric model ($`i`$ = –78$`\mathrm{°}`$, PA = 90$`\mathrm{°}`$, $`\alpha `$ = 0.10 arcsec<sup>-1</sup>, $`V_0`$ = 180 km s<sup>-1</sup> and $`V_{\mathrm{sys}}`$ = 2,525 km s<sup>-1</sup>) and the result of subtracting it from the H$`\alpha `$ velocity field. The residuals in Figure 8$`c`$ are significant and reveal substantial non-circular motion. Variations in the rotation curve do not significantly improve the quality of the fit.
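As an illustration of how a parametrized curve of this kind can be fit, the sketch below recovers ($`V_0`$, $`\alpha `$, $`V_{\mathrm{sys}}`$) from mock data by a grid search in $`\alpha `$ combined with linear least squares for the other two parameters. It uses a simplified saturating form $`V(R)=V_0[1-\mathrm{exp}(-\alpha R)]+V_{\mathrm{sys}}`$ rather than the exact normalization of equation (3), and the mock data are hypothetical; this is not the fitting code used in the paper.

```python
import numpy as np

# Simplified projected rotation-curve model (cf. eq. [3], normalization omitted):
# V(R) = V0 * (1 - exp(-alpha * R)) + Vsys, with R in arcsec.
rng = np.random.default_rng(0)
R = np.linspace(1.0, 40.0, 40)                       # arcsec
V_true = 180.0 * (1.0 - np.exp(-0.10 * R)) + 2525.0  # km/s
V_obs = V_true + rng.normal(0.0, 5.0, R.size)        # mock measurements

best = None
for alpha in np.linspace(0.02, 0.30, 141):
    # For fixed alpha the model is linear in (V0, Vsys); solve by least squares.
    A = np.column_stack([1.0 - np.exp(-alpha * R), np.ones_like(R)])
    coef, _, _, _ = np.linalg.lstsq(A, V_obs, rcond=None)
    chi2 = np.sum((A @ coef - V_obs) ** 2)
    if best is None or chi2 < best[0]:
        best = (chi2, alpha, coef[0], coef[1])

chi2, alpha_fit, V0_fit, Vsys_fit = best
print(f"alpha = {alpha_fit:.3f} arcsec^-1, V0 = {V0_fit:.0f}, Vsys = {Vsys_fit:.0f}")
```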
Plausible dynamical origins for the non-circular motions in the disk include forcing by a bar or oval distortion, density wave streaming associated with the spiral arms, and disk warping. Our spectra span only the inner 5 kpc of the disk, so the last effect is unimportant. A stellar bar has been suggested previously based on the morphology of the central region, but no consensus has emerged on its position angle (Phillips & Malin 1982: 30$`\mathrm{°}`$; Corbin, Baldwin, & Wilson 1988: 30$`\mathrm{°}`$; Colina et al. 1987: 130$`\mathrm{°}`$; see also Rubin et al. 1997). Figure 8$`b`$ presents the best-fitting model that incorporates elliptical streaming; Table 2 lists the model parameters. The fit used a smooth rotation curve of form (3) ($`V_0`$ = 180 km s<sup>-1</sup>, $`\alpha `$ = 0.10 arcsec<sup>-1</sup>). Our model involves a bar with radius $`R_b`$ = 1.5 kpc and highly eccentric orbits ($`e`$ = 0.3) aligned along PA = 135$`\mathrm{°}`$ intrinsic to the disk. This projects to PA = 100$`\mathrm{°}`$ on the sky. The spiral-arm effect arises because the gas orbits become increasingly less elliptical from the ends of the bar to the edge of the disk ($`e`$ = 1 at the edge of the disk, $`R_d`$ ∼ 5 kpc) and because the intrinsic bar PA rotates from 135$`\mathrm{°}`$ to 90$`\mathrm{°}`$. Elliptical streaming significantly reduces the velocity residuals (Fig. 8$`d`$). The residuals follow a highly symmetric Gaussian distribution with a dispersion of only 20 km s<sup>-1</sup>. The fit is remarkably good considering that hydrodynamic effects (e.g., strong shocks along the bar; Athanassoula 1992; Piner, Stone, & Teuben 1995) and dust obscuration are not modeled.
## 5 Discussion
### 5.1 Additional Kinematic Evidence for Stellar Bars in NGC 3079 and NGC 4388?
Strong kinematic evidence for bar streaming motions in NGC 3079 and NGC 4388 was presented in §3.2 and §4.2, respectively. The velocity fields used for this analysis were derived by fitting Gaussians to the emission-line profiles. Deviations from Gaussians were detected in the inner disk of both galaxies. In NGC 3079, emission-line profiles are split north of the nucleus (PA ∼ –10$`\mathrm{°}`$, i.e. along the disk) out to a radius of ∼16″ (1.3 kpc), but in a sector only ∼2 – 3″ (170 – 335 pc) wide. The degree of line splitting varies smoothly with radius, first increasing monotonically to ∼275 km s<sup>-1</sup> at 200 – 300 pc radii, beyond which it decreases. These anomalous line profiles were described in Filippenko & Sargent (1992; their Fig. 2) and VCBTFS (their Fig. 11$`a`$). Split line profiles were also detected by Irwin & Sofue (1992) in the inner H<sub>2</sub> molecular disk of this galaxy. Similar line splitting with maximum amplitude ∼150 km s<sup>-1</sup> is observed on both sides of the nucleus of NGC 4388 out to radii of ∼10″ along the disk (∼1 kpc; see also Iye & Ulrich 1986; Colina et al. 1987; Ayani & Iye 1989; Veilleux 1991; Rubin et al. 1997).
The origin of this line splitting is unclear. VCBTFS tentatively interpreted the line splitting in the inner disk of NGC 3079 as being due to the effects of the nuclear outflow on the gaseous component of the galactic disk (see also Sofue & Irwin 1992). Rubin et al. (1997) have argued that the anomalous kinematics in the inner disk of NGC 4388 indicates the presence of a discrete rapidly rotating circumnuclear disk. We argue that bar-induced noncircular motions may also split the lines. Kuijken & Merrifield (1995) and, more recently, Bureau & Athanassoula (1999) have pointed out that the line-of-sight velocity distribution of both the gaseous and stellar components in barred potentials observed edge-on may be double-peaked and characterized by a “figure-of-eight” variation with radius out to roughly the end of the bar. Their conclusions have since been confirmed by hydrodynamical gas simulations (Athanassoula & Bureau 1999). Some of these simulations reproduce remarkably well the velocity field of the inner disk of NGC 4388 (e.g., Fig. 3 of Rubin et al. 1997). The line splitting in this galaxy is symmetric with respect to the nucleus and extends out to near the end of the stellar bar. The small extent of the line splitting in NGC 3079 relative to the size of the bar ($`R_b`$ = 3.6 kpc) and its asymmetry with respect to the nucleus are more difficult to explain in this scenario, but selective dust obscuration by material in the disk may account for some of the asymmetry (see also long-slit observations of this galaxy by Merrifield & Kuijken 1999).
### 5.2 The Bar – Boxy Bulge Connection
The co-existence of stellar bars and boxy bulges in NGC 3079 and NGC 4388 brings additional observational support to evolutionary models which posit that box/peanut-shaped bulges of disk galaxies arise from a vertical instability in bars. Given bars in both galaxies, there is no need to invoke accretion to explain the boxy bulges in these galaxies (e.g., May, van Albada, & Norman 1985; Binney & Petrou 1985; Rowley 1986, 1988). Note, however, that we cannot exclude the possibility that each bar was created by a galaxy interaction or merger and then evolved into a boxy bulge through the bar-buckling instability (e.g., Noguchi 1987; Gerin, Combes, & Athanassoula 1990; Hernquist, Heyl, & Spergel 1993; Mihos et al. 1995; Miwa & Noguchi 1998).
Evolutionary models which invoke resonant heating of the bar (Combes et al. 1990; Pfenniger & Norman 1990; Pfenniger & Friedli 1991) predict the formation of a boxy bulge over ∼10 bar rotations. The bulge may form even more quickly if the bar is subject to bending or fire-hose instabilities (Raha et al. 1991; Merritt & Sellwood 1994). The bar rotation periods in NGC 3079 and NGC 4388 can be derived assuming that the bars end near the corotation radius and using the observed rotation velocities there. We find $`\tau _{\mathrm{bar}}`$(NGC 3079) ≈ $`\tau _{\mathrm{bar}}`$(NGC 4388) ≈ 1 $`\times `$ 10<sup>8</sup> yrs. The boxy bulges in both NGC 3079 and NGC 4388 were therefore formed over ∼10<sup>9</sup> yrs.
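As a quick check of this estimate, $`\tau _{\mathrm{bar}}=2\pi R_b/V(R_b)`$ can be evaluated directly. The rotation speeds at the bar ends used below (≈210 km s<sup>-1</sup> for NGC 3079 and ≈150 km s<sup>-1</sup> for NGC 4388) are illustrative assumptions consistent with typical disk rotation speeds for these galaxies, not values quoted in the text:

```python
import math

KM_PER_KPC = 3.086e16
SEC_PER_YR = 3.156e7

def bar_period_yr(R_b_kpc, V_kms):
    """Bar rotation period assuming corotation at the bar end (R_CR ~ R_b)."""
    return 2.0 * math.pi * R_b_kpc * KM_PER_KPC / V_kms / SEC_PER_YR

tau_3079 = bar_period_yr(3.6, 210.0)  # NGC 3079: R_b = 3.6 kpc (assumed V)
tau_4388 = bar_period_yr(1.5, 150.0)  # NGC 4388: R_b = 1.5 kpc (assumed V)
print(f"NGC 3079: {tau_3079:.1e} yr, NGC 4388: {tau_4388:.1e} yr")
# Each is ~10^8 yr to within a factor of ~2, so ~10 rotations gives ~10^9 yr.
```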
The rate of bar heating/bulge formation is also a strong function of the central mass concentration in these galaxies (e.g., Norman, May, & van Albada 1985; Hasan & Norman 1990; Friedli & Pfenniger 1991; Friedli & Benz 1993, 1995; Hasan, Pfenniger, & Norman 1993; Norman, Sellwood, & Hasan 1996; Merritt & Quinlan 1998; Sellwood & Moore 1999). Large mass concentrations create inner-Lindblad resonances (ILRs). ILRs form the anti-bar orbit families $`x_2`$ and $`x_3`$, which thereby weaken the stellar bar.
Simulations (Norman et al. 1996; Merritt & Quinlan 1998) suggest that the bar may be destroyed within a fraction of its rotation period if the core mass exceeds ∼1 – 5% of the combined disk and bulge mass. Is this condition met in NGC 3079 and NGC 4388? An estimate of the core mass can be derived assuming the nuclear activity is due to accretion around a black hole of mass comparable to that of the galaxy core. Recent compilations of black hole masses (Kormendy et al. 1997; Richstone 1998; Magorrian et al. 1998) suggest a rough proportionality between black hole and bulge masses with $`M_{\mathrm{BH}}\sim 0.005M_{\mathrm{bulge}}`$. The constant of proportionality may depend slightly on morphological type, perhaps increasing from late- to early-type galaxies (e.g., $`M_{\mathrm{BH}}\sim 2.5\%M_{\mathrm{bulge}}`$ in the S0 galaxies NGC 3115 and NGC 4342; Merritt 1998), but this variation is of little importance here because we are dealing with two late-type galaxies. In this case, we get
$`\eta \equiv {\displaystyle \frac{M_{\mathrm{core}}}{M_{\mathrm{bulge}}+M_{\mathrm{disk}}}}\approx {\displaystyle \frac{M_{\mathrm{BH}}}{M_{\mathrm{bulge}}+M_{\mathrm{disk}}}}\approx 0.005{\displaystyle \frac{M_{\mathrm{bulge}}}{M_{\mathrm{bulge}}+M_{\mathrm{disk}}}}\lesssim 0.005,`$ (4)
implying that the central black holes in these galaxies do not play a significant role in the destruction of the bars. The very presence of AGN in NGC 3079 and NGC 4388 may require $`\eta `$ to be smaller than the critical value for rapid bar dissolution because destruction of the bar would end mass accretion onto the nucleus (see Merritt 1998).
The short time scales that we derive suggest that we are unlikely to have caught the bars in the act of forming boxy bulges. This is probably also the case for most barred spirals with boxy bulges, given the large fraction of disk galaxies with strong bars (∼35%; e.g., Shaw 1987; Sellwood & Wilkinson 1993; Mulchaey & Regan 1997) and box-peanut bulges (20 – 45%; e.g., Jarvis 1986; Shaw 1987; de Souza & dos Anjos 1987; Dettmar & Barteldrees 1988, 1990; see Dwek et al. 1995, Kuijken 1996, and references therein for a discussion of our own Galaxy). Numerical simulations suggest that bars can indeed persist after the epoch of boxy bulge formation (e.g., Miller & Smith 1979; Pfenniger & Friedli 1991; but see Raha et al. 1990 for a counterexample). It is also possible that bar formation, dissolution, and bulge building recur in disk galaxies. Open-box simulations of disk galaxies with nearby companions show that tidal encounters can repeatedly form a bar throughout the life of a disk galaxy (e.g., Sellwood & Moore 1999).
### 5.3 Stellar Bars and Nuclear Activity
Stellar bars in NGC 3079 and NGC 4388 do not necessarily ensure efficient fueling of the nuclear starbursts/AGN. Our kinematic models neglect hydrodynamical effects and therefore cannot constrain the bar-induced mass inflow rates in these galaxies. Several factors are important in determining whether these bar-induced mass inflow rates suffice to power the observed nuclear activity. Possibly relevant parameters include the strength and length of the bar, the gas mass-fraction and morphological type of the host galaxy, the star formation efficiency, the age of the bar, and the existence of ILRs near the nucleus. In this section, we briefly review each of these factors in the context of NGC 3079 and NGC 4388.
#### 5.3.1 Length and Strength of the Bar
Surveys of barred galaxies find that those currently displaying the highest star formation activity have both strong and long bars (e.g., Martin 1995; Martinet & Friedli 1997). Here, we follow Martinet & Friedli and define strong and long bars as having deprojected bar axis ratios $`(b/a)_i\le 0.6`$ and relative lengths $`2L_i/D_{25}\ge 0.18`$ where $`L_i`$ is the deprojected bar length and $`D_{25}`$ is the galaxy diameter at 25 mag arcsec<sup>-2</sup>. The high inclinations of NGC 3079 and NGC 4388 prevent us from determining photometrically the lengths and axis ratios of these bars, but the results from our analysis of the velocity fields (Table 2) constrain these values. Assuming $`L_i\approx R_b`$ and using $`D_{25}`$ = 40 kpc for NGC 3079 and 27 kpc for NGC 4388 (de Vaucouleurs et al. 1991), we get relative lengths of ∼0.18 and ∼0.11 for NGC 3079 and NGC 4388, respectively. The bar in NGC 3079 therefore is a borderline case while that of NGC 4388 appears to be a short bar. The strengths of the bars may be estimated using the derived eccentricity (= $`b/a`$) of the gas orbits in the inner ($`R\lesssim R_b`$) portion of the bars. The bar in NGC 4388 would therefore be considered a strong bar while that in NGC 3079 is weak. Note, however, that using the eccentricity of the gas orbits rather than that of the stellar orbits probably overestimates the strength of the stellar bar by factors of 2 – 3 (e.g., Friedli & Benz 1993), so that even the bar in NGC 4388 may be relatively weak. The modest size and weakness of the stellar bars in both galaxies may prevent efficient fueling of their active nuclei.
#### 5.3.2 Gas Mass Fraction, Morphological Type, and Star Formation Efficiency
That not all galaxies with strong and long bars are forming stars furiously (Martinet & Friedli 1997) indicates that other parameters also affect the star formation rate in barred galaxies. The numerical simulations of Friedli & Benz (1995) suggest that the overall rate of star formation increases with gas mass-fraction of the galaxy. These authors found that a larger gas mass-fraction pushes the threshold for star formation to larger radii in the disk, thereby increasing the overall star formation rate. However, feedback between star formation and the release of mechanical energy from supernovae keeps the rate of star formation in the central regions relatively constant. This may explain why the correlation between gas mass-fraction and nuclear star-formation is more evident in early-type barred spirals, where star formation rates and feedback from supernovae are more modest (e.g., Hawarden et al. 1986; Devereux 1987; Dressel 1988; Arsenault 1989; Huang et al. 1996).
For NGC 3079, little can therefore be said about the importance of the gas mass-fraction in determining bar-induced fueling. However, it is clear that the nuclear starburst in this galaxy has the potential to be long-lived regardless of the mass inflow rate induced by the bar. Indeed, given the current star formation rate in the nucleus (∼10 $`\beta ^{-1}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ where $`\beta <1`$ is the fraction of the bolometric luminosity of NGC 3079 radiated by stars; VCBTFS) and the total mass of atomic and molecular gas observed in the galaxy core (Irwin & Seaquist 1991), star formation will deplete gas in the nuclear region of NGC 3079 in $`>`$ $`\beta `$ $`\times `$ 10<sup>9</sup> yrs. Bar-induced inflow would increase this time.
The situation in NGC 4388 is less clear because the nuclear activity appears to be powered by an AGN rather than a starburst. We are unaware of numerical simulations that address the effects of gas mass-fraction or morphological type on the level of AGN activity.
#### 5.3.3 Age of the Bar
Self-consistent evolutionary models of barred galaxies by Friedli & Benz (1995), Martinet & Friedli (1997) and Martin & Friedli (1997) suggest that both the total and nuclear star formation rates in galaxies with strong (weak) bars peak ∼1 (2) Gyr after the bar instability starts. The relative importance of nuclear star formation increases more than tenfold over that time span. According to strong (weak) bar simulations, young barred galaxies \[∼0.5 (1.0) Gyrs; Type A in the nomenclature of Martin & Friedli 1997\] are characterized by chains of bright HII regions near strong shocks along the bar with no star formation activity in the nucleus, while old barred galaxies \[∼0.8 (1.6) Gyrs; Type C\] show the opposite distribution. The discussion in §5.3.1 suggests that the time scales for the weak bar simulations are more likely to apply to the bars in NGC 3079 and NGC 4388. In principle, the star formation distribution in these bars can therefore constrain the age of the bars and determine if the bar is old enough to fuel the observed nuclear activity. Unfortunately, both NGC 3079 and NGC 4388 are highly inclined spirals and patchy dust obscuration affects the brightness of HII regions in these galaxies. Qualitatively, we find that the bar region in NGC 3079 presents a larger number of bright HII regions than that of NGC 4388, suggesting that the stellar bar in NGC 3079 is less evolved than the one in NGC 4388.
Perhaps it is more instructive to invert this argument and attempt to constrain the age of the bar by assuming that the nuclear activity in NGC 3079 and NGC 4388 is induced by the bar. Then, the age of the bar should equal the sum of the starburst or AGN lifetime and the delay between bar formation and the onset of nuclear activity. This situation may be particularly relevant to NGC 3079, where a nuclear starburst seems to power the kpc-scale windblown superbubble (VCBTFS). The dynamical time scale of the superbubble (∼10<sup>6</sup> yrs; VCBTFS) and starburst age (10<sup>7</sup> – 10<sup>8</sup> yrs) determined from the optical spectrum of the nucleus (VCBTFS) are both considerably shorter than the predicted delay between bar formation and the onset of the starburst (∼10<sup>9</sup> yrs). If the starburst in NGC 3079 was triggered by the bar, this would imply that the bar is ≳10<sup>9</sup> yrs old. The age of the bar equals or exceeds the timescale that we derived in §5.2 for the formation of the boxy bulge in NGC 3079.
The situation is more complex in NGC 4388, where the nuclear activity is almost certainly due to mass accretion onto a supermassive black hole rather than from a starburst. The ionized gas detected above the disk of NGC 4388 (VBCTM) may be used to constrain the age of the AGN in this galaxy. This extraplanar material appears to be outflowing from the nucleus with a characteristic dynamical time scale of ∼2 $`\times `$ 10<sup>7</sup> yrs. If no other outflows have occurred in NGC 4388, the dynamical time scale of the extraplanar material constrains the age of the AGN to a few $`\times `$ 10<sup>7</sup> yrs. Once again, this is much shorter than the delay between the epoch of bar formation and the onset of starburst activity predicted by numerical simulations. However, more relevant to NGC 4388 is the delay between the epoch of bar formation and the onset of AGN activity. This delay depends critically on poorly constrained factors including the star formation efficiency, the conversion efficiency of the kinetic energy injected by supernovae and stellar winds into gas motion (if both these efficiencies are large, less material will be available to fuel the AGN), and the time needed to transport material from the kpc-scale starburst down to the sub-pc scale of the AGN accretion disk. Simulations of AGN fueling by bars (e.g., Heller & Shlosman 1994) suggest that intense starburst activity always precedes or coincides with AGN activity. Our data can therefore only place a lower limit of ∼10<sup>9</sup> yrs on the age of the bar in NGC 4388. This is again consistent with the time scale that we derived for the formation of its boxy bulge.
#### 5.3.4 Inner Lindblad Resonances
In the self-consistent evolutionary models discussed in the previous section, the ILRs appear at the end of the simulations once sufficient mass has flowed inward. ILRs have important consequences for the fueling of the nuclear starburst and perhaps the AGN. ILRs correspond to regions of orbit crowding associated with higher gas concentrations and sometimes accompanied by shocks (e.g., Athanassoula 1992; Piner et al. 1995). In barred galaxies, a gas ring often forms near the ILR (e.g., Telesco, Dressel, & Wolstencroft 1993). Gas flows down the dust lane of the bar to the ILR, then most of it sprays back into the bar region at the point of contact between the nuclear ring and the dust lane (e.g., Binney et al. 1991; Piner et al. 1995; Regan et al. 1997).
A galaxy can have zero, one, or two ILRs depending on its rotation curve and therefore on the detailed mass distribution. For a weak bar rotating with angular velocity $`\mathrm{\Omega }_b`$ in a galaxy in which stars at a galactocentric radius $`R`$ (cylindrical coordinates) orbit with angular velocity $`\mathrm{\Omega }(R)=V(R)/R`$, an ILR occurs at a radius at which the bar potential perturbs, or drives, the stars at a frequency equal to their natural (or epicyclic) frequency $`\kappa (R)`$; i.e., $`\mathrm{\Omega }_b=\mathrm{\Omega }(R)-\kappa (R)/2`$ at the ILR, where (e.g., Binney & Tremaine 1987):
$`\kappa (R)^2={\displaystyle \frac{2V}{R}}({\displaystyle \frac{V}{R}}+{\displaystyle \frac{dV}{dR}}).`$ (5)
Our discussion in §5.3.1 suggests that this weak bar approximation is realistic for both galaxies. Unfortunately, we have no easy way to estimate $`\mathrm{\Omega }_b`$ in NGC 3079 and NGC 4388. We assume that the corotation radius, $`R_{\mathrm{CR}}`$ where $`\mathrm{\Omega }_b=\mathrm{\Omega }(R_{\mathrm{CR}})`$, coincides with the ends of the bar (i.e., $`R_{\mathrm{CR}}\approx R_b`$), and derive $`\mathrm{\Omega }_b`$ ≈ 60 km s<sup>-1</sup> kpc<sup>-1</sup> in NGC 3079 and $`\mathrm{\Omega }_b`$ ≈ 105 km s<sup>-1</sup> kpc<sup>-1</sup> in NGC 4388. Next, we smoothed the rotation curves to calculate $`\kappa (R)`$ from equation (5). For NGC 3079, we follow the procedure of Bland-Hawthorn, Freeman, & Quinn (1997) and fit the observed rotation using a dynamical model which comprises a Freeman exponential disk and a dark halo of the form
$$\rho _h=\rho _{}(1+r^2/r_a^2)^{-1}$$
(6)
with the rotation curve $`V(r)`$ given by
$$V^2=V_{\mathrm{\infty }}^2[1-(\frac{r_a}{r})\mathrm{tan}^{-1}(\frac{r}{r_a})].$$
(7)
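Equation (7) follows from equation (6) by integrating the halo density: $`M(<r)=4\pi \rho _0r_a^2[r-r_a\mathrm{tan}^{-1}(r/r_a)]`$, so $`V^2=GM/r=4\pi G\rho _0r_a^2[1-(r_a/r)\mathrm{tan}^{-1}(r/r_a)]`$, identifying $`V_{\mathrm{\infty }}^2=4\pi G\rho _0r_a^2`$ (here $`\rho _0`$ denotes the central halo density). The snippet below is a quick numerical consistency check of this relation in units $`G=\rho _0=r_a=1`$; it is an illustration, not part of the original analysis.

```python
import numpy as np

# Check that rho_h = rho_0 / (1 + r^2/r_a^2) implies
# V^2 = V_inf^2 [1 - (r_a/r) arctan(r/r_a)], with V_inf^2 = 4 pi G rho_0 r_a^2.
# Units: G = rho_0 = r_a = 1.
r_out = 3.0
s = np.linspace(1e-6, r_out, 200_001)
f = 4.0 * np.pi * s**2 / (1.0 + s**2)             # 4 pi rho(s) s^2
M = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))   # trapezoid rule: M(<r_out)
V2_numeric = M / r_out                            # V^2 = G M / r
V2_closed = 4.0 * np.pi * (1.0 - np.arctan(r_out) / r_out)
print(V2_numeric, V2_closed)                      # both ~7.33 in these units
```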
This fitting procedure was not used for NGC 4388 because the nuclear outflow affects the velocity field north-east of the nucleus. Instead we deprojected equation (3) using $`V_0`$ = 180 km s<sup>-1</sup> and $`\alpha `$ = 0.1 arcsec<sup>-1</sup>.
The results of our calculations are shown in the lower panels of Figures 9 and 10. The adopted rotation curves are shown in the top panels of these figures. We find that neither galaxy has an ILR. This result is consistent with the presence of large amounts of gas in the cores (∼1″) of these objects. The absence of an ILR may help the bar-induced mass inflows continue down to nuclear scales to fuel the AGN and/or nuclear starburst. Note, however, that the converse does not hold in general: results from numerical simulations suggest that ILRs do not necessarily prevent transport of material toward the nucleus (e.g., Friedli & Benz 1993). Consequently, the absence of an ILR in galactic nuclei is probably not a necessary condition for efficient fueling of AGN and nuclear starbursts.
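The ILR search for NGC 4388 can be sketched numerically as follows. This is an illustrative reconstruction, not the authors' code: it assumes a simplified deprojected curve $`V(R)=V_0[1-\mathrm{exp}(-R/R_0)]`$ with $`V_0`$ = 180 km s<sup>-1</sup> and $`R_0`$ = 10″ (from $`\alpha `$ = 0.1 arcsec<sup>-1</sup>), an assumed scale of ≈81 pc arcsec<sup>-1</sup>, and corotation at the bar end ($`R_b`$ = 1.5 kpc).

```python
import numpy as np

PC_PER_ARCSEC = 81.0             # assumed scale for NGC 4388 (~Virgo distance)
V0 = 180.0                       # km/s
R0 = 10.0 * PC_PER_ARCSEC / 1e3  # kpc; alpha = 0.1 arcsec^-1 -> scale 10 arcsec
R_b = 1.5                        # kpc, bar radius

def vrot(R):                     # deprojected rotation curve, km/s
    return V0 * (1.0 - np.exp(-R / R0))

def omega(R):                    # angular velocity, km/s/kpc
    return vrot(R) / R

def kappa(R):                    # epicyclic frequency from eq. (5)
    dVdR = (V0 / R0) * np.exp(-R / R0)
    return np.sqrt(2.0 * omega(R) * (omega(R) + dVdR))

Omega_b = omega(R_b)             # pattern speed, corotation at the bar end
R = np.linspace(0.01, 5.0, 1000)
curve = omega(R) - kappa(R) / 2.0
print(f"Omega_b = {Omega_b:.0f} km/s/kpc, max(Omega - kappa/2) = {curve.max():.0f}")
print("ILR present" if curve.max() > Omega_b else "no ILR")
```

With these assumptions $`\mathrm{\Omega }-\kappa /2`$ peaks far below $`\mathrm{\Omega }_b`$, so the ILR condition is never met, consistent with the conclusion in the text.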
## 6 Summary
In this paper, we have presented the first direct kinematic evidence for bar potentials in two active galaxies with box/peanut-shaped bulges. The complete two-dimensional coverage of our Fabry-Perot spectra has allowed us to detect the kinematic signature of the bar potential without an ILR or strong bar shocks. Our results provide observational support for bar evolutionary models in which boxy bulges represent a critical transitional phase in the evolution of stellar bars into spheroidal bulges. We compared the predictions of these models with our data to determine that the boxy bulges in NGC 3079 and NGC 4388 probably formed over a time scale ∼10<sup>9</sup> yrs. This short time scale, if typical of all barred galaxies, is difficult to reconcile with the high frequency of bars and boxy bulges in galaxies unless (1) bars survive the epoch of boxy bulge formation, or (2) bar and boxy bulge formation recur in the life of disk galaxies. Perhaps minor mergers reform a bar and feed new fuel to the host galaxy.
Stellar bars in NGC 3079 and NGC 4388 provide a mechanism for fueling the nuclear activity in these galaxies. However, we find no direct evidence that mass inflows induced by the bars suffice to power the nuclear activity in these objects. The bars in NGC 3079 and NGC 4388 are rather short and weak and may not be dynamically important enough to trigger the required mass inflows. Using the velocity fields and bar kinematics derived from the Fabry-Perot data, we find that both galaxies lack inner Lindblad resonances. This may explain the large quantities of gas found in the nuclei of these galaxies, and the fueling of these active galaxies down to a scale of 10 pc. However, our evaluation of the bar-induced fueling of the nuclear activity in these galaxies is severely limited by the current lack of simulations that attempt to relate the mass inflow rates outside ∼10 pc with the processes affecting gas dynamics closer in. Such simulations would be particularly relevant to NGC 4388 where the nuclear activity is probably due to an AGN (evidence for both a nuclear starburst and an AGN exists in NGC 3079). A proper treatment of the MHD effects associated with the AGN will be needed to address this important issue.
From an observational standpoint, more efforts should be made to search for small (∼10 pc) nested bars in nearby active galaxies and to map the velocity field of the gaseous and stellar components in this region to constrain the dynamical importance of bars in fueling nuclear activity. As recent examples of such work, we note the intriguing results on Seyfert galaxies obtained by Regan & Mulchaey (1999) and Martini & Pogge (1999). Using the WFPC2 and NICMOS instruments on HST, they found a paucity of nuclear bars in the galaxies studied but several “nuclear mini-spirals”. Follow-up kinematical data may establish whether such structures can fuel the AGN.
We thank S. Courteau, J. Holtzman, J. Huang, and R. B. Tully for acquiring and reducing the near-infrared images of NGC 3079 and NGC 4388 used in the present paper. We also thank the referee, R. Pogge, for constructive comments that helped improve the paper. SV is grateful for partial support of this research by a Cottrell Scholarship awarded by the Research Corporation, NASA/LTSA grant NAG 56547, NSF/CAREER grant AST-9874973, and Hubble fellowship HF-1039.01-92A awarded by the Space Telescope Science Institute which is operated by the AURA, Inc. for NASA under contract No. NAS5–26555. JBH acknowledges partial support from the Fullam award of the Dudley Observatory.
# Production of star-grazing and impacting planetesimals via orbital migration of extrasolar planets
## 1 Introduction
In the standard scenario for solar system formation, solid material in the disk forms rocky or icy bodies called planetesimals. These then accumulate in certain regions to form planets. The moderate detection rate of dusty disks with IRAS and ISO in the far infrared, particularly surrounding younger stars (Aumann & Good (1990), Becklin et al. (1999), Beckwith et al. (1999)), suggests that planet formation is often accompanied by the formation of belts (e.g., the Kuiper belt and possibly the Main asteroid belt). Recently, spectral features of crystalline silicate material similar to those observed in comets have also been detected in these disks, suggesting that there is asteroidal and cometary material in these disks (Malfait et al. (1998), Waelkens et al. (1996), Pantin et al. (1999)). The detection of planets orbiting nearby solar-type stars (e.g., Mayor & Queloz) and dusty disks surrounding some of these stars (e.g., Trilling & Brown (1998)) confirms the connection between rocky disk material and planets. Notably, stars with known extra-solar planets have enhanced metallicities (Gonzalez, Wallerstein, & Saar (1999); Gonzalez (1998)), establishing an as yet unexplained link between planet formation and stellar metallicities.
The small orbital semi-major axes of many of the newly discovered extrasolar planets ($`a<0.1`$ AU) are surprising. This has resulted in the proposal of two classes of planetary orbital migration mechanisms. One mechanism involves the transfer of angular momentum between a planet and a gaseous disk (e.g. Trilling et al. (1998); Lin, Bodenheimer & Richardson (1996)). The other focuses on resonant interactions between planetesimals and the planet and the resulting ejection of the planetesimals (in extrasolar systems Murray et al. (1998), and in our solar system Fernandez & Ip (1984) and Malhotra (1995)). The first mechanism suffers from a fine-tuning problem: only a small range of planet and disk masses would allow migration without destruction of the planet by the star (Trilling et al. (1998)). Metals from planets accreted by the star could account for the enhanced metallicities of the more massive stars with known planets. However, because stars with masses comparable to the sun have convective envelopes for nearly the entire time interval over which planets are expected to be accreted, incorporation of giant planets into the star should not be able to enhance the star's metallicity substantially (Laughlin & Adams (1997)).
The second mechanism involving ejection of planetesimals (Murray et al. (1998)) has some advantages over the first mechanism. Planetesimals affected by the inner resonances can be driven to extremely high eccentricities and so can impact the star (Beust & Morbidelli (1996); Gladman et al. (1997); Moons & Morbidelli; Wisdom (1985); Farinella et al. (1994); Migliorini et al. (1998); Ferraz-Mello & Klafke (1991)). This would happen at a later time ($`10^7`$ years; Murray et al. (1998)) than appropriate for the migration scenario involving a gaseous disk ($`10^6`$ years). Thus the addition of rocky or metallic material will happen when the stellar convective envelope is small, so that the metals will remain trapped in the convection zone rather than mixing into the star. In this way orbital migration via ejection of planetesimals would more naturally explain the enhanced metallicities of stars with massive planets. As pointed out by Gonzalez (1998), adding 20 $`M_{\oplus }`$ (earth masses) of asteroidal material to the convection zone of the star is sufficient to increase the metallicity of a solar-type star by $`\mathrm{\Delta }[\mathrm{Fe}/\mathrm{H}]\sim 0.1`$ dex. For a planet to migrate a significant fraction of its initial semi-major axis, roughly its own mass in planetesimals must be ejected from the system (Murray et al. (1998)). Since in the inner solar system this material is expected to be asteroidal or rocky, this could result in a significant fraction of a Jupiter mass planet ($`M_J=310M_{\oplus }`$) impacting and becoming incorporated into the star.
In this paper we concentrate on the mechanism for producing star-grazing planetesimals explored by Beust & Morbidelli (1996) to account for the transient absorption lines observed against beta Pictoris (e.g., Crawford et al. (1994); Lagrange et al. (1996)). In this context a star-grazing planetesimal approaches within 10 stellar radii of the star. Mean motion resonances (such as the 3:1 and 4:1) with one large, moderately eccentric planet can pump eccentricities to 1.0. In §2, using averaged Hamiltonians, we plot the range of planetesimal and planet eccentricities needed for a given resonance to produce a star-impacting body. However, the region of phase space that results in extremely high eccentricity orbits is not necessarily large, since many particles with semi-major axes containing the resonance will not librate or will not librate to high eccentricities (e.g., as shown in the contour plots of Yoshikawa (1990); Yoshikawa (1991)). As a planet migrates we expect many particles to have orbital elements such that they will not be caught in the active (or high eccentricity) part of the resonance. So in §3 we estimate via numerical integration the efficiency of these resonances in producing extremely high eccentricity particles. For a series of integrations we tabulate the numbers of particles which impact the star and those which eventually cross the Hill sphere of the planet and are ejected to large semi-major axes.
## 2 When can star impacting planetesimals be produced?
During the migration of a major planet, mean-motion resonances will be swept through the disk of planetesimals. Though secular resonances are also capable of driving particles to extremely high eccentricities (Levison, Duncan & Wetherill (1994)), they may not necessarily be swept through the disk. We also cannot necessarily assume that secular resonances are always strong in extra-solar systems (Beust & Morbidelli (1996)), particularly as the innermost planet becomes more distant from its neighboring planets. So we concentrate here on mean motion resonances with one major planet.
The maximum eccentricity reached by particles librating in a resonance is extremely sensitive to the eccentricity of the planet (Beust & Morbidelli (1996); Yoshikawa (1990); Moons & Morbidelli ). We expand on the work of Beust & Morbidelli (1996) to determine what range of planet eccentricities is required to pump particle eccentricities to 1. We created contour plots numerically from the Hamiltonian averaged over time (as in Beust & Morbidelli (1996) and Yoshikawa (1990)). For each resonance we then determined the minimum initial particle eccentricity needed for a particle to later reach the star ($`e=1`$). We estimated this minimum eccentricity (shown in Fig. 1) for a range of planet eccentricities, $`e_p`$. These contour plots depend only extremely weakly on the planet mass. We see in Fig. 1 that past a planet eccentricity of 0.3 the 3:1, 4:1, 5:1, 5:2 and 7:2 resonances are all capable of driving low eccentricity particles to extremely high eccentricities. The eccentricities of the extrasolar planets are not restricted to extremely low values (Marcy (1999)). This implies that resonances which are capable of causing star grazing or impacting planetesimals are likely to exist in almost all of these systems.
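As a quick aid to reading the resonance labels above, the nominal location of each interior mean motion resonance follows from Kepler's third law alone. The sketch below is ours, not the paper's; the function name and the choice of units (planet at $`a_p=1`$) are illustrative assumptions.

```python
def resonance_location(p, q, a_planet=1.0):
    """Nominal semi-major axis of the interior p:q mean motion resonance.

    A test particle completing p orbits for every q orbits of the planet
    (p > q) lies interior to the planet; Kepler's third law (P^2 ~ a^3)
    gives a_res = a_planet * (q / p)**(2/3).
    """
    return a_planet * (q / p) ** (2.0 / 3.0)

# Nominal locations of the resonances discussed in the text, planet at a_p = 1.
locations = {f"{p}:{q}": resonance_location(p, q)
             for p, q in [(3, 1), (4, 1), (5, 1), (5, 2), (7, 2)]}
```

For a planet at 1 AU this places, for example, the 3:1 resonance near 0.48 AU and the 4:1 near 0.40 AU, so a slowly migrating planet sweeps these resonances through the inner disk.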
## 3 Simulation of particles in mean motion resonances during orbital migration
To estimate the efficiency of production of high eccentricity orbits we numerically integrate the orbits of particles (using a conventional Bulirsch-Stoer numerical scheme) during the slow migration of a major planet. All particles are massless except for the star and one planet with an eccentricity $`ϵ_p`$, which remains constant throughout the integration. During the integration we force the semi-major axis of the planet to drift inwards at a rate given by the dimensionless parameter
$$D_a=\frac{da}{dt}\frac{P}{a}$$
(1)
for $`P`$ the period of the planet and $`a`$ its semi-major axis. $`D_a`$ is fixed during the integration, resulting in $`\frac{da}{dt}\propto 1/\sqrt{a}`$. Particles are placed in the plane of the planet’s orbit just within (by a few resonance widths) the semi-major axis of either the 3:1 or 4:1 resonance. For each particle the angle of perihelion and the mean anomaly were chosen randomly. Massless particles were integrated until they were driven to high eccentricity ($`ϵ>0.995`$) and so impacted the star, or crossed the Hill sphere radius of the planet and were ejected to semi-major axes larger than the planet’s. This took between a few times $`10^5`$ and $`10^6`$ periods, measured in units of the initial orbital period of the planet. In Table 1 we note the initial conditions, migration rates, planet masses and eccentricities (which remain fixed during the simulation), and final particle fates for a set of 10 particle integrations. In Table 2 we note the resonances operating on the particles in each simulation prior to impact or ejection.
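The forced drift prescription of Eq. (1) can be sketched in a few lines. This is a hedged illustration, not the paper's code: we assume units with $`GM_{}=1`$ so that Kepler's third law gives $`P=2\pi a^{3/2}`$, which makes the inward drift speed scale as $`a^{-1/2}`$ and hence speed up as the planet moves in.

```python
import math

def forced_drift(a, D_a):
    """da/dt implied by holding D_a = (da/dt) * P / a fixed.

    With the planet's period from Kepler's third law, P = 2*pi*a**1.5
    (units in which G*M_star = 1), the drift rate scales as a**-0.5.
    """
    P = 2.0 * math.pi * a ** 1.5
    return D_a * a / P

inward = -1e-6  # D_a < 0 for inward migration
# Quartering a doubles |da/dt|: inward migration accelerates.
ratio = forced_drift(0.25, inward) / forced_drift(1.0, inward)
```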
A sample plot showing eccentricities and semi-major axes for a run (denoted N8) is shown in Fig. 2. At almost all times particles are strongly affected by resonances. When a particle crosses the 3:1 or 4:1 resonance it may be trapped in a high eccentricity region of the resonance. Then the particle can be pumped to extremely high eccentricities and impact the star. We find that both the 3:1 and 4:1 resonances cause impacts. However if the particle does not remain trapped in the resonance it can later be caught in another resonance. For example, we observe that particles not removed by the 3:1 may later be caught in the 5:2 or 7:3 resonances, and particles not initially affected by the 4:1 may subsequently be caught in the 3:1, 7:2 or 8:3 resonances (see Table 2). If the particle is trapped or strongly affected by a resonance nearer to the planet (such as the 8:3 resonance) then it has a higher chance of being ejected than of hitting the star. In the slower migration rate simulations (N5, M5) we see that even minor resonances such as the 11:3, 10:3 and 11:4 cause jumps in the semi-major axis as the particle crosses the resonance. However only the 3:1 and 4:1 are strong enough (and with large enough regions in phase space) that particles are trapped in them for long periods of time. These resonances are responsible for the majority of impacts.
In Fig. 2a we see that particles trapped in the 3:1 and 4:1 resonances can make multiple close approaches to the star. During a close approach the surface of a planetesimal grazing the star will be sublimated. Thus we would predict that a migrating planet would cause continuing production of ‘falling evaporative bodies’, as proposed to explain the transient absorption lines observed against beta Pictoris and other stars (e.g., Beust & Morbidelli (1996); Crawford et al. (1994); Lagrange et al. (1996); Grady et al. (1996)). We see in our simulations that more than one resonance can cause star grazers. If star grazers are produced by more than one resonance then particles could approach the star from different angles with respect to the planet’s angle of perihelion. This might provide an alternative explanation for the occasional blue-shifted event on beta Pictoris (Crawford, Beust & Lagrange (1998)).
Even though the 3:1 and 4:1 resonances can pump eccentricities to 1.0, in every simulation (see Tab. 1) we find particles which pass through these resonances without being pumped to high eccentricities, and so are not removed from the system by an impact with the star. These particles can later be ejected by the planet. While the 3:1 and 4:1 resonances can reduce the surface density in a disk of planetesimals, they do not create a hole as they are swept through the disk. If the density of planetesimals is high enough, a planet migrating as a result of ejection of planetesimals can migrate to within its original (at formation) 3:1 or 4:1 mean motion resonances. This would allow a planet to migrate a substantial fraction of the planet’s semi-major axis via ejection of planetesimals.
Particles which impact the star lose all their angular momentum to the planet, which would reduce the planet’s eccentricity. During the time they are trapped in a resonance their semi-major axes decrease. This implies that the planet will gain energy. However the decrease in semi-major axis was typically less than $`30\%`$ of the particle’s initial semi-major axis, so the energy gained by the planet from trapped particles should be small compared to that lost from ejected particles.
We did not find that the fraction of impacts was strongly dependent on the planet migration rate, initial particle conditions, or planet eccentricity. However more particles should be integrated to verify this. We would have expected that slower migration rates, more massive planets, lower initial particle eccentricities, and higher planet eccentricities would result in an increase in the efficiency of trapping particles in resonances and so in producing impacts. However the number of resonances operating on each particle makes it difficult to predict the final states. For example, when the migration rate was fast or the planet eccentricity was lowered, we found that weaker resonances such as the 4:1 or 7:2 did not affect the particles much; however, the 3:1 was still strong enough to cause impacts.
### 3.1 Survival until impact
In our integrations we can estimate the timescale for the eccentricity to reach $`0.995`$. While some particles impact the star on a very short timescale (e.g., particle 1 in Fig. 2), others are slowly pumped to high eccentricities (e.g., particles 5, 6, 7, and 8 in Fig. 2). For the slower approaches the particle or planetesimal could make $`10^4`$ to $`10^5`$ close passages to the star before impact. When the migration rate was slower, particles typically experienced larger numbers of close passages before impact. We have estimated the depth of material lost from the surface of a rocky body during a free fall time at one solar radius from the Sun to be $`30`$ cm. If the planetesimal makes $`10^4`$ such close passages then a km-sized body will be completely evaporated by a solar type star. For particles making multiple close approaches, only large bodies ($`1`$ km or larger) will survive until impact. Notably the size distribution of asteroids subsequent to Gyr timescale collisional evolution (Davis et al. (1985); Greenberg et al. (1978)) is expected to be such that most of the integrated disk mass is contained in the largest bodies. When migration is relatively quick, this mechanism could be a way to increase the metallicity of the star, despite the fact that the lower mass bodies may not survive until impact. Smaller bodies which completely evaporate could manifest themselves as transient absorption features, a phenomenon which is observed on beta Pictoris and other stars (e.g., Crawford et al. (1994); Lagrange et al. (1996); Grady et al. (1996)).
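The survival estimate above reduces to a one-line back-of-envelope calculation. The sketch below uses only the 30 cm-per-passage figure quoted in the text; the function name and the assumption that the body stays roughly spherical are ours.

```python
def remaining_radius(r0_m, n_passages, depth_per_passage_m=0.30):
    """Radius left after n grazing passages, assuming (as in the text's
    estimate) that a fixed ~30 cm deep surface layer is ablated on each
    passage and the body stays roughly spherical."""
    return max(0.0, r0_m - n_passages * depth_per_passage_m)

# A 1 km body after 10^4 passages: the entire radius is ablated away.
km_left = remaining_radius(1.0e3, 10 ** 4)
# A 10 km body loses ~3 km of radius but survives until impact.
big_left = remaining_radius(1.0e4, 10 ** 4)
```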
We now consider whether large bodies are likely to fragment upon close approach. If the object is strengthless then it is likely to fragment at periapse only if the density of the object is lower than the mean density of the star (e.g., Asphaug & Benz (1996); Sridhar & Tremaine (1992)). The mean density of the sun is $`\rho \approx 1.4`$ g cm<sup>-3</sup>, so that on a solar type star all but the least dense asteroids should not fragment and so should survive until impact. On lower mass main sequence stars (which are denser), however, denser objects could be fragmented during close passages prior to impact. On higher mass stars, such as beta Pictoris, even cometary material will not be fragmented by the star during close passages.
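The density comparison behind this criterion is easy to make explicit. The sketch below is ours: the solar mass and radius are standard constants, and the criterion is just the strengthless-body condition quoted above, not a full tidal-disruption model.

```python
import math

def mean_density(mass_g, radius_cm):
    """Mean density in g/cm^3 of a star of the given mass and radius."""
    return mass_g / ((4.0 / 3.0) * math.pi * radius_cm ** 3)

def tidally_fragments(body_density, star_mean_density):
    """Strengthless-body criterion quoted in the text: fragmentation at
    periapse only if the body is less dense than the star's mean density."""
    return body_density < star_mean_density

M_SUN_G, R_SUN_CM = 1.989e33, 6.957e10
rho_sun = mean_density(M_SUN_G, R_SUN_CM)  # ~1.4 g/cm^3
# A typical rocky asteroid (~2-3 g/cm^3) is denser than the Sun's mean
# density and so should survive; a low-density icy body would not.
```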
## 4 Summary and Discussion
We have presented a series of numerical integrations of particles initially at low eccentricities which pass through mean motion resonances with a migrating, moderately eccentric major planet. We confirm that the 3:1 and 4:1 resonances can pump the particles’ eccentricities to 1.0 and so can cause particles trapped in them to impact the star or be evaporated by it. As a planet migrates through a disk of planetesimals we would expect continuing production of bodies undergoing close approaches to the star. This provides us with a possible observational test. A recent study finds that beta Pictoris may be quite young ($`2\times 10^7`$ years; Barrado y Navascues et al. (1999)). If orbital migration occurs commonly during this timescale then a multi-object (or multi-fiber) survey in young clusters should detect transient absorption features due to evaporating bodies similar to those seen in beta Pictoris and other stars.
Our integrations show that many particles which pass through these resonances will not be pumped to high eccentricities, and so will not be removed from the system by evaporation or by impact with the star. These particles can subsequently be ejected by the planet. This implies that a planet can migrate a significant fraction of its initial semi-major axis via ejection of planetesimals.
For the faster migration rates, we estimate that $`1`$ km sized rocky bodies will survive heating from a solar type star during multiple close passages and so can become incorporated into the convection zone of the star. Because we expect that most of the mass will be in the most massive bodies, this migration process may be capable of increasing the metallicity of the star. Planet migration should occur on a $`10^7`$ year timescale (Murray et al. (1998)) so we do not expect the star to be fully convective during migration. Metals dumped into the star should remain in the convection zone of the star. This scenario therefore offers a plausible explanation for the metallicity enhancements observed in stars with extrasolar planets (Gonzalez et al. (1999)).
To migrate a significant fraction of its semi-major axis the planet must eject on the order of its own mass in planetesimals (Murray et al. (1998)). If the material ejected is rocky then the original proto-stellar disk would have had $`30`$ times this mass in gas and volatiles. It is not inconceivable that this amount of material was left in and interior to the orbit of a Jupiter mass planet after formation. However planetesimals exterior to the planet, forced to high eccentricity by a secondary planet, may also be ejected by the planet and so cause its migration. Some fraction of these particles will also impact the star (e.g., as seen in simulations of short period comets, Levison & Duncan (1994)). This suggests another possible link between star grazers and impactors and orbital migration.
This work could not have been carried out without suggestions, discussions and correspondence from N. Murray, C. Pilachowski, R. Strom, M. Sykes, D. Davis, R. Greenberg, J. Raymond, D. Trilling, D. Garnett, B. Livingston, R. Kudriski, and J. Lunine. We also thank G. Rivlis and K. Ennico for donations of computer time and support. We acknowledge support from NASA project numbers NAG-53359 and NAG-54667.
# Quantum Probability from Decision Theory?
## I Introduction
In a recent paper, Deutsch attempts to derive the “probabilistic predictions of quantum theory” from the “non-probabilistic part of quantum theory” and what he views as the “non-probabilistic part of classical decision theory.” For Deutsch this means the following. The nonprobabilistic part of quantum theory is contained in the axioms that associate quantum states with rays in Hilbert space and observables with Hermitian operators; in particular, the eigenvalues of a Hermitian operator are the only possible results of a measurement of the corresponding observable, and if the quantum state is an eigenstate of the Hermitian operator, the eigenvalue is the certain result of a measurement of that observable. The relevant nonprobabilistic part of classical decision theory includes the assumption that a rational decision maker orders all his preferences transitively—that is, if he prefers $`A`$ to $`B`$ and $`B`$ to $`C`$, he must also prefer $`A`$ to $`C`$. From these assumptions, Deutsch seeks to derive, first, that quantum mechanics has a probabilistic interpretation and, second, that the quantum probability rule has the standard form of a squared inner product. Deutsch describes his result as follows:
> Thus we see that quantum theory permits what philosophy would hitherto have regarded as a formal impossibility, akin to ‘deriving an ought from an is’, namely deriving a probability statement from a factual statement. This could be called deriving a ‘tends to’ from a ‘does’.
We argue in this paper that Deutsch’s derivation fails to achieve both its goals. First, as we discuss in Sec. II, the standard nonprobabilistic axioms of classical decision theory, which include the assumption of (complete) transitive preferences, already ensure that the preferences can be ordered in terms of probabilities and utility functions . Second, as we detail in Sec. III, Deutsch’s derivation of the form of the quantum probability law is flawed because an ambiguity in his notation masks a hidden probabilistic assumption that is essential for the derivation.
Despite the failure of Deutsch’s derivation, we are sympathetic to the view that the meaning of probability in quantum mechanics is specified by its role in rational decision making. Indeed, we believe that this view can help illuminate the very nature of quantum theory . We believe, however, that the primary technical machinery underlying this view is already provided by Gleason’s theorem , an oft-neglected derivation of the quantum probability law. We review the theorem in Sec. IV. Gleason assumes that observables are described by Hermitian operators, supplementing that only by the assumption that the results of measurements cannot always be predicted with certainty and that the uncertainty is described by probabilities that are consistent with the Hilbert-space structure of the observables. From this he is able to derive both that the possible states are density operators and that the quantum probability law is the standard one. Because Gleason’s theorem gives both the state space of quantum mechanics and the probability rule, we believe it trumps all other derivations along these lines.
## II Probabilities and decision theory
Classical decision theory, formulated along the lines that Deutsch has in mind, envisions a rational decision maker, or agent, who is confronted with a choice among various games . Each game is described by a set of events labeled by an index $`j`$, which the agent believes will occur with probability $`p_j`$. The value the agent attaches to an event within a given game is quantified by its utility $`x_j`$. Decision theory seeks to capture the notion of rational decision making by positing that the agent decides among the games by choosing the one that has the largest expected utility,
$$\sum _jp_jx_j.$$
(1)
A simple consequence of this framework is that an agent can give his preferences among games a complete transitive ordering.
Deutsch extracts what he sees as the nonprobabilistic part of decision theory and applies it to quantum mechanics in the following way. A game is again described by events, now interpreted as the outcomes of a measurement of a Hermitian operator that has eigenstates $`|\varphi _j`$. The $`j`$th outcome has utility $`x_j`$. In place of the probabilities of classical decision theory, Deutsch substitutes the normalized quantum state of the system in question,
$$|\psi =\sum _j\lambda _j|\varphi _j.$$
(2)
Thus a quantum game, in Deutsch’s formulation, is characterized by a quantum state and utilities that depend on the outcome of a measurement performed on that state. As the final part of his formulation, Deutsch defines the value of a game—the central notion in his argument—as “the utility of a hypothetical payoff such that the player is indifferent between playing the game and receiving that payoff unconditionally.” Deutsch does not assume that the value of a game is an expected utility, for that is precisely the probabilistic aspect of classical decision theory he wants to exclude from his formulation. He does assume that the values are transitively ordered and that a rational decision maker decides among games by choosing the game with the largest value.
Deutsch describes this in the following way:
> On being offered the opportunity to play such a game at a given price, knowing $`|\psi `$, our player will respond somehow: he will either accept or refuse. His acceptance or refusal will follow a strategy which, given that he is rational, must be expressible in terms of transitive preferences and therefore in terms of a value $`𝒱[|\psi ]`$ for each possible game.
Notice that Deutsch denotes the value of a game without explicit reference to the utilities and the corresponding eigenstates, which partially define the game. We prefer a more explicit notation. First we define a Hermitian utility operator
$$\widehat{X}=\sum _jx_j|\varphi _j\varphi _j|.$$
(3)
We now can denote the value of a game more explicitly as $`𝒱(|\psi ;\widehat{X})`$, which includes both defining features of a game, the quantum state $`|\psi `$ and the utility operator $`\widehat{X}`$. Our notation serves its purpose in Sec. III, where it helps to ferret out a flaw in Deutsch’s derivation.
Before turning to such details of Deutsch’s argument, we consider a more fundamental issue. Deutsch’s attempt to derive the probabilistic interpretation of quantum mechanics from purely nonprobabilistic considerations must fail, because his assumption of complete transitive preferences is tantamount to assuming probabilities at the outset. The conventional understanding of preferences—making decisions in the face of uncertainty—already hints strongly that probabilities will be an essential tool in any decision theory. Indeed, this is the import of a fundamental result of the theory : if one assumes complete transitive preferences among games along with standard nonprobabilistic axioms, one can determine simultaneously utility functions and sets of probabilities, such that the agent’s behavior is described as maximizing expected utility. If the preferences among games are quantified by a value function $`𝒱`$, then for each game there exist probabilities $`p_j`$ and transformed utilities $`F(x_j)`$, where $`F`$ is a strictly increasing function, such that expected utility gives the same ordering:
$$𝒱(|\psi ;\widehat{X})=F^{-1}\left(\sum _jp_jF(x_j)\right).$$
(4)
The crucial point is that this intimate relationship between preferences and probabilities is purely classical, having nothing to do with quantum mechanics.
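The representation in Eq. (4) is easy to exercise numerically. The sketch below is a hedged illustration, not part of the paper's argument: the particular strictly increasing $`F`$ (here the exponential) is an arbitrary choice, and the point is only that any value function of this form stays between the extreme utilities and induces a transitive ordering on games.

```python
import math

def game_value(probs, utilities, F=math.exp, F_inv=math.log):
    """Value function of the form of Eq. (4): V = F^{-1}(sum_j p_j F(x_j)).

    F must be strictly increasing; F = exp here is only an illustrative
    choice, not something singled out by decision theory.
    """
    return F_inv(sum(p * F(x) for p, x in zip(probs, utilities)))

# Because F^{-1} is monotone, V always lies between the smallest and
# largest utility, and the ordering it induces on games is transitive.
v = game_value([0.5, 0.5], [0.0, 1.0])
```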
Although Deutsch’s argument fails to exclude a priori probabilistic considerations, it might nevertheless provide a derivation of the specific form of the quantum probability law. To assess this possibility, we turn now to Deutsch’s specific argument.
## III Examination of Deutsch’s derivation
In this section we examine the derivation of what Deutsch terms the “pivotal result” of his argument,
$$𝒱(\frac{1}{\sqrt{2}}\left(|\varphi _1+|\varphi _2\right);\widehat{X})=\frac{1}{2}(x_1+x_2);$$
(5)
that is, that the value of a game in which the quantum state is an equal linear combination of two eigenstates of the utility operator is the mean of the utilities. The derivation of the pivotal result is contained in Deutsch’s Eqs. (D7)–(D11) and related textual material. (Here and throughout we refer to equations in Deutsch’s paper by prefixing a D to the equation number.) We comment briefly on later steps in Deutsch’s argument at the end of this section.
Deutsch uses a notation for the value of a game that makes no explicit reference to the utility operator. Furthermore, he employs a notational convention, often used in physics, whereby an eigenvector of an operator—in this case the utility operator—is labeled by its eigenvalue—in this case the utility itself. This labeling can cause confusion when games involving different utility operators are under consideration, as in the argument examined in this section. The resulting ambiguity leads Deutsch to accidentally equate the value of two games whose value cannot be shown to be equal without some additional assumption. Identifying this hidden assumption is the goal of this section.
In deriving the pivotal result, Deutsch posits two properties of the quantum value function. The first, given in Eq. (D8), we call the displacement property. Written in our notation, this property becomes
$$𝒱(\sum _j\lambda _j|\varphi _j;\sum _j(x_j+k)|\varphi _j\varphi _j|)=k+𝒱(\sum _j\lambda _j|\varphi _j;\sum _jx_j|\varphi _j\varphi _j|).$$
(6)
Our notation makes clear that both sides of this equation refer to the same quantum state, but different utility operators. In contrast, Eq. (D8) is ambiguous. The left-hand side of Eq. (D8) refers to the state $`\sum _j\lambda _j|x_j+k`$, whereas the right-hand side refers to the state $`\sum _j\lambda _j|x_j`$; it is unclear whether the two sides refer to different quantum states or to a single state labeled according to two different utility operators. We adopt the latter interpretation as being the one most consistent with Deutsch’s discussion. The second property of value functions, Deutsch’s zero-sum property (D9), becomes in our notation,
$$𝒱(\sum _j\lambda _j|\varphi _j;\sum _j(-x_j)|\varphi _j\varphi _j|)=-𝒱(\sum _j\lambda _j|\varphi _j;\sum _jx_j|\varphi _j\varphi _j|).$$
(7)
Equation (D9) suffers from the same sort of ambiguity as Eq. (D8): it refers to two states, $`\sum _j\lambda _j|x_j`$ and $`\sum _j\lambda _j|{-x_j}`$. In Eq. (7) we again choose the interpretation that there is a single state, but two different utility operators. The zero-sum property is an axiom of Deutsch’s nonprobabilistic decision theory, and the displacement property follows from the principle of additivity, another axiom of his analysis.
The derivation of the pivotal result deals with a state $`|\psi `$ that is a superposition of two utility eigenstates:
$$|\psi =\lambda _1|\varphi _1+\lambda _2|\varphi _2.$$
(8)
When writing the displacement and zero-sum properties for such a state, we can omit the other eigenstates from the utility operator, since as Deutsch shows, the corresponding outcomes do not occur. To shorten the equations, we introduce the abbreviations
$$\widehat{\mathrm{\Pi }}_i=|\varphi _i\varphi _i|,\quad i=1,2.$$
(9)
We now proceed in our notation through the rest of the argument leading to Eq. (D11). We carry along arbitrary amplitudes $`\lambda _1`$ and $`\lambda _2`$, because this helps to illustrate the nature of the hidden assumption. The reasoning begins with the displacement property (D8), specialized to the case $`k=-x_1-x_2`$:
$$𝒱(|\psi ;x_1\widehat{\mathrm{\Pi }}_1+x_2\widehat{\mathrm{\Pi }}_2)-x_1-x_2=𝒱(|\psi ;-x_1\widehat{\mathrm{\Pi }}_2-x_2\widehat{\mathrm{\Pi }}_1).$$
(10)
Applying the zero-sum property (7) to the game on the right-hand side yields
$$𝒱(|\psi ;x_2\widehat{\mathrm{\Pi }}_1+x_1\widehat{\mathrm{\Pi }}_2)+𝒱(|\psi ;x_1\widehat{\mathrm{\Pi }}_1+x_2\widehat{\mathrm{\Pi }}_2)=x_1+x_2.$$
(11)
Deutsch uses this result in the case $`\lambda _1=\lambda _2=1/\sqrt{2}`$, where it becomes
$$𝒱(\frac{1}{\sqrt{2}}(|\varphi _1+|\varphi _2);x_2\widehat{\mathrm{\Pi }}_1+x_1\widehat{\mathrm{\Pi }}_2)+𝒱(\frac{1}{\sqrt{2}}(|\varphi _1+|\varphi _2);x_1\widehat{\mathrm{\Pi }}_1+x_2\widehat{\mathrm{\Pi }}_2)=x_1+x_2.$$
(12)
In Deutsch’s notation the values of the two games in this equation are denoted in the same way, so he assumes they are equal,
$$𝒱(\frac{1}{\sqrt{2}}(|\varphi _1+|\varphi _2);x_2\widehat{\mathrm{\Pi }}_1+x_1\widehat{\mathrm{\Pi }}_2)=𝒱(\frac{1}{\sqrt{2}}(|\varphi _1+|\varphi _2);x_1\widehat{\mathrm{\Pi }}_1+x_2\widehat{\mathrm{\Pi }}_2),$$
(13)
which leads immediately to the pivotal result (5).
Equation (13) is the hidden assumption in Deutsch’s argument. To see that it is required and that it involves introducing the notion of probabilities, consider the following rule for measurement outcomes: the result associated with eigenstate $`|\varphi _1`$ always occurs. This deterministic rule is perfectly legitimate at this point in the argument. In Eq. (12) it gives utility $`x_2`$ in the first game and utility $`x_1`$ in the second, thus satisfying the equation.
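This counterexample can be checked in a few lines. The sketch below is ours (the function name is hypothetical); it encodes the deterministic rule just described, under which a game's value is simply the utility attached to the outcome for $`|\varphi _1`$.

```python
def value_first_outcome_always(utilities):
    """Value of a game under the deterministic rule considered in the
    text: the result associated with |phi_1> always occurs, so the value
    is simply the utility attached to outcome 1."""
    return utilities[0]

x1, x2 = 3.0, 7.0
# The two games of Eq. (12): outcome 1 pays x2 in the first, x1 in the second.
total = value_first_outcome_always([x2, x1]) + value_first_outcome_always([x1, x2])
# Their sum is x1 + x2, satisfying Eq. (12), yet the two values differ,
# violating the hidden assumption (13).
```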
Another way to get at the import of Deutsch’s hidden assumption is to make a similar assumption for the case of arbitrary expansion coefficients, that is to assume
$$𝒱(\lambda _1|\varphi _1+\lambda _2|\varphi _2;x_2\widehat{\mathrm{\Pi }}_1+x_1\widehat{\mathrm{\Pi }}_2)=𝒱(\lambda _1|\varphi _1+\lambda _2|\varphi _2;x_1\widehat{\mathrm{\Pi }}_1+x_2\widehat{\mathrm{\Pi }}_2).$$
(14)
Both this assumption and the more specialized one embodied in Eq. (13) are equally well (or badly) justified at this stage of the argument. The reason is that as yet $`\lambda _1`$ and $`\lambda _2`$ are just numbers attached to the possible outcomes, having no a priori relation to probabilities. Substituting Eq. (14) into Eq. (11) gives
$$𝒱(\lambda _1|\varphi _1+\lambda _2|\varphi _2;x_1\widehat{\mathrm{\Pi }}_1+x_2\widehat{\mathrm{\Pi }}_2)=\frac{1}{2}(x_1+x_2),$$
(15)
which generalizes to the rule that the value of a game is the arithmetic mean of the utilities that have nonzero amplitude. This corresponds to the probability rule $`p_j=(\text{number of nonzero amplitudes})^{-1}`$. Notice that this probability rule is contextual in the sense of Gleason’s theorem (see discussion in Sec. IV).
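The counting rule just derived can be made concrete. The sketch below is ours: "nonzero" is taken literally for illustration, and the point is only that the rule ignores the magnitudes of the amplitudes, unlike the Born rule.

```python
def counting_rule_value(amplitudes, utilities):
    """Game value under the rule following Eq. (15): every outcome with a
    nonzero amplitude gets probability 1/(number of nonzero amplitudes),
    independent of the amplitude's magnitude (not the Born rule)."""
    supported = [x for lam, x in zip(amplitudes, utilities) if lam != 0]
    return sum(supported) / len(supported)

# Unequal nonzero amplitudes still give the plain arithmetic mean ...
v_unequal = counting_rule_value([0.6, 0.8], [1.0, 5.0])
# ... while a zero amplitude drops its outcome entirely.
v_zero = counting_rule_value([1.0, 0.0], [1.0, 5.0])
```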
We conclude that to derive the pivotal result (5), one must include Eq. (13) as an additional assumption. In our view, including this additional assumption is not just a minor addition to Deutsch’s list of assumptions, but rather a major conceptual shift. The assumption is akin to applying Laplace’s Principle of Insufficient Reason to a set of indistinguishable alternatives, an application that requires acknowledging a priori that amplitudes are related to probabilities. Once this acknowledgement is made, however, the pivotal result (5) is a simple consequence of classical decision theory, as can be seen in the following way. As discussed in Sec. II, the existence of a numerical value $`𝒱(|\psi ;\widehat{X})`$ for each game, together with standard nonprobabilistic axioms of decision theory, entails that there exist probabilities $`p_1`$ and $`p_2`$ such that
$$𝒱(\frac{1}{\sqrt{2}}(|\varphi _1+|\varphi _2);x_1\widehat{\mathrm{\Pi }}_1+x_2\widehat{\mathrm{\Pi }}_2)=F^{-1}\left(p_1F(x_1)+p_2F(x_2)\right),$$
(16)
where $`F`$ is a strictly increasing function. The hidden assumption (13) then takes the form
$$p_1F(x_2)+p_2F(x_1)=p_1F(x_1)+p_2F(x_2).$$
(17)
If this is to be true for arbitrary $`x_1`$ and $`x_2`$ (or for any $`x_1\ne x_2`$), it follows that $`p_1=p_2=1/2`$. Thus in the context of classical decision theory, the assumption (13) is equivalent to applying the Principle of Insufficient Reason to the case of equal amplitudes.
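The last step is one line of algebra, easily checked numerically. In the sketch below (names ours), the left-minus-right side of Eq. (17) is written with $`p_2=1-p_1`$; it factors as $`(p_1-p_2)(F(x_2)-F(x_1))`$, which vanishes for all distinct utilities only at $`p_1=1/2`$.

```python
def eq17_residual(p1, F_x1, F_x2):
    """Left minus right side of Eq. (17), with p2 = 1 - p1.

    Algebraically this is (p1 - p2) * (F_x2 - F_x1), so demanding that it
    vanish for every pair of distinct utilities forces p1 = p2 = 1/2.
    """
    p2 = 1.0 - p1
    return (p1 * F_x2 + p2 * F_x1) - (p1 * F_x1 + p2 * F_x2)

r_half = eq17_residual(0.5, 2.0, 9.0)    # vanishes at p1 = 1/2
r_biased = eq17_residual(0.7, 2.0, 9.0)  # nonzero for any biased p1
```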
It is difficult to assess the validity of Deutsch’s argument once one gets past the derivation of the pivotal result. This is mainly because the remainder of the argument repeatedly invokes the principle of substitutability. The difficulty is that this principle—that the value of a game is unchanged when a subgame is replaced by another subgame of equal value—is never given a precise mathematical formulation in the quantum context. In any case, the remainder of the argument can be simplified once one realizes that the Principle of Insufficient Reason is an essential ingredient, for then one gets immediately that for an equal superposition of $`n`$ eigenstates, the probability of each outcome is $`1/n`$.
The vagueness of the principle of substitutability has an important consequence. We believe that the probability rule following Eq. (15) satisfies all of Deutsch’s assumptions, including a suitably defined principle of substitutability. If it does, then it shows that no amount of cleverness in using Deutsch’s assumptions can ever lead uniquely to the standard quantum rule for probabilities. The fly in the ointment is that without a precise formulation of the principle of substitutability, it is not possible to tell whether this rule satisfies it.
## IV Conclusion
We have seen that if one assumes the nonprobabilistic part of classical decision theory, then one is effectively introducing probabilities at the same time. Indeed, once one realizes that quantum theory deals with uncertain outcomes, one is forced to introduce probabilities, as they provide the only language for quantifying uncertainty . From this point of view, the most powerful and compelling derivation of the quantum probability rule is Gleason’s theorem.
> Gleason’s theorem: Assume there is a function $`f`$ from the one-dimensional projectors acting on a Hilbert space of dimension greater than 2 to the unit interval, with the property that for each orthonormal basis $`\{|\psi _k\}`$,
>
> $$\sum _kf\left(|\psi _k\psi _k|\right)=1.$$
> (18)
> Then there exists a density operator $`\widehat{\rho }`$ such that
>
> $$f\left(|\psi \psi |\right)=\psi |\widehat{\rho }|\psi .$$
> (19)
It is worthwhile to ponder the meaning of this theorem. It assumes the Hilbert-space structure of observables—that is, that each orthonormal basis corresponds to the mutually exclusive results of a measurement of some observable. It sets as its task to derive the probabilities for the inevitably uncertain measurement outcomes. The only further ingredient required is that the probability for obtaining the result corresponding to a normalized vector $`|\psi `$ depends only on $`|\psi `$ itself, not on the other vectors in the orthonormal basis defining a particular measurement. This important assumption, which might be called the “noncontextuality” of the probabilities, means that the probabilities are consistent with the Hilbert-space structure of the observables. With these assumptions the probabilities for all measurements can be derived from a density operator $`\widehat{\rho }`$ using the standard quantum probability rule. Remarkably this conclusion does not rely on any assumption about the continuity or differentiability of $`f`$; the only essential property of $`f`$ is that it be bounded.
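The easy converse direction of the theorem, that probabilities of the form of Eq. (19) do satisfy the normalization (18) on every orthonormal basis, can be checked directly. The sketch below is ours and uses a real-valued density matrix for simplicity; function names and the random construction are illustrative assumptions.

```python
import random

def random_density_matrix(d, seed=0):
    """A random real d x d density matrix: rho = A A^T / tr(A A^T),
    which is symmetric, positive semidefinite, and has unit trace."""
    rng = random.Random(seed)
    A = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(d)]
    M = [[sum(A[i][k] * A[j][k] for k in range(d)) for j in range(d)]
         for i in range(d)]
    tr = sum(M[i][i] for i in range(d))
    return [[M[i][j] / tr for j in range(d)] for i in range(d)]

def born_probability(rho, psi):
    """f(|psi><psi|) = <psi|rho|psi> for a real unit vector psi."""
    d = len(psi)
    return sum(psi[i] * rho[i][j] * psi[j] for i in range(d) for j in range(d))

rho = random_density_matrix(3)
basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
total = sum(born_probability(rho, e) for e in basis)  # = tr(rho) = 1
```

The same normalization holds for any rotated orthonormal basis, which is exactly the noncontextuality constraint that Gleason's theorem takes as input.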
By assuming that measurements are described by probabilities that are consistent with the Hilbert-space structure of the observables, Gleason’s theorem derives in one shot the state-space structure of quantum mechanics and the probability rule. It is hard to imagine a cleaner derivation of the probability rule than this.
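To make the content of the theorem concrete, the converse direction is easy to check numerically: for any density operator $`\widehat{\rho }`$ and any orthonormal basis, the values assigned by Eq. (19) are nonnegative and obey the sum rule of Eq. (18). The sketch below (Python with NumPy; the random $`\widehat{\rho }`$ and random basis are illustrative inputs, not taken from the text) does exactly that:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # Hilbert-space dimension (> 2, as the theorem requires)

# Random density operator: rho = A A† / Tr(A A†) is positive with unit trace
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Random orthonormal basis from the QR decomposition of a random complex matrix
Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

# f(|psi_k><psi_k|) = <psi_k| rho |psi_k> for each basis vector
probs = np.array([(Q[:, k].conj() @ rho @ Q[:, k]).real for k in range(d)])

assert np.all(probs >= -1e-12)          # probabilities are nonnegative
assert abs(probs.sum() - 1) < 1e-12     # and sum to 1, Eq. (18)
```

The probabilities sum to $`\mathrm{Tr}\widehat{\rho }=1`$ no matter which basis is drawn, which is precisely the noncontextuality property discussed above.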
###### Acknowledgements.
This work was supported in part by the U.S. Office of Naval Research (Grant No. N00014-1-93-0116) and the U.S. National Science Foundation (Grant No. PHY-9722614). H.B. is partially supported by the Institute for Scientific Interchange Foundation (I.S.I.) and Elsag-Bailey, and C.A.F. acknowledges the support of the Lee A. DuBridge Prize Postdoctoral Fellowship at Caltech. C.M.C., C.A.F., and R.S. thank the Isaac Newton Institute for its hospitality, H.B. thanks the I.S.I. for its hospitality, and J.F. acknowledges the hospitality of Lawrence Berkeley National Laboratory.
# Glassy transition in a disordered model for the RNA secondary structure
## Abstract
We numerically study a disordered model for the RNA secondary structure and we find that it undergoes a phase transition, with a breaking of the replica symmetry in the low temperature region (like in spin glasses). Our results are based on the exact evaluation of the partition function.
The folded structure of biopolymers, like RNA and proteins, is crucial for understanding the biological functionality of these molecules, and its characterization still remains a challenging problem in statistical mechanics and theoretical biology . The folding problem usually consists in understanding if and how a particular biomolecule (maybe one selected by evolution and present now in nature) folds into its native conformation. In this Letter we are interested in the characterization of the most generic (i.e. random) RNA molecules. Even if real RNAs are not completely random, they present a very large variability in their sequences and no strong correlations among their bases. The interest in studying the limiting and somehow unphysical case of truly random sequences arises in order to answer the following questions. Is the folding transition, which forces real biomolecules into their functional shapes, characteristic only of those sequences selected by evolution? Do random sequences show some phase transition too? We answer both questions affirmatively, showing that the transition depends more on the geometrical constraints and on the spread of the interaction energies than on the specific sequence. However, in the random case the transition is of a glassy type and the low-temperature phase is not dominated by a single native state. Our results may be very useful in order to better understand what could happen in a prebiotic world mainly made of random RNA sequences . Such a transition (partially found only in a very simplified model of proteins ) was suggested in previous studies of the RNA folding .
In this Letter we first study the thermodynamical properties of random RNAs, finding some hints of the existence of a glassy transition. Clear evidence for such a transition is presented in the last part of the paper and has been obtained thanks to the typical tools of the statistical mechanics of disordered systems: the spin glass susceptibility and a related parameter (see Fig. 3). The connection with complex systems is to be expected: the model has both disorder and frustration.
Generally speaking a classification among biopolymers includes a hierarchy of structures and in principle a complete description must include all these levels. RNA from this point of view is supposed to be simpler than DNA or proteins since its secondary structure seems to capture the essential features of the thermodynamics of the molecule. RNA molecules are linear chains consisting of a sequence of four different bases: adenine ($`A`$), cytosine ($`C`$), guanine ($`G`$) and uracil ($`U`$). The four bases are related by complementarity relations: $`CG`$ and $`AU`$ form stable base pairs with the formation of hydrogen bonds and are also known as Watson-Crick base pairs.
The secondary structure of RNA is the set of base pairs that occur in its three-dimensional structure. Let us define a sequence as $`\{r_1,r_2,\ldots ,r_n\}`$, $`r_i`$ being the $`i^{th}`$ base and $`r_i\in \{A,C,G,U\}`$. A secondary structure on such a sequence is now defined as a set $`𝒮`$ of $`(i,j)`$ pairs (with the convention that $`1\le i<j\le n`$) according to the following rules:
a) $`j-i\ge 4`$: this restriction permits flexibility of the chain in its three-dimensional arrangement.
b) Two different base pairs $`(i,j),(i^{\prime },j^{\prime })\in 𝒮`$ can coexist if and only if (assuming with no loss of generality that $`i<i^{\prime }`$):
$`i<j<i^{\prime }<j^{\prime }`$: the pair $`(i,j)`$ precedes $`(i^{\prime },j^{\prime })`$;
$`i<i^{\prime }<j^{\prime }<j`$: the pair $`(i,j)`$ includes $`(i^{\prime },j^{\prime })`$.
Condition b) avoids the formation of pseudo-knots on the structure and the resulting structure can be drawn on a plane. In real RNA structures it is known that pseudo-knots occur but are rare and they can be excluded as a first approximation .
The energy of a structure is simply defined as $`H[𝒮]=\sum _{(i,j)\in 𝒮}e(r_i,r_j)`$. Other phenomenological parameters (including stacking energies and loop penalties) could be considered in order to take into account the whole complexity of the energy function .
In our approach we make a drastic approximation to the original model in order to improve its tractability from both the numerical and the analytical point of view. As a first step we consider sequences of only two symbols $`(A,B)`$, which appear with equal probabilities, and we assume that only two kinds of base pairs occur: $`AA`$ and $`BB`$ pairs with energy $`-1`$ (in arbitrary units); $`AB`$ and $`BA`$ pairs with energy $`-2`$. It is reasonable to assume that such a reduction of symbols will not affect the thermodynamical class of criticality of the model (this claim is supported by numerical results we have obtained with a 4-letter code and Watson-Crick base pairs). We did not remove the constraint which forbids links over short distances, but we relaxed it to: $`j-i\ge 2`$. We think that this topological constraint must be kept in order not to change drastically the entropy of the model and then its thermodynamical behavior. In this model disorder (encoded in the sequence) and frustration (induced by the planarity condition on $`𝒮`$) are clearly distinct. We hope this could make the model analytically more manageable.
The model can be formally considered as one-dimensional with long range interactions: the disorder gives rise to different interaction strengths, all with the same sign (here the disorder does not induce frustration), while the planarity condition makes long-distance links unlikely. We have numerically estimated that the probability of having a link between two bases a distance $`r`$ apart decays roughly like $`r^{-3/2}`$.
The planar structure of the configurations and the simple energy function chosen allow us to write down a recursion equation for the partition function of the subsequence contained inside the base interval $`(i,j)`$:
$$Z_{i,j}=Z_{i+1,j}+\underset{k=i+1}{\overset{j}{\sum }}Z_{i+1,k-1}e^{-\beta e(i,k)}Z_{k+1,j},$$
(1)
with $`Z_{i,i}=Z_{i,i-1}=1`$ for all $`i`$. Such a recursion relation is particularly effective since the time needed for the computation of $`Z_{1,L}`$ scales as $`O(L^3)`$. With a slight modification of the algorithm it is also possible to include similar recursions for the internal energy $`U=\langle H[𝒮]\rangle `$ and its second moment $`U^{(2)}=\langle H^2[𝒮]\rangle `$, where $`\langle \cdot \rangle `$ denotes the usual average over the Gibbs-Boltzmann distribution. At this level all the observables actually depend on the sequence over which they have been calculated and, if we want to gain information on the universality class of the model, we have to average them over the random realizations of the sequence.
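The recursion (1) translates directly into an $`O(L^3)`$ dynamic program. The sketch below (Python; the pair energies are taken as attractive, $`-1`$ for $`AA`$/$`BB`$ and $`-2`$ for $`AB`$/$`BA`$, and the short-distance constraint $`j-i\ge 2`$ is enforced in the inner loop) computes the exact partition function of a given sequence:

```python
import math

def partition_function(seq, beta, min_dist=2):
    """Exact partition function Z_{1,L} of the 2-letter model via the
    recursion of Eq. (1).  seq is a string over {'A','B'}; assumed pair
    energies: AA/BB -> -1, AB/BA -> -2.  Runs in O(L^3) time."""
    def e(a, b):
        return -1.0 if a == b else -2.0

    L = len(seq)
    # Z[i][j] on 0-indexed intervals; boundary condition Z_{i,i} = Z_{i,i-1} = 1
    Z = [[1.0] * (L + 2) for _ in range(L + 2)]
    for span in range(1, L):                   # fill intervals of growing length
        for i in range(L - span):
            j = i + span
            z = Z[i + 1][j]                    # base i left unpaired
            for k in range(i + min_dist, j + 1):   # base i paired with base k
                z += Z[i + 1][k - 1] * math.exp(-beta * e(seq[i], seq[k])) * Z[k + 1][j]
            Z[i][j] = z
    return Z[0][L - 1]
```

At $`\beta =0`$ every structure has weight 1, so $`Z`$ simply counts the secondary structures allowed by the constraints; a 4-base sequence, for instance, admits the empty structure plus the three pairs $`(1,3)`$, $`(1,4)`$ and $`(2,4)`$, giving $`Z=4`$.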
In Fig. 1 we show the specific heat (averaged over the disorder) for sizes ranging from $`L=128`$ to $`L=1024`$. We note a very slow increase of the peak height with the size, which seems not to diverge. There is no hint of a finite jump in $`C(T)`$. This could be compared with the result by Bundschuh and Hwa who found a finite jump in the specific heat (note however that their model has a unique ground state, which dominates the frozen phase). It is important to point out that in the temperature region between $`T=0.15`$ and $`T=0.2`$ the curves slightly cross one another and as a consequence the decrease of $`C(T)`$ becomes steeper for larger sizes. One of the main effects of the disorder is that the location on the temperature axis of the critical region becomes sample-dependent. A measure of the critical region width can be obtained from the sample-to-sample fluctuation of the temperature where the specific heat has a peak ($`\mathrm{\Delta }T_p`$). We find that $`\mathrm{\Delta }T_p\sim L^{-\omega }`$, with $`\omega =0.26`$. If we assume that these fluctuations are induced by the presence of a nearby transition, we obtain a value $`\nu =\omega ^{-1}=3.9(1)`$.
Since the model is one-dimensional, $`\alpha =2-d\nu =-1.9(1)`$, and then the second derivative of the specific heat with respect to the temperature should display a very slow divergence or a finite jump. In fact, in the lower inset of Fig. 1, it can be seen that the argument is fully supported by the data, which show the typical finite size behavior of a discontinuity. The clear crossing point of the data around $`T=0.2`$ is supposed to be a signature of non-analyticities in the thermodynamical potential. We note that such a point is located well below the peak temperature. This is a common feature of many disordered systems (e.g. spin glasses). Near this temperature also the entropy of the model has a crossing point, which signals a rapid shrinking of the available phase space.
Moreover the model has a finite zero-temperature entropy (see upper inset in Fig. 1). The zero-temperature results have been obtained via an exact enumeration of all the ground state structures (GSS) for any given sequence. The number of GSS (i.e. the degeneracy) strongly depends on the sequence: for example, studying thousands of different sequences of length $`L=256`$, we have found sequences with degeneracies ranging from 1 to $`𝒪(10^7)`$. In the upper inset of Fig. 1 we show the zero-temperature entropy, defined as $`S(T=0)=\mathrm{log}(𝒩)/L`$, where $`𝒩`$ is the GSS degeneracy and $`L`$ is the sequence length, as a function of $`L`$. The line is the power law extrapolation, which tends to $`S(T=0)=0.0255(8)`$ .
Since the model turns out to be highly degenerate in the low-temperature phase, the natural question is how these GSS are organized. It is quite obvious that a very different physical behavior may appear in a model whose GSS are all very similar (like an ordered or “ferromagnetic” behavior) compared to a model whose GSS are sparse over the whole configurational space. A more quantitative analysis can be achieved by introducing the notion of distance between structures and a classification based on these distances. To quantify the relative distance between two structures, we have used the overlap, which is defined as
$$q[𝒮,𝒮^{\prime }]=\frac{1}{L}\underset{i<j}{\sum }l_{ij}^{(𝒮)}l_{ij}^{(𝒮^{\prime })},$$
(2)
where the variable $`l_{ij}^{(𝒮)}`$ ($`l_{ij}^{(𝒮^{\prime })}`$) takes value 1 if sites $`i`$ and $`j`$ are connected in the structure $`𝒮`$ ($`𝒮^{\prime }`$) and 0 otherwise. By definition the overlap takes values in the interval $`[0,1]`$. For any given disorder realization (i.e. sequence) we can define the zero-temperature probability distribution function (pdf) of the overlaps as
$$P_{}(q)=\underset{𝒮,𝒮^{\prime }\in \mathrm{\Gamma }_{}}{\sum }\delta (q-q[𝒮,𝒮^{\prime }]),$$
(3)
where $`\mathrm{\Gamma }_{}`$ is the set of GSS. This definition can be easily generalized to any temperature by summing over all the structures and weighting each term with the Gibbs-Boltzmann factors of $`𝒮`$ and $`𝒮^{\prime }`$. The usual classification of disordered systems is based upon the average pdf of the overlaps, the so-called $`P(q)\equiv [P_{}(q)]`$, the average being taken over the disorder distribution function.
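The overlap of Eq. (2) and the per-sample histogram of Eq. (3) are straightforward to compute once a structure is stored as its set of links. A minimal sketch (Python; the set-of-pairs encoding is an assumption of convenience, not prescribed by the text):

```python
from itertools import combinations

def overlap(S1, S2, L):
    """Eq. (2): the l_ij are 0/1 indicators, so the double sum over i < j
    just counts the links common to the two structures, divided by L."""
    return len(set(S1) & set(S2)) / L

def overlap_histogram_samples(gss, L):
    """All pairwise overlaps among a list of ground state structures;
    a histogram of these values is the per-sample P(q) of Eq. (3)."""
    return [overlap(S1, S2, L) for S1, S2 in combinations(gss, 2)]
```

A broad histogram of these pairwise overlaps signals heterogeneous GSS; a narrow one, nearly identical structures.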
We have calculated the $`P(q)`$ at different temperatures, $`T[0,0.4]`$. While at $`T=0`$ we summed over the whole sets $`\mathrm{\Gamma }_{}`$, at finite temperatures we performed a Monte Carlo sampling of the structures in the spirit of Higgs .
In Fig. 2 the averaged $`P(q)`$ are shown. The first striking evidence is that, decreasing the temperature, the shape of the $`P(q)`$ changes abruptly from a narrow peak in the low-$`q`$ region to a broader one which extends over almost the whole allowed support. In the insets we present the size dependence of $`P(q)`$ for the highest and lowest temperatures considered. For the $`T=0.4`$ case, we are highly confident that the thermodynamical limit would be a delta function (the width of the distribution goes to zero as $`\mathrm{\Delta }q\sim L^{-1/2}`$). For the $`T=0`$ case, the asymptotic shape is much more difficult to extrapolate, since the width of the $`P(q)`$ shrinks only with a small power of $`L`$ (as in ) and, eventually, we cannot exclude that it tends to a finite value, implying a breaking of the replica symmetry.
While in Fig. 2 the averaged $`P(q)`$ gives us information about the typical pdf of the overlaps, we can get some hints about the origin of the $`P(q)`$ broadness in the low-temperature phase by analyzing directly the $`P_{}(q)`$ of each sequence. If all the GSS of a given sample are very similar, its $`P_{}(q)`$ will be non-zero only in a narrow $`q`$-range not too far from the upper bound $`q=1`$. On the other hand, if the GSS are very heterogeneous, their mutual overlaps will cover a large $`q`$-range.
The great majority of the sequences shows a very broad $`P_{}(q)`$, signaling a strong heterogeneity of the GSS. Moreover the shape of the pdf completely changes from sequence to sequence (this property is called non-self-averageness in spin glass jargon ). Nevertheless some patterns can be easily recognized: while single-peak shapes are mostly associated with low-degeneracy sequences, highly structured ones seem to be uncorrelated with the degeneracy, and they are responsible for the $`P(q)`$ broadness. Among the latter the double-peak shape dominates, especially for the sequences with higher entropy: the higher-$`q`$ peak gives information about the typical distance between two structures in the same state , while the lower-$`q`$ one can be associated with the emergence of a backbone , that is, the set of persistent links common to all the GSS (already found in ). The position of this second peak strongly fluctuates from sample to sample, giving rise to the long tail in the $`P(q)`$, as in spin glass models in an external field.
In order to understand whether a true transition happens in this model, we have measured the order parameter introduced in
$$A=\frac{[\chi _{}^2][\chi _{}]^2}{[\chi _{}]^2},$$
(4)
where $`\chi _{}=L(\mathrm{\Delta }q_{})^2`$, $`\mathrm{\Delta }q_{}`$ being the width of $`P_{}(q)`$. The $`A`$ parameter measures how much the $`P_{}(q)`$ changes from sample to sample. The crossing point of the different curves in Fig. 3 signals the existence of a low-temperature spin glass phase, where the $`P_{}(q)`$ become non-self-averaging (analogous results have been obtained with the 4-letter model). In this phase the RNA is “folded”, that is, the number of links is nearly the maximum allowed.
The critical temperature of this transition seems to be located between $`T=0.1`$ and $`T=0.15`$. We have determined the best estimates for $`T_c`$ and for the critical exponent $`\eta `$ by requiring the best collapse of the susceptibility $`\chi \equiv [\chi _{}]`$ data, scaled assuming the usual finite-size formula $`\chi =L^{2-\eta }f(L^{1/\nu }(T-T_c))`$ (see inset of Fig. 3). We obtain the values $`T_c\simeq 0.13`$ and $`\eta \simeq 1.41`$. We stress that we also tried to collapse the data fixing $`T_c=0`$, but the result was very poor.
This critical temperature seems to be below the one we found from the study of thermodynamical quantities. However, given the high value of $`\nu `$, the critical region should shrink as $`L^{-1/\nu }`$, and then the whole region between $`T=0.1`$ and $`T=0.3`$ is critical, as suggested by the wide separation of the two peaks in $`\partial _T^2C`$ (lower inset of Fig. 1).
We have presented strong numerical evidence for a phase transition in a random model for the RNA secondary structure. It is very important to stress that the thermodynamical limit is not so interesting for biological RNAs, which are at most thousands of bases long. As a consequence our sizes are in principle directly comparable with a large number of biological molecules. Our findings about the broadness of the $`P(q)`$ could suggest the existence of zero-energy fluctuations of the order of the volume, which is a well known behavior in spin glasses and disordered systems. In , for example, it has been found that the matching problem (which is disordered and frustrated) has low-energy excitations of order $`\sqrt{L}`$. These excitations become irrelevant in the thermodynamical limit, but they are a key ingredient in order to correctly describe finite systems. In the low temperature region of our model (from $`T=0.13`$ down to $`T=0`$) $`\chi \sim L^{0.6}`$ and it has strong fluctuations from sample to sample. This situation can be described by an effective breaking of the replica symmetry with a strength which goes to zero as $`L^{-0.4}`$, in accordance with the slow shrinking of $`P(q)`$ at $`T=0`$. Incidentally we note that in all this temperature region the critical exponent of $`\chi `$ is the same, as suggested by the scaling plot in the inset of Fig. 3. Moreover we have also measured the $`G`$ cumulant defined in and we have verified that it goes to the value $`\frac{1}{3}`$ as the temperature goes to zero, consistently with a replica symmetry breaking scenario.
In conclusion, we have found a glassy transition in a simplified random model for the RNA secondary structure. This transition corresponds to the breaking of the configurational space into many disconnected regions (ergodicity breaking). In terms of random RNA folding, this means that below the critical temperature almost every sequence folds (all the low-energy structures are very compact), but very often not into a single structure. The ergodicity breaking is of primary importance also for the folding dynamics, which may become very slow (glassy).
We have checked that the transition disappears as soon as we remove the constraint forbidding links over short distances (maybe this is a pathology of the 2-letter code) or as soon as we set all the interaction energies to the same value. These facts suggest that the glassy transition is mainly due to the freezing of some strong links, which then force the rest of the interactions, aided by the geometrical constraints. A cooperation phenomenon between interaction energy heterogeneity and geometrical constraints has already been observed in DNA models .
We warmly thank R. Zecchina for many interesting discussions and for a careful reading of the manuscript.
# A Ball in a Groove
## Rolling solutions
We first consider solutions in which both contacts roll without sliding: One of the contact points retrogrades up its contact plane, while the other rolls down its plane. Thus we have Eqs. (10) for the normal displacements, while Eqs. (11) become
$$u_{t;(\genfrac{}{}{0pt}{}{L}{R})}=Z(\mathrm{cos}\beta _{(\genfrac{}{}{0pt}{}{L}{R})}\pm \upsilon ),$$
(13)
In principle, we have a uniquely soluble system, since we have the three variables $`Z`$, $`\beta _L`$ and $`\upsilon `$ describing the kinematics, with three conditions of static equilibrium. However, we must also impose Eqs. (4) constraining the values of $`T/N`$. Thus there will be some restricted region of the $`\gamma \delta `$ plane in which solution of the Eqs. (1-3) is possible. Substituting Eqs. (10) and (13) into Eqs. (1-3), we obtain
$$\left(\frac{\mathrm{sin}\beta _R}{\mathrm{sin}\beta _L}\right)^{\frac{3}{2}}=\frac{\mathrm{sin}\theta _L}{\mathrm{sin}\theta _R}+\frac{\left(\mathrm{cos}\beta _L+\upsilon \right)\left(\mathrm{cos}\theta _R-\mathrm{cos}\theta _L\right)}{\mathrm{sin}\beta _L\mathrm{sin}\theta _R},$$
(14)
and
$$\left(\frac{\mathrm{sin}\beta _R}{\mathrm{sin}\beta _L}\right)^{\frac{1}{2}}=\frac{\mathrm{cos}\beta _L+\upsilon }{\mathrm{cos}\beta _R-\upsilon }.$$
(15)
It is straightforward to solve these equations for $`\beta _L`$ (and thus $`\beta _R`$) and for $`\upsilon `$. For $`\theta _L=\theta _R`$ (or $`\delta =0`$), $`\beta _L=\beta _R=\frac{\pi }{2}-\gamma `$ and $`\upsilon =0`$, i.e., there is no rolling and both contacts are stuck. Furthermore,
$$\frac{u_{t;L}}{u_{n;L}}=\frac{u_{t;R}}{u_{n;R}}=\mathrm{cot}\beta _L=\mathrm{tan}\gamma =\frac{T_L}{N_L}=\frac{T_R}{N_R}<k.$$
(16)
Hence, the solution for $`\delta =0`$ can only exist for $`\gamma <\mathrm{arctan}(k)`$, because for $`\gamma >\mathrm{arctan}(k)`$, the values of $`T/N`$ corresponding to this solution exceed $`k`$. Thus, emanating from the point $`(\gamma ,\delta )=(\mathrm{arctan}(k),0)`$ is a boundary beyond which double rolling is impossible, because it would require illegal values of $`T/N>k`$. This bound is exceeded first on the steeper of the two walls. This boundary, shown as a solid line in Fig. 2, rejoins the $`\gamma `$-axis at $`\gamma =0`$, asymptotically merging with the line $`\delta =\gamma `$.
There is an additional solution for which $`\upsilon =0`$ (no rolling), with $`\beta _L\ne \beta _R`$, that exists for $`\frac{\pi }{4}\le \gamma \le \mathrm{arctan}(\sqrt{2})`$. This additional solution always has at least one of the ratios $`T/N\ge \sqrt{2}`$, so it can only appear for $`k>\sqrt{2}`$, and constitutes a boundary between a regime in which the ball rolls clockwise (i.e., down the steeper surface) for smaller $`\gamma `$, and a regime in which the ball rolls counter-clockwise (i.e., up the steeper surface) for larger $`\gamma `$. The values of $`(\gamma ,\delta )`$ for which this solution occurs are shown for $`k=10`$ as a dot-dashed line in Fig. 2.
## Rolling-sliding solutions
Beyond the boundary at which the steeper wall has $`T=kN`$, the only solutions possible are “rolling-sliding” solutions, in which one contact rolls without sliding while the other slides. Assuming that the left-hand wall slides, we have
$$\frac{T_L}{N_L}=k.$$
(17)
The equations of static equilibrium then imply
$$\frac{T}{N_R}=\frac{\mathrm{sin}\theta _R}{\mathrm{cos}\theta _R-\mathrm{cos}\theta _L+\frac{1}{k}\mathrm{sin}\theta _L}\equiv \mu ,$$
(18)
where $`\mu `$ depends only upon the geometry and upon the coefficient of friction $`k`$. Note that for $`\theta _R<\theta _L`$, $`\mu <k`$, while for $`\theta _R>\theta _L`$, $`\mu >k`$. Thus, sliding always occurs on the steeper wall, which is indeed the left-hand wall for $`\delta \ge 0`$. This is the simplest continuation of the double-rolling solution, which failed at the boundary through incipient sliding at the steeper wall. We have
$$\frac{N_L}{N_R}=\left(\frac{\mathrm{sin}\beta _L}{\mathrm{sin}\beta _R}\right)^{\frac{3}{2}}=\frac{\mu }{k},$$
(19)
which can be uniquely solved for $`\beta _L`$ and $`\beta _R`$. In addition, the criterion that the ball rolls at the right-hand contact implies that
$$\frac{T}{N_R}=\mu =\frac{\mathrm{cos}\beta _R-\upsilon }{\mathrm{sin}\beta _R},$$
(20)
which allows us to solve for $`\upsilon (\beta _R)`$.
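The statement that sliding occurs on the steeper wall can be checked directly from Eq. (18). In the sketch below (Python; the angle values are illustrative choices, not taken from the paper), $`\mu =k`$ for equal wall angles and $`\mu <k`$ exactly when the left-hand wall is the steeper one:

```python
import math

def mu(theta_L, theta_R, k):
    """T/N_R of Eq. (18) for a rolling-sliding state with sliding at the
    left-hand contact (T_L = k N_L); wall angles in radians."""
    return math.sin(theta_R) / (math.cos(theta_R) - math.cos(theta_L)
                                + math.sin(theta_L) / k)

k = 1.0
assert abs(mu(math.radians(45), math.radians(45), k) - k) < 1e-12  # equal walls: mu = k
assert mu(math.radians(60), math.radians(30), k) < k               # steeper left wall: mu < k
assert mu(math.radians(30), math.radians(60), k) > k               # steeper right wall: mu > k
```

When $`\mu <k`$ the right-hand contact sits strictly inside its friction cone while the left-hand contact is at the sliding threshold, consistent with the text.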
The physical constraint $`d_L>0`$ sets the boundary on the region with a rolling-sliding solution, which, for $`\gamma <\mathrm{arctan}(k)`$, coincides with the upper boundary of the double rolling solution discussed in the previous section. For $`\delta =0`$ we have $`\upsilon =0`$ and $`T/N_R=k`$, and this solution collapses onto a double sliding solution for $`\gamma >\mathrm{arctan}(k)`$. In addition, there is a line of solutions with $`\upsilon =0`$, corresponding to one contact sliding and the other being stuck. For $`k<\sqrt{2}`$ this line emanates from $`(\gamma ,\delta )=(\mathrm{arctan}(k),0)`$ and ends at $`(\gamma ,\delta )=(\frac{\pi }{4},\frac{\pi }{4})`$ . For $`k>\sqrt{2}`$, this line emanates from the point on the boundary of the double rolling region for which $`\upsilon =0`$, and ends at $`(\gamma ,\delta )=(\frac{\pi }{4},\frac{\pi }{4})`$ (dashed lines in Fig. 2). As for the double rolling $`\upsilon =0`$ line, this line separates the low-$`\gamma `$ clockwise rolling from the high-$`\gamma `$ counterclockwise rolling.
Much discussion of the statics of granular media has emphasized the two limits of plastic and linear elastic behavior. Clearly the rolling-sliding regime of the ball in the groove is a good analog of plastic behavior: in this regime, the problem is isostatic, meaning that the number of independent forces is identical to the number of constraints. In the double rolling regime, the state of the ball is specified by its displacements, as in conventional elasticity theory. However, minimization of the strain energies corresponding to the contact forces plays no role in resolving the stress indeterminacy.
It is instructive to consider the relationship between the forces and the underlying degrees of freedom $`Z`$, $`\beta _L`$, $`\upsilon `$, and $`d_{(\genfrac{}{}{0pt}{}{L}{R})}`$. The four forces $`T_{(\genfrac{}{}{0pt}{}{L}{R})}`$ and $`N_{(\genfrac{}{}{0pt}{}{L}{R})}`$ are functions of these five underlying variables. In the double-rolling regime, $`d_{(\genfrac{}{}{0pt}{}{L}{R})}=0`$, and there are only three active variables, which are then uniquely fixed by the equations of static equilibrium. We call this an isokinetic configuration, because the number of kinetic degrees of freedom equals the number of constraints enforcing equilibrium.
On the other hand, as one passes into the isostatic rolling-sliding regime, one acquires a new degree of freedom $`d_L`$, so that there are now four underlying active degrees of freedom. However, one of these degrees of freedom is used to fix $`T_L=kN_L`$, so in practice there are three degrees of freedom remaining to satisfy the equations of static equilibrium.
We are grateful to P.M. Chaikin and D. Levine for stimulating discussions.
# Stopped Muon Decay Lifetime Shifts due to Condensed Matter
## Abstract
Up to second order in $`\alpha =(e^2/\hslash c)`$, vacuum electro-magnetic corrections to weak interaction induced charged particle lifetimes have been previously studied. In the laboratory, stopped muon lifetimes are measured in a condensed matter medium whose radiation impedance differs from that of the vacuum. The resulting condensed matter corrections to first order in $`\alpha `$ dominate those vacuum radiative corrections (two photon loops) which are second order in $`\alpha `$.
For unstable charged particles, such as the muon, electro-magnetic corrections to the lifetime are essential for precise determinations of weak interaction coupling strengths. For example, tests of universality for lepton couplings to the weak current are meaningful only after radiative corrections have been applied. The needed calculations have a long history , and investigations into the limit $`(m_e/M_\mu )\to 0^+`$ were quite fruitful. These investigations eventually led to wonderful insights into the nature of mass singularities, e.g. the Kinoshita-Lee-Nauenberg theorem , and their applications to jet definitions. More recently, computations of two-loop photon corrections to second order in $`\alpha =(e^2/\hslash c)`$ have also been made.
All of the above calculations apply to muons which decay in the vacuum. In real experiments, the muons are stopped in condensed matter before they decay. Curiously, even though condensed-matter-related lifetime shifts of decaying nuclei have been experimentally confirmed , we have been unable to find similar studies for weak charged particle decays. Such an effect must exist, since the radiation spectrum (emitted during a charged particle decay) depends on the nature of the material environment. In what follows, we shall exhibit the radiative corrections to the weak decay rate for muons both in condensed matter and in the vacuum. We conclude, for laboratory experiments, that condensed matter effects of first order in $`\alpha `$ dominate those vacuum effects which are of second order in $`\alpha `$.
To lowest order in electro-weak theory, the Fermi transition rate $`\mathrm{\Gamma }_F`$ for the muon decay $`\mu ^{}e^{}+\overline{\nu }_e+\nu _\mu `$ yields
$$\mathrm{\Gamma }_F=\left(\frac{M_\mu c^2}{192\pi ^3\hslash }\right)\left(\frac{G_FM_\mu ^2}{\hslash c}\right)^2\left\{1-8\left(\frac{m_e}{M_\mu }\right)^2\right\}.$$
(1)
The first Feynman diagram of FIG.1 corresponds to Eq.(1). The radiative corrections to lowest order in $`\alpha `$ are described by the five remaining Feynman diagrams.
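As a quick consistency check (not part of the paper's argument), Eq. (1) with the measured value of the Fermi constant reproduces the observed muon lifetime of about 2.2 μs. Working in natural units, $`\mathrm{\Gamma }_F=(G_F^2M_\mu ^5/192\pi ^3)\{1-8(m_e/M_\mu )^2\}`$ and $`\tau =\hslash /\mathrm{\Gamma }_F`$:

```python
import math

G_F  = 1.1663787e-5   # Fermi constant, GeV^-2 (natural units)
m_mu = 0.1056584      # muon mass, GeV
m_e  = 0.000511       # electron mass, GeV
hbar = 6.582120e-25   # GeV * s

# Eq. (1) in natural units, including the phase-space factor 1 - 8 (m_e/M_mu)^2
gamma_F = (G_F**2 * m_mu**5 / (192 * math.pi**3)) * (1 - 8 * (m_e / m_mu)**2)
tau = hbar / gamma_F  # mean lifetime in seconds

print(tau)  # ~2.19e-6 s, close to the measured 2.197 microseconds
```

The small remaining gap to the measured value is of the size of the $`O(\alpha )`$ radiative correction discussed next.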
One computes from the amplitudes in FIG.1, the radiative corrections to $`\mathrm{\Gamma }_F`$ to lowest order in $`\alpha `$. The final answer for the total decay rate is
$$\mathrm{\Gamma }_{tot}=\mathrm{\Gamma }_F\left\{1-\left(\frac{\alpha }{2\pi }\right)\left(\pi ^2-\left(\frac{25}{4}\right)+\eta \right)+\ldots \right\},$$
(2)
where the parameter $`\eta `$ will be used to describe the condensed matter effects.
For the vacuum case with $`\eta =0`$, Eq.(2) is well known. The vacuum radiative corrections slightly suppress the muon decay rate. The physical reason is that the virtual photon diagrams (when interfering with the zeroth order amplitude) subtract from the transition rate. The Bremsstrahlung emission of real photons adds to the decay rate by introducing a new photon channel. However, the elastic subtraction wins out and determines the overall sign of the effect.
The rules of quantum electrodynamics for the diagrams in FIG.1 require the propagator (most often) written in the Feynman gauge. For the vacuum
$$D_{\mu \nu }^{vac}(𝐤,\omega )=\left(\frac{4\pi \eta _{\mu \nu }}{|𝐤|^2-(\omega /c)^2+i0^+}\right).$$
(3)
In condensed matter, wherein resides the stopped muon, the properties of the medium will be described by a dielectric response function analytic in the upper-half complex frequency $`\zeta `$ plane; i.e.
$$\epsilon (\zeta )=1+\frac{2}{\pi }\int _0^{\infty }\frac{\omega \mathrm{Im}\epsilon (\omega +i0^+)d\omega }{\omega ^2-\zeta ^2},$$
(4)
where $`\mathrm{Im}\zeta >0`$. In the condensed matter media, the Feynman gauge propagator $`D_{\mu \nu }(𝐤,\omega )`$ is still diagonal in the indices $`(\mu \nu )`$. However, the diagonal elements are given by
$$D_{00}(𝐤,\omega )=\left(\frac{\left(4\pi /\epsilon (|\omega |+i0^+)\right)}{|𝐤|^2-\epsilon (|\omega |+i0^+)(\omega /c)^2+i0^+}\right)$$
(5)
and
$$D_{ij}(𝐤,\omega )=\left(\frac{4\pi \delta _{ij}}{|𝐤|^2-\epsilon (|\omega |+i0^+)(\omega /c)^2+i0^+}\right)$$
(6)
Eqs.(5) and (6) are required for the photon propagator insertions in the Feynman diagrams of FIG.1. For the Bremsstrahlung emission diagrams, the outgoing photon wave function must be renormalized in the following manner: (i) Define the condensed matter polarization part $`\mathrm{\Pi }^{\lambda \sigma }(𝐤,\omega )`$ via
$$D_{\mu \nu }=D_{\mu \nu }^{vac}+D_{\mu \lambda }^{vac}\mathrm{\Pi }^{\lambda \sigma }D_{\sigma \nu }^{vac}.$$
(7)
(ii) If $`a_\mu (𝐤,\omega )`$ denotes the outgoing photon wave function in the vacuum, and if $`A_\mu (𝐤,\omega )`$ denotes the outgoing photon wave in the condensed media, then
$$A_\mu =a_\mu +D_{\mu \sigma }^{vac}\mathrm{\Pi }^{\sigma \nu }a_\nu .$$
(8)
(iii) Finally, in the high frequency limit
$$\epsilon (\zeta )\to 1-\left(\frac{\omega _p^2}{\zeta ^2}\right)\mathrm{as}|\zeta |\to \infty $$
(9)
where the “plasma frequency” is determined by the number of electrons per unit volume $`n_e`$ via
$$\omega _p^2=\left(\frac{4\pi n_ee^2}{m_e}\right).$$
(10)
Equivalently we have the sum rule
$$\frac{2}{\pi }\int _0^{\infty }\omega \mathrm{Im}\epsilon (\omega +i0^+)𝑑\omega =\omega _p^2,$$
(11)
so that $`\hslash \omega _p`$ sets a photon energy scale beyond which the condensed matter radiation impedance is the same as the vacuum radiation impedance. In practice, condensed matter effects are important only for photons of energy less than (say) $`5`$ keV.
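For orientation (an illustrative calculation, not from the paper), the plasma energy $`\hslash \omega _p`$ of Eq. (10) for the conduction electrons of a typical metal such as copper (taking the standard free-electron density $`n_e\approx 8.5\times 10^{22}\mathrm{cm}^3`$ as an assumed input) comes out near 10 eV, far below the few-keV scale quoted above:

```python
import math

# Gaussian (CGS) units, matching Eq. (10)
e    = 4.80320e-10   # electron charge, esu
m_e  = 9.10938e-28   # electron mass, g
hbar = 1.05457e-27   # erg * s
erg_per_eV = 1.60218e-12

n_e = 8.5e22         # electrons per cm^3 (assumed: copper conduction electrons)

omega_p = math.sqrt(4 * math.pi * n_e * e**2 / m_e)   # Eq. (10)
E_p_eV = hbar * omega_p / erg_per_eV

print(E_p_eV)  # ~10.8 eV
```

So the frequency window over which the medium modifies the soft-photon spectrum is a tiny fraction of the electron energies released in the decay.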
For a stopped muon decay, the radiation given off by the product electron is substantial. To zeroth order, the distribution of the product electron energy $`E`$ is given by
$$d\mathrm{\Gamma }_F(E)=\mathrm{\Gamma }_FdP(E),(0<E<W),$$
(12)
where
$$\frac{dP(E)}{dE}=6\left(\frac{E^2}{W^3}\right)-4\left(\frac{E^3}{W^4}\right),$$
(13)
and the electron energy cut-off is $`W=(M_\mu c^2/2)`$. If $`𝐯`$ denotes the electron velocity,
$$E=\frac{m_ec^2}{\sqrt{1-(|𝐯|/c)^2}},$$
(14)
and $`dN(\omega ,E)`$ denotes the distribution of Bremsstrahlung photons in a bandwidth $`d\omega `$, then for the vacuum
$$dN^{vac}(\omega ,E)=\beta ^{vac}(E)\left(\frac{d\omega }{\omega }\right)$$
(15)
where
$$\beta ^{vac}(E)=\left(\frac{\alpha }{\pi }\right)\left\{\left(\frac{c}{|𝐯|}\right)\mathrm{ln}\left(\frac{c+|𝐯|}{c-|𝐯|}\right)-2\right\}.$$
(16)
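The spectrum (13) and the vacuum coefficient (16) can be checked numerically. The sketch below is our own illustration (using $`W=M_\mu c^2/2\approx 52.83`$ MeV): it verifies that $`dP/dE`$ integrates to unity and evaluates $`\beta ^{vac}`$ at the electron endpoint.

```python
import math

M_MU_C2 = 105.6583755   # muon rest energy, MeV
M_E_C2 = 0.51099895     # electron rest energy, MeV
ALPHA = 1.0 / 137.035999
W = M_MU_C2 / 2.0       # electron energy cut-off, Eq. (13)

def dP_dE(E: float) -> float:
    """Zeroth-order electron energy spectrum, Eq. (13)."""
    return 6.0 * E**2 / W**3 - 4.0 * E**3 / W**4

# Normalization check by the trapezoidal rule: integral over (0, W) should be 1.
n = 100_000
total = sum(dP_dE(W * k / n) for k in range(1, n)) * (W / n)
total += 0.5 * (dP_dE(0.0) + dP_dE(W)) * (W / n)

def beta_vac(E: float) -> float:
    """Vacuum Bremsstrahlung coefficient, Eq. (16)."""
    beta = math.sqrt(1.0 - (M_E_C2 / E) ** 2)   # |v|/c from Eq. (14)
    return (ALPHA / math.pi) * ((1.0 / beta) * math.log((1.0 + beta) / (1.0 - beta)) - 2.0)

print(f"integral dP = {total:.6f}")
print(f"beta_vac(W) = {beta_vac(W):.4f}")  # ~0.02 photons per unit ln(omega)
```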
The corresponding distribution of radiated photons in the condensed matter environment
$$dN(\omega ,E)=\beta (\omega ,E)\left(\frac{d\omega }{\omega }\right)$$
(17)
has been discussed elsewhere . The final result is given in terms of
$$z(\omega )=\left(\frac{c}{|𝐯|\sqrt{\epsilon (\omega +i0^+)}}\right)$$
(18)
as
$$\beta (\omega ,E)=\left(\frac{\alpha |𝐯|}{c\pi }\right)\left|\frac{(\mathrm{Re}\,z)\left(\mathrm{Im}\,𝒢(z)\right)}{(\mathrm{Im}\,z)}\right|,$$
(19)
where
$$𝒢(z)=\left(\frac{z^2-1}{2}\right)\mathrm{ln}\left(\frac{z+1}{z-1}\right)-z.$$
(20)
Using Eqs.(2), (5), (6), (13), (18), (19) and (20), and employing the condition that $`\hbar \omega \ll W`$ in the integration regime in which the material and vacuum values of $`\beta `$ differ appreciably, one finds
$$\eta =\left(\frac{2\pi }{\alpha }\right)\left(\frac{dP(E)}{dE}\right)_{E=W}\mathrm{\Delta }E_{rad}$$
(21)
where
$$\mathrm{\Delta }E_{rad}=\hbar \int _0^{\infty }\left(\beta (\omega ,E=W)-\beta ^{vac}(E=W)\right)d\omega .$$
(22)
Our central results follow from Eqs.(2), (13), (22) and (23). The total difference in the mean radiated energy $`\mathrm{\Delta }E_{rad}=E_{rad}-E_{rad}^{vac}`$ between the material and the vacuum, at the maximum electron energy $`W=(M_\mu c^2/2)`$, determines the material radiation renormalization parameter
$$\eta =\left(\frac{8\pi }{\alpha }\right)\left(\frac{E_{rad}-E_{rad}^{vac}}{M_\mu c^2}\right)\approx \left(\frac{\mathrm{\Delta }E_{rad}}{30.78\ \mathrm{keV}}\right).$$
(23)
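Since $`(dP/dE)_{E=W}=2/W=4/(M_\mu c^2)`$, the numerical prefactor in Eq. (23) follows directly from Eq. (21). The sketch below is our own check (the 30 keV value fed in at the end is a hypothetical $`\mathrm{\Delta }E_{rad}`$, used only for illustration):

```python
import math

ALPHA = 1.0 / 137.035999
M_MU_C2_KEV = 105658.3755  # muon rest energy in keV

# eta = (2*pi/alpha) * (dP/dE)|_{E=W} * Delta_E_rad, with (dP/dE)|_{E=W} = 2/W
# and W = M_mu c^2 / 2, i.e. eta = (8*pi/alpha) * Delta_E_rad / (M_mu c^2).
coeff_per_keV = 8.0 * math.pi / (ALPHA * M_MU_C2_KEV)
print(f"eta = Delta_E_rad / {1.0/coeff_per_keV:.2f} keV")  # ~30.7 keV, cf. Eq. (23)

# Illustrative input: a Cerenkov-dominated Delta_E_rad of ~30 keV gives eta of order one.
eta_example = coeff_per_keV * 30.0
```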
At this point it is important to distinguish three regimes for the motion of the charge. (i) When the muon is stopped, the velocity of the charge is zero. (ii) When the muon decays, there is a rapid acceleration of the charge to a final electron velocity $`𝐯`$. The acceleration produces a pulse of radiation with a mean photon distribution $`\omega dN(\omega ,E)/d\omega =\beta (\omega ,E)`$. These two regimes are present in the vacuum. (iii) The third (and final) regime is when the electron slowly decelerates with an energy loss as it leaves a track through the condensed matter medium. This last process is usually described by a retardation force
$$F=-\left(\frac{dE}{dx}\right)$$
(24)
which has no counterpart in the vacuum.
For example, consider a regime in which the material is almost transparent; i.e.
$$\sqrt{\epsilon (\omega +i0^+)}=n(\omega )e^{i\varphi (\omega )},\quad |\varphi (\omega )|\ll 1,$$
(25)
where $`n`$ is the index of refraction and $`\mathrm{tan}(2\varphi )`$ is the loss tangent of the dielectric response $`\epsilon (\zeta )`$. In such a regime, and for
$$|𝐯|>\left(\frac{c}{n}\right)(\mathrm{Cerenkov}),$$
(26)
the Cerenkov radiation retardation force has the well known frequency distribution
$$dF_C=\left(\frac{e^2}{c^2}\right)\left(1-\left(\frac{c}{|𝐯|n}\right)^2\right)\omega \,d\omega .$$
(27)
This corresponds to the number of Cerenkov photons emitted in a bandwidth $`d\omega `$ and in a path length $`dx`$ given by
$$\left(\frac{d^2N_C}{d\omega dx}\right)=\left(\frac{\alpha }{c}\right)\left(1-\left(\frac{c}{|𝐯|n}\right)^2\right).$$
(28)
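For orientation (our own numerical sketch, assuming a glass-like index $`n=1.5`$, $`|𝐯|/c\approx 1`$, and a 400–700 nm band), Eq. (28) integrates to the familiar few hundred Cerenkov photons per centimeter of track:

```python
import math

ALPHA = 1.0 / 137.035999
C_CM_PER_S = 2.99792458e10  # speed of light, cm/s

def cerenkov_photons_per_cm(n_index: float, beta: float,
                            lam_min_nm: float, lam_max_nm: float) -> float:
    """Integrate Eq. (28) over a band with constant n:
    N/cm = (alpha/c) * (1 - 1/(beta*n)^2) * Delta_omega."""
    if beta * n_index <= 1.0:
        return 0.0  # below the Cerenkov threshold of Eq. (26)
    c_nm = 2.99792458e17  # speed of light, nm/s
    d_omega = 2.0 * math.pi * c_nm * (1.0 / lam_min_nm - 1.0 / lam_max_nm)  # rad/s
    return (ALPHA / C_CM_PER_S) * (1.0 - 1.0 / (beta * n_index) ** 2) * d_omega

# Assumed: ultrarelativistic electron in glass-like n = 1.5, visible band.
n_photons = cerenkov_photons_per_cm(1.5, 0.9999, 400.0, 700.0)
print(f"~{n_photons:.0f} Cerenkov photons per cm of track")
```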
Eq.(28) describes the emission of Cerenkov radiation during the third regime in which the electron leaves a track. The emission of Cerenkov radiation in the second regime (in which the charge rapidly accelerates from zero velocity to $`𝐯`$) is governed by Eqs.(18)-(20) and (25),
$$\beta _C=\left(\frac{\alpha |𝐯|}{2c\mathrm{tan}\varphi }\right)\left(1-\left(\frac{c}{|𝐯|n}\right)^2\right),\quad |\varphi |\ll 1.$$
(29)
Note that the loss angle term $`\mathrm{tan}\varphi `$ is in the denominator of Eq.(29). If the material in a frequency range of interest were really transparent, then $`\varphi \to 0`$ would imply the divergence $`\beta _C=\omega (dN_C/d\omega )\to \infty `$. As previously discussed, the Cerenkov contribution to $`\beta `$ thereby depends sensitively on the attenuation of the radiated waves. In some continuous media detector systems, e.g. optical fibres with a very small electromagnetic attenuation, there should be a very large flash of Cerenkov radiation.
The physical picture may be described as follows: Consider a modern jet plane leaving an airport. The plane starts out at rest, rapidly accelerates upon “take-off” and continues to accelerate until sound speed is approached. When the plane then accelerates right through the sound speed barrier, there follows a loud “thunder clap” or sound wave “explosion”. (A considerate airline will have the pilot avoid breaking the sound speed barrier while flying over a city, since the resulting sound wave explosion would be quite unpleasant for the city inhabitants to hear.) After the sound speed barrier is broken, the plane still sends out some further, but much more mild, sound waves along the “Mach–Cerenkov cone”. The sounds are much milder when flying at a uniform velocity than while accelerating through the sound speed barrier. With the sound wave analogy kept firmly in mind, let us return to the case of muon decay.
The muon starts off at rest and decays into an electron (plus, of course, uncharged objects). The electron quickly accelerates (so to speak) to a final velocity $`𝐯`$ which may break through the light speed barrier; i.e. $`|𝐯|>(c/n)`$ for some material bandwidths. When the light speed barrier is broken, there is a large flash of Cerenkov electromagnetic radiation with a photon distribution $`dN_C=\beta _C(d\omega /\omega )`$ according to Eq.(29). After the light speed barrier is broken, the electron (moving at a roughly uniform velocity) still sends out a further (but much more mild) Cerenkov electromagnetic signal in accordance with Eqs.(27) and (28).
The crucial point is the following: If the Cerenkov flash of radiation in the material is sufficiently large, then the electromagnetic renormalization of the muon decay rate due to the material will also be quite large. The exact numbers depend on the material properties $`\epsilon (\omega +i0^+)`$.
For typical plastic coated glass optical fibres, we expect, in a frequency bandwidth $`\hbar \mathrm{\Delta }\omega \approx 0.2\ \mathrm{eV}`$, an index of refraction $`n\approx 1.3`$. Most importantly, there will be a small light wave attenuation (partly from Rayleigh scattering and partly from infrared electronic transitions) with a loss angle $`\varphi \approx 10^{-8}`$. Using these estimates, we find for the Cerenkov contribution to Eq.(2) $`\eta _C\approx 0.9`$, which is substantial. For muon decay in metals, the effect is much reduced in magnitude. We note (in passing) that metals suppress radiation, which may change the sign of $`\eta `$ to a negative value.
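These estimates can be reproduced with a rough flat-band approximation, $`\mathrm{\Delta }E_{rad}\approx \beta _C\,\hbar \mathrm{\Delta }\omega `$, inserted into Eq. (23). The sketch below is our own illustration, using the quoted $`n\approx 1.3`$, $`\varphi \approx 10^{-8}`$, $`\hbar \mathrm{\Delta }\omega \approx 0.2`$ eV, and $`|𝐯|/c\approx 1`$; it returns $`\eta _C`$ close to the quoted $`\sim 0.9`$.

```python
import math

ALPHA = 1.0 / 137.035999
M_MU_C2_EV = 105.6583755e6  # muon rest energy, eV

# Assumed fibre parameters from the text.
n_index = 1.3
phi = 1.0e-8            # loss angle
hbar_domega_eV = 0.2    # transparent bandwidth, eV
beta_v = 1.0            # |v|/c ~ 1 at the electron endpoint

# Cerenkov flash coefficient, Eq. (29).
beta_C = (ALPHA * beta_v / (2.0 * math.tan(phi))) * (1.0 - 1.0 / (beta_v * n_index) ** 2)

# Flat-band approximation of Eq. (22): Delta_E_rad ~ beta_C * hbar * Delta_omega.
delta_E_eV = beta_C * hbar_domega_eV

# Radiation renormalization, Eq. (23).
eta_C = (8.0 * math.pi / ALPHA) * delta_E_eV / M_MU_C2_EV
print(f"beta_C ~ {beta_C:.3g}, Delta_E_rad ~ {delta_E_eV/1e3:.1f} keV, eta_C ~ {eta_C:.2f}")
```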
While we were led to the notion of a Cerenkov flash in optical fibres via the consideration of the Feynman diagrams in FIG.1, it would appear to us to be of scientific interest to measure such optical “bolts of lightning” in situ per se. By contrast, when a charged pion at rest decays in a transparent medium, the Cerenkov flash will be very small or virtually zero. In pion decay the muon emerges at a velocity which can hardly beat that of light in the material. When the muon decays, the flash caused by the emerging electron (moving almost at vacuum light speed) should be substantial and measurable. These Cerenkov effects are within the technology of ongoing and planned precision muon decay experiments.
Finally, in a pioneering experiment, a dielectric suppression has been verified for Bremsstrahlung emission in scattering processes. This study gives us confidence in the reality of the material contributions to radiative corrections.
# A method for reaching detection efficiencies necessary for optical loophole–free Bell experiments
## I Introduction
Two–photon interferometry using down converted photons has recently been extensively used for demonstrating violations of local as well as non–local hidden–variables theories. However, in spite of the recent improvement in the efficiencies of single–photon detectors (close to 80%) , all experiments carried out to date have had ten or more times fewer coincidence counts than singles counts, and this, in effect, meant a detection efficiency under 10%. The reason for this lies in a directional uncertainty of signal with respect to idler photons. For every ten or more detected signal (idler) photons, only one of the conjugate idler (signal) photons finds its way to the other detector. (Detectors in two–photon experiments must have the same openings.) Therefore, all experiments carried out so far relied only on coincidence counts and thereby on additional assumptions—the no enhancement assumption was the most important one—which were considered very plausible. Then Santos devised local hidden–variable models which exploit not only the low detection loophole but also violate the no enhancement assumption. These models, as well as improvements in techniques, stimulated interest in loophole–free experiments. In the past two years several sophisticated proposals have appeared which rely on detailed elaborations of all experimental details. The last three proposals make use of maximal superpositions and require a detection efficiency of at least 83% , while the first three references make proposals for nonmaximal superpositions relying on recent results which require only 67% detection efficiency. In this paper we dispense with all supplementary assumptions by proposing a feasible selection method for doing a loophole–free Bell experiment which ideally requires only 67% detection efficiency and can reach a realistic detection efficiency as high as 75%. It is shown that this means a feasible conclusive experiment with a realistic visibility of 85%.
The method employs solid angles of signal and idler photons (in a type–II down conversion process) which differ from each other by a factor of five. This enables a detection of more than 90% of conjugate photons. We also consider a method of combining unpolarized independent photons into spin correlated pairs by means of non–spin observables. The physics of such a preparation of spin states by means of non–spin observables can be paralleled with the polarization correlation between unpolarized photons discovered by Pavičić and formulated for classical light by Paul and Wegmann . The main difference is that in the latter experiments photons cross each other’s paths while in the present proposal they do not. In the end we compare Hardy’s equalities with the Bell inequalities.
## II The experiment
A schematic representation of the experiment is shown in Fig. 1. An ultra–short laser beam (of frequency $`\omega _p`$), split by a beam splitter, simultaneously pumps up two type–II crystals. We assume that they are beta barium borate (BBO) crystals. In each type–II crystal the parametric down conversion produces intersecting cones of mutually perpendicularly polarized signal (extraordinary linearly polarized within the e–ray plane of the BBO) and idler (ordinary linearly polarized within the o–ray plane of the BBO) photons (of frequencies $`\omega _e+\omega _o=\omega _p`$). Signal and idler photons can be of the same frequencies ($`\omega _e=\omega _o=\omega _p/2`$) in which case the cones are tangent to each other. When we tilt the crystal so as to increase the angle between the pumping beam and the crystal axis of the BBO (increasing $`\omega _o`$ and decreasing $`\omega _e`$ slightly) the cones start to intersect each other \[see inset in Fig. 1\]. Looking only at polarization, we see the photons at the intersections of the cones as entangled, because one cannot know which cone each photon comes from. We then entangle one photon from one pair with one photon from the other pair by an interference of the fourth order at a beam splitter. Each successful entanglement (coincidence firing of detectors D1’ and D2’) selects (opens the computer gates for their counts) the other two conjugate photons into a Bell state.
In a real experiment one first has to make photons at the cone intersections of each BBO indistinguishable, which means one has to compensate for the finite bandwidths and different group velocities inside the crystal, i.e., for transversal and longitudinal (e–photon pulls ahead) walkoff effects. Half–wave plates (exchanging retardation of e and o photons) and quartz plates (being positive uniaxial crystals—BBO is negative) do the job. Then, by rotating the crystal, one can entangle the photons in a (non)maximal singlet–like or triplet–like state. In our proposal we assume both, crystals and plates, prepared so as to produce maximal singlet–like states. (It is interesting that starting with two maximal triplet–like states we arrive at the same final expressions for the probabilities; Cf. Ref. .) We also assume that the intensity of the laser pumping beam is reduced so that the probability of having more than two down converted singlets in the chosen solid angles within the pumping time is negligible. We stress here that we choose a subpicosecond laser since without such an ultra–short pumping one would not be able to collect valid coincidence counts of D1’ and D2’ simply because there are no detectors which could react in a time short enough to confirm the intensity interference between two independent down converted photons (from two crystals) whose coherence time lies in a subpicosecond region. Two successive pumpings can take place within several nanoseconds as determined by the lowest available detector resolution (recovery) time. For the feasibility of the experiment it is crucial that the probability of both photon pairs coming from only one of the two crystals can be made sufficiently small in comparison with the probability of photon pairs coming from both crystals by using a more and more asymmetric beam splitter, which at the same time lowers the required detection efficiency more and more towards 67%. We show this at the end of this section.
As we mentioned in Sec. I, the main detection efficiency problem in two photon interferometry is that signal and idler photons have to be in equal solid angles and that therefore less than 10% of conjugate photons reach a detector. The present set–up enables us to use different solid angles for selecting photons (those which interfere at the beam splitter) and their conjugate photons (whose counts are passed by the gates). For this purpose, one has to evaluate the angular width of the conjugate photons once we know the central directions (cone intersections) of both photons. One can show that the angular width of a conjugate photon depends on the frequencies of the pump, signal, and idler photons, on the band widths, on the pump, signal, and idler group refraction indexes, and on the directions of signal and idler photons with respect to the pumping beam, but for all combinations of these terms, the ratio of 1:5 between the solid angle of the photons we detect by detectors D1’ or D2’ and the solid angle centered around the central direction of the conjugate photons ($`ph`$ in Fig. 1) assures that over 90% of the conjugate photons will be found in the latter solid angle and that the probability of detecting “third party” photons will be negligible. Let us now turn to deriving our probabilities.
We can have three input states, depending on whether the two pairs come from different crystals or both of them from only one of the crystals. The probabilities of the pairs being emitted in any of these three possible ways are equal. The singlets coming from the crystals are mutually independent and we therefore formally describe them by tensor products. The first one, with the pairs coming from different crystals, is given by
$`|\mathrm{\Psi }={\displaystyle \frac{1}{\sqrt{2}}}\left(|1_x_1|1_y_1^{}-|1_y_1|1_x_1^{}\right){\displaystyle \frac{1}{\sqrt{2}}}\left(|1_x_2|1_y_2^{}-|1_y_2|1_x_2^{}\right),`$ (1)
where $`x`$ corresponds to the e–ray planes of the BBO’s and $`y`$ to the o–ray planes. The latter ones, with both pairs coming from the same crystal, are given by
$`|\mathrm{\Psi }_{20}={\displaystyle \frac{1}{2}}\left(|1_x_1|1_y_1^{}-|1_y_1|1_x_1^{}\right)^2|0_2^{},`$ (2)
$`|\mathrm{\Psi }_{02}={\displaystyle \frac{1}{2}}\left(|1_x_2|1_y_2^{}-|1_y_2|1_x_2^{}\right)^2|0_1^{}.`$ (3)
To obtain the four photon coincidence probabilities we cannot superpose these three input states upon one another because that would violate the principle of indistinguishability. To see this, let us for the moment assume that our detection efficiency is ideal (100%) and that the polarizers P1, P1’, P2, and P2’ are removed. Then, with the help of the responses of the detectors D1 and D2 we could tell $`|\mathrm{\Psi }`$ (both D1 and D2 would fire) from $`|\mathrm{\Psi }_{02}`$ (only D1 would fire) or from $`|\mathrm{\Psi }_{20}`$ (only D2 would fire). If real detections were ideal we would use only $`|\mathrm{\Psi }`$. Since they are not, we have to take $`|\mathrm{\Psi }_{02}`$ and $`|\mathrm{\Psi }_{20}`$ into account as well, but helpfully it turns out that the Bell inequality containing their corresponding counts (in addition to $`|\mathrm{\Psi }`$ counts) is still violated. Therefore, we start with $`|\mathrm{\Psi }`$, i.e., with two pairs coming out from different crystals, and discuss $`|\mathrm{\Psi }_{02}`$ and $`|\mathrm{\Psi }_{20}`$ later on. We give a multi–mode representation of the input state later on.
The outgoing electric–field operators describing photons—we call them selector photons—which pass through beam splitter BS, through polarizers P1’ and P2’ (oriented at angles $`\theta _1^{}`$ and $`\theta _2^{}`$, respectively), and are detected by detectors D1’ and D2’, read (see Fig. 1)
$`\widehat{E}_1^{}`$ $`=`$ $`\left(\widehat{a}_{1^{}x}t_x\mathrm{cos}\theta _1^{}+\widehat{a}_{1^{}y}t_y\mathrm{sin}\theta _1^{}\right)e^{i\text{k}_1^{}\text{r}_1^{}-i\omega _1(t-t_1^{}-\tau _1^{})}`$ (5)
$`+i\left(\widehat{a}_{2^{}x}r_x\mathrm{cos}\theta _1^{}+\widehat{a}_{2^{}y}r_y\mathrm{sin}\theta _1^{}\right)e^{i\stackrel{~}{\text{k}}_2^{}\text{r}_1^{}-i\omega _2(t-t_2^{}-\tau _1^{})},`$
$`\widehat{E}_2^{}`$ $`=`$ $`\left(\widehat{a}_{2^{}x}t_x\mathrm{cos}\theta _2^{}+\widehat{a}_{2^{}y}t_y\mathrm{sin}\theta _2^{}\right)e^{i\text{k}_2^{}\text{r}_2^{}-i\omega _2(t-t_2^{}-\tau _2^{})}`$ (7)
$`+i\left(\widehat{a}_{1^{}x}r_x\mathrm{cos}\theta _2^{}+\widehat{a}_{1^{}y}r_y\mathrm{sin}\theta _2^{}\right)e^{i\stackrel{~}{\text{k}}_1^{}\text{r}_2^{}-i\omega _1(t-t_1^{}-\tau _2^{})},`$
where $`t_x^2`$, $`t_y^2`$ are transmittances, $`r_x^2`$, $`r_y^2`$ are reflectances, $`t_j^{}`$ is time delay after which photon $`j^{}`$ reaches BS, $`\tau _j^{}`$ is time delay between BS and Dj’, $`\omega _j^{}`$ is the frequency of photon $`j^{}`$, $`\text{k}_j^{}`$ is the wave vector of photon $`j^{}`$, and $`\stackrel{~}{\text{k}}_j^{}`$ is the wave vector corresponding to $`\text{k}_j^{}`$ after reflection at BS. The annihilation operators act as follows: $`\widehat{a}_{jx}|1_x_j^{}=|0_x_j^{}`$, $`\widehat{a}_{jx}|0_x_j^{}=0`$, $`j^{}=1,2`$.
Operators describing photons—we call them Bell photons—which pass through polarizers P1 and P2 (oriented at angles $`\theta _1`$ and $`\theta _2`$, respectively) and are detected by detectors D1 and D2 read
$`\widehat{E}_1=(\widehat{a}_{1x}\mathrm{cos}\theta _1+\widehat{a}_{1y}\mathrm{sin}\theta _1)e^{i\omega _1t_1},`$ (8)
$`\widehat{E}_2=(\widehat{a}_{2x}\mathrm{cos}\theta _2+\widehat{a}_{2y}\mathrm{sin}\theta _2)e^{i\omega _2t_2}.`$ (9)
The probability of detecting all four photons by detectors D1, D2, D1’, and D2’ is thus
$`P(\theta _1^{},\theta _2^{},\theta _1,\theta _2)`$ $`=`$ $`\eta ^2\mathrm{\Psi }|\widehat{E}_2^{}{}^{\dagger }\widehat{E}_1^{}{}^{\dagger }\widehat{E}_2^{\dagger }\widehat{E}_1^{\dagger }\widehat{E}_1\widehat{E}_2\widehat{E}_1^{}\widehat{E}_2^{}|\mathrm{\Psi }`$ (10)
$`=`$ $`{\displaystyle \frac{\eta ^2}{4}}(A^2+B^2-2AB\mathrm{cos}\varphi ),`$ (11)
where $`\eta `$ is the detection efficiency; $`A=Q(t)_{11^{}}Q(t)_{22^{}}`$ and $`B=Q(r)_{12^{}}Q(r)_{21^{}}`$; here $`Q(q)_{ij}=q_x\mathrm{sin}\theta _i\mathrm{cos}\theta _j-q_y\mathrm{cos}\theta _i\mathrm{sin}\theta _j`$; $`\varphi =(𝒌_1-\stackrel{~}{𝒌}_2)𝒓_1+(𝒌_2-\stackrel{~}{𝒌}_1)𝒓_2+(\omega _1-\omega _2)(\tau _1^{}-\tau _2^{})`$.
To obtain a realistic estimation of the above result we start with the multi–mode input states
$`|1_1^{}|1_2^{}={\displaystyle \int d\omega _1\,d\omega _2\,\psi _1(\omega _1)\psi _2(\omega _2)|\omega _1_1^{}|\omega _2_2^{}},`$ (12)
which we introduce into Eq. (1); $`\psi _i(\omega _i)`$ ($`i=1,2`$) are both peaked at $`\omega =\frac{1}{2}\omega _p`$, with $`\omega _i^{}=\omega _p-\omega _i`$. We can keep the singlet state description as given by Eq. (1) because it has recently been proved by Keller and Rubin —as we briefly present below—that a subpicosecond pulse would give practically the same output as the continuous pumping beams provided a group velocity condition is matched. In doing so we rely on the experimental and theoretical results obtained by Kwiat et al. We then make a Fourier decomposition of the electric–field operators \[Eqs. (5) and (7)\] and obtain the mean value for $`P(\theta _1^{},\theta _2^{},\theta _1,\theta _2)`$. By integrating the latter probability over $`\tau _1^{}`$, $`\tau _2^{}`$, $`\omega _1^{}`$, and $`\omega _2^{}`$ and using
$`{\displaystyle \frac{1}{T}}{\displaystyle \int _{-T/2}^{T/2}}\mathrm{cos}(\omega \tau +a)\,d\tau ={\displaystyle \frac{\mathrm{sin}(\omega T/2)}{\omega T/2}}\mathrm{cos}a,\qquad {\displaystyle \int _{-\infty }^{\infty }}{\displaystyle \frac{\mathrm{sin}a\omega }{\omega }}\mathrm{sin}b\omega \,d\omega =0,`$ (13)
$`{\displaystyle \int _{-\infty }^{\infty }}{\displaystyle \frac{\mathrm{sin}a\omega }{\omega }}\mathrm{cos}b\omega \,d\omega =\{\begin{array}{cc}\pi \hfill & \text{for }b<a\hfill \\ \pi /2\hfill & \text{for }b=a\hfill \\ 0\hfill & \text{for }b>a\hfill \end{array}`$ (17)
we obtain
$$P(\theta _1^{},\theta _2^{},\theta _1,\theta _2)={\displaystyle \frac{\eta ^2}{4}}(A^2+B^2-2ABv_e\mathrm{cos}\mathrm{\Phi }),$$
where
$`v_e={\displaystyle \frac{2\int _{-T/2}^{T/2}f_1(\tau -\tau _1)f_2(\tau -\tau _2)\,d\tau }{\int _{-T/2}^{T/2}f_1^2(\tau -\tau _1)\,d\tau +\int _{-T/2}^{T/2}f_2^2(\tau -\tau _2)\,d\tau }},`$ (19)
where $`f_i(\tau )=\int _{-\infty }^{\infty }\psi _i(\omega )\mathrm{cos}\omega \tau \,d\omega `$ ($`i=1,2`$), where $`T`$ is the detection time, and where $`\mathrm{\Phi }=2\pi (z_2-z_1)/L`$; here $`L`$ is the spacing of the interference fringes, $`z_j`$ are the coordinates of detectors D$`j`$’ along $`𝒌_1-\stackrel{~}{𝒌}_2`$ and $`𝒌_2-\stackrel{~}{𝒌}_1`$ (see Fig. 1 in reference ); we dropped primes from $`\tau _1^{}`$ and $`\tau _2^{}`$ for simplicity. We see that $`\mathrm{\Phi }`$ can be changed by moving the detectors transversally to the incident beams.
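The overlap visibility can be evaluated directly for concrete envelopes. The sketch below is our own illustration, assuming Gaussian $`f_i(\tau )=e^{-\tau ^2/2\sigma ^2}`$ and the conventional normalization in which the visibility equals 1 for perfectly overlapping identical envelopes; it then decays as $`e^{-(\tau _1-\tau _2)^2/4\sigma ^2}`$.

```python
import math

def v_e(tau1: float, tau2: float, sigma: float, T: float, n: int = 20000) -> float:
    """Overlap visibility for Gaussian envelopes, by midpoint-rule quadrature
    over the detection window (-T/2, T/2)."""
    f = lambda t: math.exp(-t * t / (2.0 * sigma * sigma))
    dt = T / n
    num = den1 = den2 = 0.0
    for k in range(n):
        t = -T / 2.0 + (k + 0.5) * dt
        num += f(t - tau1) * f(t - tau2) * dt
        den1 += f(t - tau1) ** 2 * dt
        den2 += f(t - tau2) ** 2 * dt
    return 2.0 * num / (den1 + den2)

sigma = 1.0   # envelope width (arbitrary units, ~ inverse passband)
T = 40.0      # detection window >> sigma
print(v_e(0.0, 0.0, sigma, T))          # no delay: sharpest fringes, v_e -> 1
print(v_e(0.0, 2.0 * sigma, sigma, T))  # delay of 2 sigma: v_e -> exp(-1)
```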
By numerical calculation one can easily show that $`v_e`$ is not susceptible to the variation of the detection time $`T`$ provided $`|\tau _1-\tau _2|<|\omega _1-\omega _2|^{-1}`$ (for $`|\tau _1-\tau _2|\ll |\omega _1-\omega _2|^{-1}`$ even when $`T\gg |\omega _1-\omega _2|^{-1}`$). For $`|\tau _1-\tau _2|\ll |\omega _1-\omega _2|^{-1}`$ we have $`v_e\approx 1`$, i.e., the sharpest fringes, and for $`|\tau _1-\tau _2|\gg |\omega _1-\omega _2|^{-1}`$ we have $`v_e\approx 0`$ and the fringes disappear. With the experimentally reachable frequency passband $`\mathrm{\Delta }\omega `$ of the order of magnitude of THz within a single parametric down conversion with a continuous pumping beam, reaching the condition $`|\tau _1-\tau _2|\ll |\mathrm{\Delta }\omega |^{-1}`$ is not a problem because the time interval between the idler and signal photons is of the order of femtoseconds. In our case, when dealing with two simultaneous down conversions, we have to resort to an ultra–short pumping beam to satisfy the condition. A pump pulse shorter than 1 ps would in general “make it possible to distinguish pairs of photons born at sufficiently different depths inside the crystal with a consequent decrease in two photon interference” as recently shown by Keller and Rubin. This happens when the center of momentum of the signal and idler photons and the center of the pump pulse do not leave the crystal simultaneously. When they do, i.e., when, by choosing appropriate material conditions and pump frequency for a down conversion within a type–II crystal, we make “the inverse of the pump group velocity equal to the mean of the idler and signal inverse group velocities,” the photons become indistinguishable again. Singlets appearing from such a “compensated” crystal therefore keep the description given in Kwiat et al., and that is what we rely on in the afore–given calculation.
Another realistic detail of the experiment is that the pinholes of detectors D1’ and D2’ are not points but have a certain width $`\mathrm{\Delta }z`$. Therefore, in order to obtain a realistic probability we integrate Eq. (11) over $`z_1`$ and $`z_2`$ over $`\mathrm{\Delta }z`$ to obtain
$`P(\theta _1^{},\theta _2^{},\theta _1,\theta _2)`$ $`=`$ $`{\displaystyle \frac{\eta ^2}{4}}{\displaystyle \underset{z_1-\frac{\mathrm{\Delta }z}{2}}{\overset{z_1+\frac{\mathrm{\Delta }z}{2}}{\int }}}{\displaystyle \underset{z_2-\frac{\mathrm{\Delta }z}{2}}{\overset{z_2+\frac{\mathrm{\Delta }z}{2}}{\int }}}[A^2+B^2-2ABv_e\mathrm{cos}{\displaystyle \frac{2\pi (z_2-z_1)}{L}}]\,dz_1\,dz_2`$ (20)
$`=`$ $`{\displaystyle \frac{\eta ^2}{4}}(A^2+B^2-2vAB\mathrm{cos}\mathrm{\Phi }),`$ (21)
where $`v=v_e\left[\mathrm{sin}(\pi \mathrm{\Delta }z/L)/(\pi \mathrm{\Delta }z/L)\right]^2`$ is the visibility of the coincidence counting.
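The pinhole-averaging factor multiplying $`v_e`$ is the familiar $`\mathrm{sinc}^2`$ reduction; a small numerical sketch (our own illustration):

```python
import math

def pinhole_factor(dz_over_L: float) -> float:
    """Fringe-visibility reduction [sin(pi*dz/L)/(pi*dz/L)]^2 from a detector
    opening of width dz averaged over fringes of spacing L."""
    x = math.pi * dz_over_L
    return 1.0 if x == 0.0 else (math.sin(x) / x) ** 2

print(pinhole_factor(0.1))   # opening of a tenth of a fringe: mild ~3% loss
print(pinhole_factor(0.5))   # half a fringe: visibility more than halved
```

This shows why the detector pinholes must stay well below one fringe spacing.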
An analysis of Eq. (21) shows that triggering of D1’ and D2’ by selector photons means that their conjugate Bell photons appear entangled in spite of the fact that they stem from two independent sources (two crystals) and that they do not interact in any way (e.g., they do not cross each other’s paths). In general, Bell photons are only partially entangled as in the case of classical intensity interferometry. For special cases, however, one can achieve full quantum nonmaximal entanglement, i.e., one can obtain probability zero for certain orientations of polarizers P1 and P2. In order to obtain such an entangled state, which would at the same time enable a violation of the Bell inequality with only 67% detection efficiency, it is necessary to use an asymmetrical beam splitter, to orient polarizers P1’ and P2’, e.g., along $`\theta _1^{}=90^{\circ }`$ and $`\theta _2^{}=0^{\circ }`$, and to put detectors D1’ and D2’ in a symmetric position with respect to BS and with respect to the photons’ paths from the middle of the crystals so as to obtain $`\mathrm{\Phi }=0`$. Eq. (11) then projects out the following nonmaximal singlet–like probability:
$`P(\theta _1,\theta _2)`$ $`=`$ $`\eta ^2s(\mathrm{cos}^2\theta _1\mathrm{sin}^2\theta _2-2v\rho \mathrm{sin}\theta _1\mathrm{cos}\theta _1\mathrm{sin}\theta _2\mathrm{cos}\theta _2\mathrm{cos}\mathrm{\Phi }+\rho ^2\mathrm{sin}^2\theta _1\mathrm{cos}^2\theta _2),`$ (22)
$`\equiv `$ $`\eta ^2p(\theta _1,\theta _2)`$ (23)
where we assumed the near normal incidence at BS so as to have $`r_x^2=r_y^2=R`$ and $`t_x^2=t_y^2=T=1-R`$, where we used $`s\equiv T^2/(R^2+T^2)`$ and $`\rho \equiv R/T`$, and where we multiplied Eq. (21) by 4 for the other three possible coincidence detections \[i.e., for ($`\theta _1^{}+90^{\circ }`$,$`\theta _2^{}`$), ($`\theta _1^{}`$,$`\theta _2^{}+90^{\circ }`$), and ($`\theta _1^{}+90^{\circ }`$,$`\theta _2^{}+90^{\circ }`$), which we do not take into account because only ($`\theta _1^{}`$,$`\theta _2^{}`$)–triggering opens the gates\] and by $`(R^2+T^2)^{-1}`$ for photons emerging from the same side of BS (which also do not open the gates).
The singles–probability of detecting a photon by D1 is
$`P(\theta _1)=\eta s(\mathrm{cos}^2\theta _1+\rho ^2\mathrm{sin}^2\theta _1)\equiv \eta p(\theta _1).`$ (24)
Analogously, the singles–probability of detecting a photon by D2 is
$`P(\theta _2)=\eta s(\mathrm{sin}^2\theta _2+\rho ^2\mathrm{cos}^2\theta _2)\equiv \eta p(\theta _2).`$ (25)
Introducing the above obtained probabilities into the Clauser–Horne form of the Bell inequality
$`B_{CH}\equiv P(\theta _1,\theta _2)-P(\theta _1,\theta _2^{})+P(\theta _1^{},\theta _2^{})+P(\theta _1^{},\theta _2)-P(\theta _1^{})-P(\theta _2)\le 0,`$ (26)
we obtain the following minimal efficiency for its violation
$`\eta ={\displaystyle \frac{p(\theta _1^{})+p(\theta _2)}{p(\theta _1,\theta _2)-p(\theta _1,\theta _2^{})+p(\theta _1^{},\theta _2^{})+p(\theta _1^{},\theta _2)}}.`$ (27)
We stress here that the probabilities in Eq. (26) are proper probabilities—not the ratios of coincidence counts as in the experiments carried out so far. For example, $`P(\theta _2)=\eta p(\theta _2)`$ is the total number of counts detected by detector D2 with the polarizer P2 oriented along $`\theta _2`$—it is neither $`\eta ^2p(\infty ,\theta _2)`$ nor $`\eta ^2p(\infty ,\theta _2)/p(\infty ,\infty )`$, where $`\infty `$ denotes a removed polarizer.
This efficiency is a function of the visibility $`v`$, and by looking at Eqs. (23), (24), and (25) we see that for each particular $`v`$ a different set of angles minimizes it. A computer optimization of the angles—presented in Fig. 2—shows that the lower the reflectivity is, the lower is the minimal detection efficiency. Also, we see the rather unexpected property that a low visibility does not have a significant impact on the violation of the Bell inequality. For example, with 70% visibility and 0.2 reflectivity of the beam splitter we obtain a violation of Eq. (26) with a lower detection efficiency than with 100% visibility and 0.5 ($`\rho =1`$) reflectivity.
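The angle optimization can be sketched as follows. This is our own minimal coordinate-descent implementation of Eqs. (23)–(27) (the common factor $`s`$ cancels in $`\eta `$ and is dropped; the starting angles are the ones quoted near the end of this section), not the optimization actually used for Fig. 2:

```python
import math

def p_coinc(a: float, b: float, rho: float, v: float) -> float:
    """Coincidence probability of Eq. (23) with Phi = 0 (overall factor s dropped)."""
    return (math.cos(a)**2 * math.sin(b)**2
            - 2.0 * v * rho * math.sin(a) * math.cos(a) * math.sin(b) * math.cos(b)
            + rho**2 * math.sin(a)**2 * math.cos(b)**2)

def p1(a: float, rho: float) -> float:   # singles at D1, Eq. (24)
    return math.cos(a)**2 + rho**2 * math.sin(a)**2

def p2(b: float, rho: float) -> float:   # singles at D2, Eq. (25)
    return math.sin(b)**2 + rho**2 * math.cos(b)**2

def eta_required(angles, rho: float, v: float) -> float:
    """Minimal detection efficiency of Eq. (27) for settings (t1, t2, t1p, t2p)."""
    t1, t2, t1p, t2p = angles
    den = (p_coinc(t1, t2, rho, v) - p_coinc(t1, t2p, rho, v)
           + p_coinc(t1p, t2p, rho, v) + p_coinc(t1p, t2, rho, v))
    if den <= 0.0:
        return float("inf")  # these settings give no violation
    return (p1(t1p, rho) + p2(t2, rho)) / den

def minimize_eta(rho: float, v: float, sweeps: int = 60) -> float:
    """Greedy coordinate descent with a shrinking step size."""
    ang = [math.radians(x) for x in (104.0, 181.0, 89.0, 161.0)]
    best = eta_required(ang, rho, v)
    step = math.radians(5.0)
    for _ in range(sweeps):
        for i in range(4):
            for delta in (-step, step):
                trial = list(ang)
                trial[i] += delta
                e = eta_required(trial, rho, v)
                if e < best:
                    best, ang = e, trial
        step *= 0.85
    return best

print(f"eta_min ~ {minimize_eta(0.1, 1.0):.3f}")  # well below the symmetric-state 0.828
```

For a strongly asymmetric splitter ($`\rho =0.1`$, $`v=1`$) the required efficiency lands close to the 2/3 limit, in line with the trend of Fig. 2.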
In Ref. we have shown that one can select fully quantum entangled Bell photons even without polarizers P1’ and P2’; i.e., whenever unpolarized selector photons trigger detectors D1’ and D2’ they open the gates for maximally entangled singlet–like state of Bell photons. Now, it is of interest to find out whether we can use such non–polarization preparation to prepare full non–maximal polarization–entangled states. To this aim, we calculate
$`P_{\infty }(\theta _1,\theta _2)=P(\theta _1^{},\theta _2^{},\theta _1,\theta _2)+P(\theta _1^{}+90^{\circ },\theta _2^{},\theta _1,\theta _2)+P(\theta _1^{},\theta _2^{}+90^{\circ },\theta _1,\theta _2)+P(\theta _1^{}+90^{\circ },\theta _2^{}+90^{\circ },\theta _1,\theta _2)`$ (28)
where we obtain the last three probabilities by analogy with the first one \[Eq. (11)\]; e.g., in order to obtain $`P(\theta _1^{},\theta _2^{}+90^{\circ },\theta _1,\theta _2)`$, we introduce the perpendicular counterpart of $`E_2^{}`$ into Eq. (11), which we get from Eq. (7) upon substituting $`\mathrm{sin}\theta _2^{}`$ for $`\mathrm{cos}\theta _2^{}`$ and $`-\mathrm{cos}\theta _2^{}`$ for $`\mathrm{sin}\theta _2^{}`$. Eq. (28) yields
$`P_{\infty }(\theta _1,\theta _2)=\eta ^2{\displaystyle \frac{(1-2r_x^2t_x^2)\mathrm{sin}^2\theta _1\mathrm{sin}^2\theta _2+(1-2r_y^2t_y^2)\mathrm{cos}^2\theta _1\mathrm{cos}^2\theta _2+S-2vW\mathrm{cos}\mathrm{\Phi }}{2(1-2r_x^2t_x^2-2r_y^2t_y^2+t_x^2t_y^2+r_x^2r_y^2)}}`$ (29)
where
$`S=(t_x^2t_y^2+r_x^2r_y^2)(\mathrm{sin}^2\theta _1\mathrm{cos}^2\theta _2+\mathrm{cos}^2\theta _1\mathrm{sin}^2\theta _2),`$ (30)
$`W=(t_xr_x\mathrm{sin}\theta _1\mathrm{sin}\theta _2+t_yr_y\mathrm{cos}\theta _1\mathrm{cos}\theta _2)^2.`$ (31)
A computer calculation shows that this probability can violate the Bell inequalities only for a detection efficiency of 83% or higher. It also shows that the probability cannot be used for obtaining Hardy’s equalities . On the other hand, an analysis of Eq. (29) shows that the only way to obtain a non–partial, i.e., full quantum (non–classical) entanglement is to use a symmetric beam splitter ($`r_x^2=r_y^2=1/2`$) and a symmetric position of detectors D1’ and D2’ with respect to BS and with respect to the photons’ paths from the middle of the crystals so as to obtain $`\mathrm{\Phi }=0`$. Under these conditions Eq. (29) yields $`P_{\infty }(\theta _1,\theta _2)=\frac{1}{2}\mathrm{sin}^2(\theta _1-\theta _2)`$, i.e., a maximal singlet–like state. Thus, by means of non–spin preparation we can prepare only “symmetric” (maximal) non–classical spin correlated states.
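The symmetric-beam-splitter reduction can be verified numerically. The sketch below is our own check of Eq. (29) (with the minus signs of the interference terms restored, $`\eta =v=1`$, $`\mathrm{\Phi }=0`$, and a polarization-independent splitter, $`r_x^2=r_y^2=R`$):

```python
import math

def p_inf(th1: float, th2: float, R: float = 0.5, v: float = 1.0, phi: float = 0.0) -> float:
    """Eq. (29) for a polarization-independent beam splitter (r_x^2 = r_y^2 = R)."""
    T = 1.0 - R
    rt = math.sqrt(R * T)
    s_term = (T**2 + R**2) * (math.sin(th1)**2 * math.cos(th2)**2
                              + math.cos(th1)**2 * math.sin(th2)**2)
    w_term = (rt * math.sin(th1) * math.sin(th2) + rt * math.cos(th1) * math.cos(th2))**2
    num = ((1.0 - 2.0 * R * T) * (math.sin(th1)**2 * math.sin(th2)**2
                                  + math.cos(th1)**2 * math.cos(th2)**2)
           + s_term - 2.0 * v * w_term * math.cos(phi))
    den = 2.0 * (1.0 - 4.0 * R * T + T**2 + R**2)
    return num / den

# For R = T = 1/2 the result collapses to the maximal singlet (1/2) sin^2(th1 - th2).
for th1, th2 in [(0.3, 1.1), (0.7, 2.0), (1.2, 0.4)]:
    assert abs(p_inf(th1, th2) - 0.5 * math.sin(th1 - th2)**2) < 1e-12
print("Eq. (29) -> (1/2) sin^2(theta1 - theta2) for a symmetric beam splitter")
```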
In the end we show that other down conversions which may occur in the crystals and enter our statistics do not significantly influence the obtained probabilities. The probability of both photon pairs coming from only one of the two crystals and the probability of their coming from both crystals are of course equal, but for $`\rho `$ close to 0 the influence of photon pairs coming from only one of the two crystals can be made small enough for a conclusive Bell experiment. Let us see this in detail.
Choosing $`\theta _1^{\prime }=90^{\circ }`$, $`\theta _2^{\prime }=0^{\circ }`$, $`\mathrm{\Phi }=0`$, and rewriting the electric–field operators \[Eqs. (5) and (7)\] accordingly, we obtain the following probabilities of detecting the “intruder” counts (corresponding to both photons coming from the same crystal and being both detected by D1 and D2, respectively) while collecting the singles–probabilities \[Eqs. (24) and (25)\]
$`P_{20}(90^{},0^{},\theta _1,)=P_{20}(\theta _1)=\eta s\rho (1+v)\mathrm{sin}^2(2\theta _1),`$ (32)
$`P_{02}(90^{},0^{},,\theta _2)=P_{02}(\theta _2)=\eta s\rho (1+v)\mathrm{sin}^2(2\theta _2).`$ (33)
We could dispense with these counts only if detectors D1 and D2 could tell one photon from two. It is therefore important to see whether the Bell inequality, Eq. (26), is still violated when we have them in our statistics. In order to include them in the Bell inequality we should add them to the singles–probabilities given by Eqs. (24) and (25). By comparing $`P_{20}(\theta _1)`$ and $`P_{02}(\theta _2)`$ with $`P(\theta _1)`$ and $`P(\theta _2)`$, respectively, we see that for the angles close to $`\frac{\pi }{2}`$ and $`\pi `$, for which the asymmetrical states violate the Bell inequality, the following inequalities hold: $`P_{20}(\theta _1)\ll P(\theta _1)`$ and $`P_{02}(\theta _2)\ll P(\theta _2)`$. For example, for $`\rho =0.1`$, $`\eta =0.75`$, $`v=0.9`$, $`\theta _1=104^{\circ }`$, $`\theta _1^{\prime }=89^{\circ }`$, $`\theta _2=181^{\circ }`$, and $`\theta _2^{\prime }=161^{\circ }`$ we obtain the violation $`B_{CH}>0`$. For the same parameters we also obtain $`B_{CH}-P_{20}-P_{02}>0`$. However, this reduces the value of $`B_{CH}`$ for which the Bell inequality is violated by 2/3. On the other hand, we have to use birefringent polarizers P1 and P2 to be able to discard counts which fire both D1 and D1⊥ when collecting data for the singles–probability $`P(\theta _1)`$ by D1, and those which fire both D2 and D2⊥ when collecting data for the singles–probability $`P(\theta _2)`$ by D2. Therefore, in doing a real experiment it would be better to split unwanted two–photon wave packets across additional polarized beam splitters or, even better, to apply photon chopping when collecting counts for singles–probabilities. We stress here that by this method we do not affect the conclusiveness of the Bell experiment but only pick out valid Bell pairs from all those already selected by the D1’–D2’ coincidence gates. That is, we do not discard any counts corresponding to firing of D1 and/or D2 by photons coming from different crystals.
## III Conclusion
Our elaboration shows that the proposed loophole–free Bell experiment which selects two out of four photons into nonmaximal singlet–like states can be carried out with present techniques. The proposal makes use of an asymmetrical preparation of two input photon singlets generated by two nonlinear type–II crystals. The asymmetry consists in the fact that we first let one photon from the first singlet interfere in the fourth order with one photon from the other singlet at a highly transparent beam splitter. Coincidental detections of the photons interfering at the beam splitter (we call them selector photons) open gates for a selection of the remaining two conjugate photons, one from each singlet, into a new correlated state: a nonmaximal Bell singlet. In other words, since no coincidence detection between signal and idler photons of the input singlets is needed, we can use several times wider solid angles for the Bell photons than for the selector photons. With a five times wider solid angle (determined by pinholes ph in Fig. 1) we collect practically all Bell companions of those selector photons which trigger detectors D1’ and D2’ in coincidence. In this way we eliminate the main cause of the low detection in all two–photon experiments so far: losing the conjugate photons (in most cases they “miss” the detector opening). An apparent drawback of our set–up is that the probabilities of two pairs coming from both crystals and of both pairs coming from only one of the crystals are equal. However, the above calculations show that for reflectivity 0.1, realizable visibility of 85–90%, and achievable detector efficiency of 75%, the Bell inequality is violated even when the counts corresponding to photons emerging from only one of the two crystals are included in the statistics by which the inequality is fed.
We should mention here that although a 67% efficiency result for Hardy’s equalities has been obtained recently as well, the low marginal violations (of maximal value 0.09 as opposed to 0.41 for the Bell inequalities) make a loophole–free “Hardy experiment” more demanding. However, it would be worth trying to collect data for it within the proposed set–up because of its conceptual clarity and because our results add to the physics of the Hardy experiment. In particular, an analysis of Eq. (29) shows that Hardy’s equalities, as opposed to the Bell inequalities, cannot be formulated for a system which is not fully non–classical. Thus, our set–up reveals nonlocality of quantum systems as a property of selection of their subsystems, and Hardy’s equalities as a test (ideally, some detectors should always react and some never) of whether the system is fully quantum or not. It may turn out that quantum nonlocality is only operationally defined, in the same way in which quantum phase might turn out to be only operationally defined. On the other hand, since Hardy’s equalities reach their maximal violation for $`R=0.32`$ and not for $`R=0.5`$, it might turn out that the unwanted effect of both photon pairs coming from the same crystal on the marginal probabilities can be compensated sufficiently well to make the experiment feasible.
In the end we mention that the set–up may find application in quantum cryptography and quantum computation for its ability to deliver Einstein–Podolsky–Rosen singlets whose “coherence … \[is\] retained over considerable distances and for long times” ; actually, our Bell singlets stay coherent forever, i.e., until we make use of them and collapse them.
Note. Some preliminary results of this paper have been presented within an invited talk at the Adriatico Research Conference on Quantum Interferometry II, held in Trieste, Italy, 4–8 March 1996.
###### Acknowledgements.
The author thanks Harry Paul, Humboldt–Universität zu Berlin, for many valuable discussions. He also acknowledges the support of the Max–Planck–Gesellschaft, Germany.
# UMP-98/? The Lanczos Algorithm for extensive Many-Body Systems in the Thermodynamic Limit
## I. Introduction
The Lanczos Algorithm is one of the few reliable and general methods for computing the ground state and excited state properties of strongly interacting quantum Many-Body Systems. It has been traditionally employed as a numerical technique on small finite systems, with attendant round-off error problems, although the main obstacle to its further development has been the rapid growth of the number of basis states with system size. The reader is referred to a review of the applications of this method in strongly correlated electron problems. In this work we examine the Lanczos process in the context of extensive quantum Many-Body Systems, where it is employed entirely in an exact manner and where the thermodynamic limit is taken. In complete contrast to the traditional use of the Lanczos algorithm, we completely circumvent the issues of loss of orthogonality due to round-off errors and the inability to approach the thermodynamic limit because of the requirement to construct a full basis on the cluster. The systems we have in mind are those with an infinite number of degrees of freedom, yet are extensive, in that all total averages of any physical quantity scale linearly with the number of degrees of freedom, however quantified. These would include all condensed matter systems with sufficiently local interactions (the precise conditions need to be clarified, but it is clear which specific systems obey extensivity) and Quantum Field Theories, with the proviso that the spectrum is bounded below (in some cases there is an upper bound too).
After noting some of the advantageous features of the algorithm in general we discuss the scaling behaviour of the Lanczos Process as it approaches convergence and as the thermodynamic limit is taken. Central to this approach is the manifestation of extensivity through a description based on the Cumulant Generating Function, which we take to be given. We then derive a set of general integral equations which define the scaled Lanczos functions in the thermodynamic limit, which can be explicitly and exactly solved for certain integrable models, or employed in a truncated manner for non-integrable models. An alternative formulation is also given which expresses the equivalence of the Lanczos Process with the continuum Toda Lattice Model treated as a boundary value problem. Finally we state some general results concerning the behaviour of the Lanczos functions.
## II. The Lanczos Process, Orthogonal Polynomials and Moments
The Lanczos Algorithm or Process begins with a trial state $`|\psi _0\rangle `$ appropriate to the model and the symmetries of the phase being investigated. From this the Lanczos recurrence generates a sequence of orthonormal states $`\{|\psi _n\rangle \}_{n=1}^{\infty }`$ and Lanczos coefficients $`\{\alpha _n(N)\}_{n=0}^{\infty }`$ and $`\{\beta _n(N)\}_{n=1}^{\infty }`$, thus
$$\widehat{H}|\psi _n\rangle =\beta _n|\psi _{n-1}\rangle +\alpha _n|\psi _n\rangle +\beta _{n+1}|\psi _{n+1}\rangle ,$$
(1)
with the Lanczos coefficients being defined
$$\begin{array}{cc}\hfill \alpha _n& =\langle \psi _n|\widehat{H}|\psi _n\rangle ,\hfill \\ \hfill \beta _n& =\langle \psi _{n-1}|\widehat{H}|\psi _n\rangle .\hfill \end{array}$$
(2)
We distinguish a total or extensive operator or variable such as $`H`$ from its density or intensive counterpart by $`\widehat{H}`$. In this basis the transformed Hamiltonian takes the following tridiagonal form
$$T_n=\left(\begin{array}{cccccc}\alpha _0& \beta _1& & & & \\ \beta _1& \alpha _1& \beta _2& & & \\ & & \ddots & & & \\ & & & \beta _{n-1}& \alpha _{n-1}& \beta _n\\ & & & & \beta _n& \alpha _n\end{array}\right).$$
(3)
As such the Lanczos process is one of the Krylov subspace methods, in that at a finite step $`n`$, the eigenvectors belong to the Krylov Subspace $`\mathrm{Span}\{|\psi _0\rangle ,\widehat{H}|\psi _0\rangle ,\widehat{H}^2|\psi _0\rangle ,\ldots ,\widehat{H}^n|\psi _0\rangle \}`$.
In the Many-Body context one would iterate the Lanczos Process until termination, whereupon the Hilbert space is exhausted (at this point one of the $`\beta _{n_T}=0`$, where $`n_T`$ is the dimension of the Hilbert space in the sector defined by the ground state), or until the process has converged according to some arbitrary criterion at $`n=n_C`$. Then one would take the thermodynamic limit $`N\to \infty `$, where it should be understood that the above conclusion of the Lanczos process is also dependent on the system size, that is to say $`n_T(N),n_C(N)`$. These cutoffs are monotonically increasing functions of the system size so they will all tend to $`\infty `$ in the thermodynamic limit as well. Taking the limits in the reverse order clearly leads to nonsensical results, as taking $`N\to \infty `$ with $`n`$ fixed produces $`\alpha _n\to c_1`$ and $`\beta _n\to 0`$. The great virtue of the Lanczos process is that it can be shown to converge essentially exponentially fast with respect to iteration number, using the Kaniel-Paige-Saad exact bounds for the rate of convergence. This means that convergence occurs within a very small subspace of the total Hilbert space, so that $`n_C\ll n_T`$.
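The recurrence of Eqs. (1)-(3) is easy to state in code. The sketch below is our own minimal implementation (without the reorthogonalisation a production code would use); it runs m steps on a random Hermitian matrix and shows the lowest Ritz value of the tridiagonal matrix converging to the true ground state well before the full space is exhausted:

```python
import numpy as np

def lanczos(H, v0, m):
    """m steps of the Lanczos recurrence, Eqs. (1)-(2):
    returns the alpha_n and beta_n defining the tridiagonal T."""
    v0 = v0 / np.linalg.norm(v0)
    alphas, betas = [], []
    v_prev, v, beta = np.zeros_like(v0), v0, 0.0
    for _ in range(m):
        w = H @ v - beta * v_prev
        a = v @ w
        w -= a * v                      # orthogonalise against |psi_n>
        beta = np.linalg.norm(w)
        alphas.append(a); betas.append(beta)
        v_prev, v = v, w / beta
    return np.array(alphas), np.array(betas[:-1])

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)); H = (A + A.T) / 2
a, b = lanczos(H, rng.standard_normal(200), 40)
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
print(np.linalg.eigvalsh(T)[0], np.linalg.eigvalsh(H)[0])  # lowest Ritz value vs exact
```

Forty iterations on a 200-dimensional space already pin down the extremal eigenvalue, illustrating the Kaniel-Paige-Saad type convergence mentioned above.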
The Lanczos process is entirely equivalent to the 3-term recurrence for an Orthogonal Polynomial System; however, we consider a slight generalisation of the preceding process to one with a single-parameter evolution (a “time” $`t`$). In this construction we are continuing a development begun by Lindsay and by Chen and Ismail, which will lead to some powerful tools in treating the Lanczos process. The measure, or that component which is absolutely continuous, is defined by the weight function
$$w(ϵ,t)=e^{-u(ϵ)-tNϵ},$$
(4)
on the real line $`ϵ`$. Our system under study is described by the initial value of the system at $`t=0`$ and often we will suppress this argument for the sake of simplicity. This measure defines a system of monic Orthogonal Polynomials $`\{P_n(ϵ,t)\}_{n=0}^{\infty }`$ with an orthogonality relation
$$\int _{-\infty }^{+\infty }dϵ\,w(ϵ,t)P_m(ϵ,t)P_n(ϵ,t)=h_n(t)\delta _{mn},$$
(5)
and normalisation $`h_n(t)`$. This is equivalent to the following three-term recurrence relation
$$P_{n+1}(ϵ,t)=(ϵ-\alpha _n(t))P_n(ϵ,t)-\beta _n^2(t)P_{n-1}(ϵ,t),$$
(6)
with the recursion coefficients $`\alpha _n(t)`$ real for $`n\ge 0`$ and $`\beta _n^2(t)`$ real and positive for $`n>0`$. By convention we take $`\beta _0^2=1`$. It can be readily shown that the Lanczos coefficients are given in terms of the normalisation thus
$$\begin{array}{cc}\hfill \alpha _n(t)& =-\frac{1}{Nh_n(t)}\frac{d}{dt}h_n(t),\hfill \\ \hfill \beta _n^2(t)& =\frac{h_n(t)}{h_{n-1}(t)}.\hfill \end{array}$$
(7)
The direct connection between the Lanczos Process and the OPS are given by the determinant relation of the characteristic polynomial
$$P_{n+1}(ϵ)=(-)^{n+1}|T_n-ϵI_{n+1}|,$$
(8)
so that the zeros of the Orthogonal Polynomial are the eigenvalues of the Hamiltonian.
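One can verify Eq. (8) directly on a small example: build the monic polynomials from the three-term recurrence, Eq. (6), and compare their zeros with the spectrum of the tridiagonal matrix of Eq. (3). The coefficient values below are arbitrary illustrations:

```python
import numpy as np

alpha = [0.0, 0.5, -0.3, 0.2]   # alpha_0 .. alpha_3
beta  = [1.0, 0.7, 0.4]         # beta_1 .. beta_3

# monic orthogonal polynomials: P_{n+1} = (x - alpha_n) P_n - beta_n^2 P_{n-1}
P_prev, P_cur = np.poly1d([1.0]), np.poly1d([1.0, -alpha[0]])
for n in range(1, 4):
    P_prev, P_cur = P_cur, np.poly1d([1.0, -alpha[n]]) * P_cur - beta[n - 1] ** 2 * P_prev

T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
print(np.sort(P_cur.r.real))     # zeros of P_4
print(np.linalg.eigvalsh(T))     # eigenvalues of T_3
```

The two printed lists coincide to numerical precision, which is exactly the content of Eq. (8).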
Some comments are in order regarding the differences, or more accurately the special character, of these Orthogonal Polynomials with respect to the generic OPS or with some of the scaling versions of OPS. These OPS have been termed Many-Body OPS, but could be equally described as extensive OPS. They all have an additional, essential parameter relative to the generic OPS, the system size $`N`$, which appears both in the gross scaling factors (the ‘external’ scaling such as in the energy densities $`ϵ`$ defined by $`E=Nϵ`$) and internally in the 3-term recurrence coefficients, in the Polynomials themselves and in other derived quantities. The internal dependence of the Lanczos coefficients on the system size is not at all apparent, and the most transparent way that extensive scaling properties can be exhibited is through the Cumulant Generating Function (CGF), which hitherto has played no role in Orthogonal Polynomial Theory. In fact the CGF is central to this class of OPS rather than the moments, and is in a practical sense the starting point in any application of the Formalism to physical Models. For all models it is clear that the ground state energy $`E_0`$ is proportional to $`N`$ and unbounded in the thermodynamic limit, and similarly the total Lanczos coefficients (as opposed to the densities) are unbounded as $`n\to \infty `$ for fixed $`N`$. When everything is recast in terms of densities the spectrum is bounded below by $`ϵ_0`$ and in many models will also be bounded above, and similarly the density Lanczos coefficients are bounded. Another difference that Many-Body OPS exhibit in comparison to general OPS is that, as we have noted above, the three-term recurrence will terminate exactly at $`n=n_T`$, although this will never present any problems as this is exponentially large.
The Lanczos process is intimately connected with the Hamburger moment problem, via the Resolvent operator
$$R(ϵ)=\frac{1}{ϵ-\widehat{H}}\qquad ϵ\notin \mathrm{Supp}[d\rho ].$$
(9)
Its formal Laurent series establishes a direct link with Hamiltonian moments
$$R(ϵ)=\underset{i=0}{\overset{\infty }{\sum }}\frac{\mu _i}{ϵ^{i+1}},$$
(10)
where these moments are defined as expectation values with respect to the trial state referred to above
$$\mu _n\equiv \langle \widehat{H}^n\rangle ,\qquad \mu _0=1.$$
(11)
The resolvent has a real Jacobi-fraction continued fraction representation
$$R(ϵ)=𝐊_{n=0}^{\infty }\left(\frac{\beta _n^2}{ϵ-\alpha _n}\right),$$
(12)
with elements coming from the Lanczos coefficients.
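Numerically, the finite J-fraction can be evaluated bottom-up and checked against the matrix resolvent; this is our own sketch with arbitrary coefficients, and β₀² = 1 by the convention above:

```python
import numpy as np

alpha = np.array([0.0, 0.5, -0.3])
beta  = np.array([1.0, 0.8, 0.6])      # beta_0 = 1, then beta_1, beta_2

def resolvent_cf(eps):
    """Eq. (12) truncated at the last level, evaluated from the bottom up."""
    val = 0.0
    for a, b in zip(alpha[::-1], beta[::-1]):
        val = b ** 2 / (eps - a - val)
    return val

T = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(beta[1:], -1)
eps = 5.0                               # a point outside the spectrum of T
e0 = np.zeros(3); e0[0] = 1.0
exact = e0 @ np.linalg.solve(eps * np.eye(3) - T, e0)
print(resolvent_cf(eps), exact)
```

The continued fraction and the direct matrix element of $`(ϵ-T)^{-1}`$ agree, which is the finite-dimensional content of Eq. (12).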
An equivalent description to that of the Hamiltonian moments is to formulate everything in terms of cumulants or connected moments $`\{\nu _n\}_{n=1}^{\infty }`$, and to ignore all corrections which vanish in the thermodynamic limit $`N\to \infty `$. Cumulants scale directly with the size of the system so that for the extensive Many-Body Problem we have
$$\nu _n=c_nN+\mathrm{o}(1)$$
(13)
in the ground state sector, or
$$\nu _n=c_nN+m_n+\mathrm{o}(1)$$
(14)
in any other sector. This also means that no finite-size scaling can be performed given that only the limiting quantities are retained here and boundary condition effects do not appear. The foundation ingredient is the Moment Generating Function which is related to the Cumulant Generating Function in the following way.
###### Definition 1
The Moment Generating Function (MGF) $`M(t)`$ and the Cumulant Generating Functions (CGF) $`F(t)`$ are defined by,
$$M(t)\equiv \langle e^{-tH}\rangle =\underset{n=0}{\overset{\infty }{\sum }}\mu _n\frac{(-t)^n}{n!}=\mathrm{exp}\left(\underset{n=1}{\overset{\infty }{\sum }}\nu _n\frac{(-t)^n}{n!}\right)\equiv \mathrm{exp}(NF(t)).$$
(15)
Some examples of Cumulant Generating Functions include the isotropic XY model using the z-polarised Néel state as the trial state
$$F(t)=\frac{1}{\pi }\int _0^{\pi /2}dq\,\mathrm{log}\mathrm{cosh}(t\mathrm{cos}q),$$
(16)
and the Ising model in a transverse field using the disordered state as the trial state, and coupling constant $`x`$
$$F(t)=\frac{1}{2\pi }\int _0^\pi dq\,\mathrm{ln}\left[\mathrm{cosh}(2tϵ_q)+\frac{(\mathrm{cos}q+x)}{ϵ_q}\mathrm{sinh}(2tϵ_q)\right],$$
(17)
where the quasiparticle energies $`ϵ_q`$ are defined by $`ϵ_q^2=1+x^2+2x\mathrm{cos}q`$.
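These CGFs are easy to handle numerically. As a check on the XY expression, Eq. (16), the second cumulant density is $`c_2=F^{\prime \prime }(0)=\frac{1}{\pi }\int _0^{\pi /2}\mathrm{cos}^2q\,dq=1/4`$, which a simple quadrature plus finite difference reproduces (the grid sizes are arbitrary choices of ours):

```python
import numpy as np

def F_xy(t, nq=4001):
    """Eq. (16) by trapezoidal quadrature on [0, pi/2]."""
    q = np.linspace(0.0, np.pi / 2, nq)
    f = np.log(np.cosh(t * np.cos(q)))
    dq = q[1] - q[0]
    return (0.5 * f[0] + 0.5 * f[-1] + f[1:-1].sum()) * dq / np.pi

h = 1e-2                               # F is even and F(0) = 0
c2 = 2.0 * F_xy(h) / h ** 2            # second cumulant density, ~ 0.25
print(c2)
```

The same routine evaluated at larger $`t`$ gives the full CGF needed for the saddle-point analysis of Section IV.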
###### Definition 2
The Determinants of the Moment Matrices $`\mathrm{\Delta }_n(t)`$ for $`n0`$ are defined by the Hankel form -
$$\mathrm{\Delta }_n(t)=|M^{(i+j-2)}(t)|_{i,j=1}^{n+1}.$$
(18)
The direct relationship from moments to the Lanczos coefficients which is established in this way is via the construction of a sequence of Hankel determinants of the Moment Matrices and their Selberg-type integral representation
$$\mathrm{\Delta }_n(t)=\frac{1}{(n+1)!}\int _{-\infty }^{+\infty }\underset{k=1}{\overset{n+1}{\prod }}dϵ_k\,w(ϵ_k,t)\underset{1\le i<j\le n+1}{\prod }|ϵ_i-ϵ_j|^2.$$
(19)
These determinants are related to the normalisations via
$$\mathrm{\Delta }_n(t)=\underset{j=0}{\overset{n}{\prod }}h_j(t).$$
(20)
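A concrete illustration of Eqs. (18)-(20): for a Gaussian weight with unit variance and mean 1, the moments obey $`\mu _k=\mu _{k-1}+(k-1)\mu _{k-2}`$, and the Hankel-determinant ratios $`\mathrm{\Delta }_n\mathrm{\Delta }_{n-2}/\mathrm{\Delta }_{n-1}^2=h_n/h_{n-1}`$ reproduce the monic Hermite values $`\beta _n^2=n`$ (this toy example is ours):

```python
import numpy as np

mu = [1, 1]                                  # moments of a N(mean=1, var=1) weight
for k in range(2, 9):
    mu.append(mu[-1] + (k - 1) * mu[-2])

def delta(n):
    """Hankel determinant Delta_n built from mu_0 .. mu_{2n}."""
    M = np.array([[mu[i + j] for j in range(n + 1)] for i in range(n + 1)], float)
    return round(np.linalg.det(M))

D = [delta(n) for n in range(4)]             # [1, 1, 2, 12]
beta2 = [D[1] / D[0] ** 2,                   # Delta_{-1} = 1 by convention
         D[2] * D[0] / D[1] ** 2,
         D[3] * D[1] / D[2] ** 2]
print(beta2)                                 # [1.0, 2.0, 3.0] = n * variance
```

The ratios grow linearly in $`n`$, the same leading behaviour that Eq. (30) below attributes to a general extensive system.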
###### Definition 3
Our final definition, that of the Lanczos $`L`$-function, is
$$N^2L_n(t)=\frac{\mathrm{\Delta }_n(t)\mathrm{\Delta }_{n-2}(t)}{\mathrm{\Delta }_{n-1}^2(t)},$$
(21)
for $`n\ge 1`$ and $`L_0(t)=M(t)`$.
The converse result is then
$$\mathrm{\Delta }_n(t)=N^{n(n+1)}\underset{k=0}{\overset{n}{\prod }}L_k^{n+1-k}(t),$$
(22)
for $`n\ge 1`$. From these the Lanczos coefficients are given simply by
$$\begin{array}{cc}\hfill \alpha _n(t)& =-\frac{1}{N}\underset{j=0}{\overset{n}{\sum }}\frac{L_j^{\prime }(t)}{L_j(t)},\hfill \\ \hfill \beta _n^2(t)& =L_n(t).\hfill \end{array}$$
(23)
###### Theorem 1
The equation of motion for the Lanczos $`L`$-functions is
$$L_n(t)=\frac{1}{N}\underset{j=1}{\overset{n}{\sum }}\frac{j}{N}D_t^2\mathrm{log}L_{n-j}(t).$$
(24)
with the initial condition on the recurrence given by $`\mathrm{log}L_0(t)=NF(t)`$ for all $`t`$.
The advantage of introducing evolution into the Lanczos Process is that Sylvester’s Theorem can be applied to the Hankel determinants,
$$\mathrm{\Delta }_{n+1}(t)\mathrm{\Delta }_{n-1}(t)=\mathrm{\Delta }_n(t)\mathrm{\Delta }_n^{\prime \prime }(t)-\left(\mathrm{\Delta }_n^{\prime }(t)\right)^2$$
(25)
so that the theorem follows directly from this.$`\mathrm{}`$
The first few members of the Lanczos $`L`$-sequence are
$$\begin{array}{cc}\hfill L_1(t)& =\frac{1}{N}F^{\prime \prime }(t),\hfill \\ \hfill L_2(t)& =\frac{2}{N}F^{\prime \prime }(t)+\frac{1}{N^2}\frac{F^{(2)}F^{(4)}-(F^{(3)})^2}{(F^{(2)})^2}.\hfill \end{array}$$
(26)
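The $`n=2`$ member can be checked numerically against the recurrence of Theorem 1, using $`F(t)=\mathrm{log}\mathrm{cosh}t`$ as a single-mode toy CGF and a finite-difference $`D_t^2`$; all numerical choices below are ours:

```python
import numpy as np

N, t, h = 10.0, 0.3, 1e-4
sech2 = lambda s: 1.0 / np.cosh(s) ** 2          # F'' for F = log cosh
d2 = lambda g, s: (g(s + h) - 2 * g(s) + g(s - h)) / h ** 2

L1 = lambda s: sech2(s) / N                      # L_1 = F''/N
# recurrence, Eq. (24), n = 2: L_2 = (1/N)[(1/N) D^2 log L_1 + (2/N) D^2 log L_0]
L2_rec = d2(lambda s: np.log(L1(s)), t) / N ** 2 + 2.0 * sech2(t) / N
# closed form, Eq. (26), with F''F'''' - (F''')^2 evaluated analytically
F2 = sech2(t)
F3 = -2.0 * sech2(t) * np.tanh(t)
F4 = 4.0 * sech2(t) * np.tanh(t) ** 2 - 2.0 * sech2(t) ** 2
L2_closed = 2.0 * F2 / N + (F2 * F4 - F3 ** 2) / (N ** 2 * F2 ** 2)
print(L2_rec, L2_closed)
```

The two values agree to finite-difference accuracy, confirming that the hierarchy of Theorem 1 reproduces the closed form above.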
The consequence of Sylvester’s theorem for the evolution of the $`\mathrm{\Delta }_n`$ is the following theorem
###### Theorem 2
The $`\mathrm{\Delta }_n(t)`$ obey the following differential-difference equation
$$\mathrm{exp}\left\{\mathrm{log}\mathrm{\Delta }_{n+1}+\mathrm{log}\mathrm{\Delta }_{n-1}-2\mathrm{log}\mathrm{\Delta }_n\right\}=D_t^2\mathrm{log}\mathrm{\Delta }_n,$$
(27)
with the boundary value $`\mathrm{log}\mathrm{\Delta }_0=NF(t)`$ and conventionally $`\mathrm{\Delta }_{-1}=1`$.
This follows directly from Sylvester’s Identity.$`\mathrm{}`$
This evolution equation is just the finite Toda Lattice equation of motion, and this point has been previously noted in Ref. .
## III. Scaling in the Thermodynamic Limit
As was discussed earlier there are two limiting processes that one must consider when the thermodynamic limit is taken in the Lanczos Algorithm, both $`n,N\to \infty `$, and the issue then is what mutual relationship exists between them in the limit. One can view this limiting process in the $`1/n`$ vs. $`1/N`$ plane and then consider along what types of paths one must approach the origin. We shall find that the general relationship is $`n,N\to \infty `$ with $`s\equiv n/N`$ fixed, although for systems at criticality it seems inevitable that $`s`$ will become unbounded in the analysis. A consequence of these ideas is the confluence property of the Lanczos coefficients as $`n,N\to \infty `$ at fixed $`s=n/N`$
$$\begin{array}{cc}\hfill \alpha _n(N)& =\alpha (s)+\mathrm{O}(1/N),\hfill \\ \hfill \beta _n^2(N)& =\beta ^2(s)+\mathrm{O}(1/N).\hfill \end{array}$$
(28)
There are a number of ways to see this approach to the thermodynamic limit.
Using the explicit forms connecting cumulants and moments, and a direct evaluation of the Hankel determinants one can prove for general $`n`$ and $`N`$ that the Lanczos coefficients have a leading order scaling in $`s=n/N`$ for the first two orders of an expansion in large $`N`$. Actually this expansion is valid for all $`n`$ not just for large values and thus includes all the subdominant contributions. Thus
$$\begin{array}{cc}\hfill \alpha _n=& c_1N+n\left[\frac{c_3}{c_2}\right]\hfill \\ & +\frac{1}{2}n(n-1)\left[\frac{3c_3^3-4c_2c_3c_4+c_2^2c_5}{2c_2^4}\right]\frac{1}{N}\hfill \\ & +\mathrm{\cdots },\hfill \end{array}$$
(29)
for $`n\ge 0`$, and
$$\begin{array}{cc}\hfill \beta _n^2& =nc_2N+\frac{1}{2}n(n-1)\left[\frac{c_2c_4-c_3^2}{c_2^2}\right]\hfill \\ & +\frac{1}{6}n(n-1)(n-2)\left[\frac{-12c_3^4+21c_2c_3^2c_4-4c_2^2c_4^2-6c_2^2c_3c_5+c_2^3c_6}{2c_2^5}\right]\frac{1}{N}\hfill \\ & +\mathrm{\cdots },\hfill \end{array}$$
(30)
for $`n\ge 1`$. However this approach cannot be generalised to higher orders, and therefore not to the full exact Lanczos coefficients. The first two terms in the above expansions were also proven by Lindsay, using the Sylvester Identity in the statistical context, but no further, while this form for the higher (but finitely many) terms was conjectured in Ref. . We shall find that use of the Sylvester Identity allows one to recover this result very easily, to go to much higher orders in constructing explicit forms, and to prove this type of scaling in a completely general way.
###### Lemma 1
The Lanczos $`L`$-function $`L_n(t,N)`$ is a rational function of $`1/N`$ for fixed $`n`$, and all $`t`$.
The Difference-Differential Eq. (24) is of finite order in $`j/N`$ and $`t`$, so the result follows.$`\mathrm{}`$
Also for fixed $`n`$ we have
$$\underset{N\to \infty }{lim}L_n(t,N)=0,$$
(31)
and specifically the leading order term is $`\mathrm{O}(N^{-1})`$ which arises from the $`j=n`$ term in the sum. Therefore we can expand this function in a descending series in $`N^{-1}`$, thus
$$L_n(t,N)=\underset{p\ge 1}{\sum }\frac{l_{np}(t)}{N^p},$$
(32)
and defining the connected series related by
$$\underset{p\ge 1}{\sum }\frac{m_{np}(t)}{N^p}\equiv \mathrm{log}\left(1+\underset{p\ge 1}{\sum }\frac{l_{np+1}/l_{n1}}{N^p}\right).$$
(33)
This last relation can be rendered into an explicit form
$$m_{np}=\underset{\underset{i}{\sum }q_ir_i=p}{\sum }(-1)^{\underset{i}{\sum }q_i-1}\left(\underset{i}{\sum }q_i-1\right)!\underset{i}{\prod }\frac{1}{q_i!}\left(\frac{l_{nr_i+1}}{l_{n1}}\right)^{q_i}.$$
(34)
It is actually necessary to perform an expansion of this type because it combines the iteration number ($`n`$) dependence of the numerator and denominator which are both essential in the following results.
Then one can establish a hierarchy of equations for these coefficients
$$\begin{array}{cc}\hfill l_{n1}(t)& =nF^{\prime \prime }(t),\hfill \\ \hfill l_{n2}(t)& =\underset{j=1}{\overset{n-1}{\sum }}jD_t^2\mathrm{log}l_{n-j\,1}(t),\hfill \\ \hfill l_{np}(t)& =\underset{j=1}{\overset{n-1}{\sum }}jm_{n-j\,p-2}^{\prime \prime }(t)\qquad \text{for }p\ge 3,\hfill \end{array}$$
(35)
for $`n\ge 1`$ whilst for $`n=0`$ we have $`l_{np}(t)=0`$ as $`L_0(t,N)=\mathrm{exp}(NF(t))`$. The first members of this hierarchy can be easily solved for, yielding
$$\begin{array}{cc}\hfill l_{n1}(t)& =nF^{\prime \prime }(t),\hfill \\ \hfill l_{n2}(t)& =\frac{1}{2}n(n-1)\frac{F^{(2)}F^{(4)}-(F^{(3)})^2}{(F^{(2)})^2},\hfill \\ \hfill l_{n3}(t)& =\frac{1}{12}n(n-1)(n-2)\left(\frac{F^{(2)}F^{(4)}-(F^{(3)})^2}{(F^{(2)})^3}\right)^{(2)},\hfill \end{array}$$
(36)
and from these it is easy to establish the leading order terms already found in Eqs. (29) and (30).
###### Lemma 2
The hierarchy coefficients $`l_{np}(t),m_{np}(t)`$ are polynomials in $`n`$.
These coefficients are constructed from a finite difference equation in $`n`$. $`\mathrm{}`$
###### Theorem 3
The hierarchy coefficients $`l_{np}(t),m_{np}(t)`$ are polynomials of degree $`p`$ in $`n`$.
This is proved by induction on $`p`$ using the hierarchy equations. If we take $`l_{jq}(t)`$ to be of degree $`q`$ in $`n`$ for all $`q\le p-2`$ then similarly for $`m_{jq}(t)`$ and $`m_{jq}^{\prime \prime }(t)`$. Now for any polynomial $`P(n)`$ of degree $`p-2`$ in its argument then
$$\underset{j=1}{\overset{n-1}{\sum }}jP(n-j),$$
(37)
is a $`p`$th degree polynomial. Thus the recurrence, Eq. (35), establishes that $`l_{n+1p}`$ is also a $`p`$th degree polynomial.$`\mathrm{}`$
From this result it is clear that the limiting forms of the Lanczos coefficients $`\alpha _n(N),\beta _n^2(N)`$ exist when $`n,N\to \infty `$ with $`n/N`$ fixed. If the ratio is not kept constant in this limiting operation, say with $`n=\mathrm{o}(N)`$, then the Lanczos coefficients will vanish in the limit, while if the reverse is true, $`N=\mathrm{o}(n)`$, then there will be divergent terms in the limit.
Given that the scaling Lanczos coefficients have been established then all the exact theorems for the ground state properties that were predicated on this result now are established. The first example of these theorems was the one for the ground state Energy Density,
$$ϵ_0=\underset{s>0}{inf}[\alpha (s)-2\beta (s)],$$
(38)
which also has an analogue for the top of the spectrum, if this exists
$$ϵ_{\infty }=\underset{s>0}{sup}[\alpha (s)+2\beta (s)].$$
(39)
For many models these Lanczos Functions will be bounded on the positive real axis, and have limits as $`s\to \infty `$ on the real line. So there is a superficial similarity to classes of Orthogonal Polynomials whose 3-term recurrence coefficients have limiting values, such as the S-class, the M-class, or the $`M(a,b)`$ classes.
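Equation (38) can be exercised on the simplest extensive model: $`N`$ uncoupled two-level systems, whose CGF is $`F(t)=\mathrm{log}\mathrm{cosh}t`$. Here the measure is binomial and, as an identification we add for illustration (it is the classical Krawtchouk-polynomial result, not stated in the text), the scaled Lanczos functions are $`\alpha (s)=0`$ and $`\beta ^2(s)=s(1-s)`$, so that the infimum over $`s`$ of $`\alpha (s)-2\beta (s)`$ is $`-1`$, the correct ground state energy density. A minimal sketch using the Stieltjes procedure on the discrete measure:

```python
import numpy as np
from math import comb

Ns = 40                                           # number of spins
x = (2.0 * np.arange(Ns + 1) - Ns) / Ns           # energy densities of the measure
w = np.array([comb(Ns, k) for k in range(Ns + 1)], float)
w /= w.sum()

# Stieltjes procedure: recurrence coefficients of the discrete binomial measure
alphas, betas2 = [], []
p_prev, p, h_prev = np.zeros_like(x), np.ones_like(x), 1.0
for n in range(31):
    h = np.sum(w * p * p)
    a = np.sum(w * x * p * p) / h
    b2 = h / h_prev if n > 0 else 0.0
    alphas.append(a); betas2.append(b2)
    p_prev, p = p, (x - a) * p - b2 * p_prev
    h_prev = h

s = np.arange(31) / Ns
dev = np.max(np.abs(np.array(betas2)[1:] - s[1:] * (1 - s[1:])))
e0 = np.min(np.array(alphas) - 2.0 * np.sqrt(betas2))
print(dev, e0)        # dev ~ 1/N, and e0 -> -1 as N grows
```

The deviation of $`\beta _n^2`$ from $`s(1-s)`$ is of order $`1/N`$, illustrating the confluence property of Eq. (28), and the infimum reproduces the ground state energy density to the same accuracy.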
## IV. The extensive Measure
It is necessary to determine the OPS measure, its weight function $`w(ϵ)`$, and this is not generally known at the outset, but rather the Cumulant Generating Function is. In fact it seems to be the case that the measures are not exactly expressible in simple terms, but the CGF or characteristic functions are. There is of course a direct route from a model system and a trial state to the Lanczos coefficients, but from many points of view including practical considerations the route beginning with a cumulant description is more useful.
###### Theorem 4
Given that the cumulant generating function $`F(t)`$ is analytic for $`\mathrm{\Re }(t)>0`$ and in the neighbourhood of the origin $`t=0`$, the OPS weight function $`w(ϵ)`$ has the following asymptotic development in the thermodynamic limit $`N\to \infty `$,
$$w(ϵ)=\sqrt{\frac{N}{2\pi F^{(2)}(\xi )}}e^{N\left[ϵ\xi +F(\xi )\right]}+\mathrm{O}(N^{-1/2}),$$
(40)
where the function $`\xi (ϵ)`$ is defined implicitly by
$$ϵ=-F^{\prime }(\xi ).$$
(41)
Starting with the definition of the cumulant generating function $`F(t)`$
$$\langle e^{-tH}\rangle \equiv \mathrm{exp}\{NF(t)\}=\mathrm{exp}\left\{N\underset{n=1}{\overset{\infty }{\sum }}\frac{c_n}{n!}(-t)^n\right\}.$$
(42)
We assume here that this infinite series is not just formal but actually exists, that is, it has a finite radius of convergence in addition to its analytic character for $`\mathrm{\Re }(t)>0`$. However the Moment Generating Function is simply the analytic continuation of the characteristic function and this continuation is possible given its analyticity, so that a Fourier inversion of this will yield the weight function,
$$\begin{array}{cc}\hfill w(ϵ)& =\frac{N}{2\pi }\int _{i\gamma -\infty }^{i\gamma +\infty }dt\,e^{N[itϵ+F(it)]},\hfill \\ & =\frac{N}{2\pi i}\int _{\gamma -i\infty }^{\gamma +i\infty }dt\,e^{N[tϵ+F(t)]}\qquad \mathrm{\Re }(\gamma )>0.\hfill \end{array}$$
(43)
One does not require the exact inversion but only the leading order in $`N`$ in a steepest descent approximation. In an asymptotic analysis the relevant function is
$$g(t)=tϵ+F(t),$$
(44)
which is analytic for all $`\mathrm{}(t)>0`$. We will assume the existence of a stationary point which occurs at $`t_0`$
$$ϵ=-F^{\prime }(t_0),$$
(45)
and is assumed to be unique. This point is evidently real because the energy density is real and the CGF is a real function of a real argument (here we define $`\xi =t_0`$ for convenience). One requires the inversion of this relation for $`\xi (ϵ)`$ and this is guaranteed by the Implicit Function Theorem because $`F^{(2)}(\xi )>0`$. This latter condition also implies that the saddle point is of order unity. Indeed one clearly has the case of $`F^{(2)}(t)>0`$ for real values of $`t`$ in the neighbourhood of the saddle point and $`F^{(2)}(t)<0`$ for imaginary values of $`t`$ in the same neighbourhood. Thus the path of steepest descent through the saddle point is parallel to the imaginary axis. One can then apply the standard saddle point analysis, see Wong Section II.4, to arrive at the stated result. $`\mathrm{}`$
The corresponding example of the saddle point equation for the isotropic XY model is
$$ϵ=-\frac{1}{\pi }\int _0^{\pi /2}dq\,\mathrm{cos}q\mathrm{tanh}(\xi \mathrm{cos}q),$$
(46)
and that for the Ising model in a transverse field is
$$ϵ=-\frac{1}{\pi }\int _0^\pi dq\,ϵ_q\frac{\frac{x+\mathrm{cos}q}{ϵ_q}+\mathrm{tanh}(2\xi ϵ_q)}{1+\frac{x+\mathrm{cos}q}{ϵ_q}\mathrm{tanh}(2\xi ϵ_q)}.$$
(47)
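The XY saddle-point condition can be explored numerically: as $`\xi `$ grows, the integral $`\frac{1}{\pi }\int _0^{\pi /2}\mathrm{cos}q\,\mathrm{tanh}(\xi \mathrm{cos}q)dq`$ increases monotonically to $`1/\pi `$, the magnitude of the XY ground-state energy density, so the saddle sweeps the spectrum down to the ground state as $`\xi \to \infty `$ (the quadrature details are our choice):

```python
import numpy as np

def eps_of_xi(xi, nq=100001):
    """Magnitude of the XY saddle-point integral, Eq. (46), by trapezoid rule."""
    q = np.linspace(0.0, np.pi / 2, nq)
    f = np.cos(q) * np.tanh(xi * np.cos(q))
    dq = q[1] - q[0]
    return (0.5 * f[0] + 0.5 * f[-1] + f[1:-1].sum()) * dq / np.pi

for xi in (1.0, 5.0, 50.0):
    print(xi, eps_of_xi(xi))
print(1 / np.pi)       # band-edge value approached as xi grows
```

This monotone approach to the band edge is the generic behaviour the next paragraphs classify through the rate at which $`ϵ(\xi )`$ reaches $`ϵ_0`$.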
The first of the more obvious properties concerns the convexity of the measure arising in the thermodynamic limit,
###### Theorem 5
The leading order of the negative logarithm of the weight function $`u(ϵ)`$ is convex for all real energies $`ϵ`$.
This follows from the relationship of $`u(ϵ)`$ to the stationary point
$$\frac{d}{dϵ}u(ϵ)=-N\xi (ϵ),$$
(48)
and the definition
$$ϵ=-F^{\prime }(\xi ).$$
(49)
Now it can be easily seen that $`F^{\prime \prime }(t)>0`$ for $`t`$ real and $`H`$ Hermitian, using the definition of $`F(t)`$ in terms of the expectation value $`NF(t)=\mathrm{ln}\langle \mathrm{exp}(-tH)\rangle `$.$`\mathrm{}`$
Some detailed, yet general, information concerning the extensive measure in the neighbourhood of the ground state is available. This arises from consideration of the overlap of the trial state with the true ground state, and its relation to the Horn–Weinstein function $`E(t)\equiv -F^{\prime }(t)`$ via
$$\left|\langle \mathrm{\Psi }_{GS}|\psi _0\rangle \right|^2=\mathrm{exp}\left\{-N\int _0^{\infty }dt\,[E(t)-E(\infty )]\right\}.$$
(50)
In general the limit of $`E(t)`$ as $`t\to \infty `$ will exist, and is the ground state energy, and so the asymptotic behaviour of $`E(t)`$ for $`\mathrm{\Re }(t)>0`$, as this tends to infinity, is a means of classifying systems. This is equivalent to the asymptotic behaviour of $`ϵ(\xi )-ϵ(\infty )`$ as $`\xi \to \infty `$ (we denote the Ground State Energy by $`ϵ_0`$, which is also the same as $`ϵ(\infty )`$). In general the overlap is non-zero, so that $`E(t)-E(\infty )\in L^1[0,\infty )`$, but it is possible at isolated points that this is not true (critical points in the model for example) and the overlap may vanish. For example the overlap squared in the case of the isotropic XY model is $`2^{-N/2}`$ and that for the Ising model in a transverse field is
$$\mathrm{exp}\left\{\frac{N}{2\pi }\int _0^\pi dq\mathrm{ln}\left(\frac{ϵ_q+x+\mathrm{cos}q}{2ϵ_q}\right)\right\}.$$
(51)
Where the overlap is non-zero then several possibilities for the asymptotic behaviour exist, which do actually arise in the exact solutions of the example models -
* gapless case, isotropic XY and critical Ising Model in a transverse Field, Ref.:
At a critical point, the first excited state gap vanishes and
$$ϵ-ϵ_0\sim A|\xi |^{-\gamma },$$
(52)
as $`\xi \to \mathrm{\infty }`$ and if the overlap is finite then $`\mathrm{Re}(\gamma )>1`$. Therefore the weight function at the bottom of the spectrum takes the following form
$$w(ϵ)\sim (ϵ-ϵ_0)^{\frac{1+\gamma }{2\gamma }}\mathrm{exp}\left\{-N\frac{b}{1-1/\gamma }(ϵ-ϵ_0)^{1-1/\gamma }\right\},$$
(53)
This measure is integrable on $`(ϵ_0,ϵ_{\mathrm{\infty }})`$ because of the above condition $`\mathrm{Re}(\gamma )>1`$ and has a branch point at the ground state energy $`ϵ_0`$.
* gapped case 1, Ising Model in a transverse Field, in the ordered phase with the disordered trial state, Ref.:
if the gap is finite then one possibility is that
$$ϵ-ϵ_0\sim Ae^{-\mathrm{\Delta }|\xi |},$$
(54)
as $`\xi \to \mathrm{\infty }`$ and where the excited state gap $`\mathrm{\Delta }>0`$. One can show that the weight function near the bottom edge of the spectrum is analytic, having the form
$$w(ϵ)\sim \frac{1}{\mathrm{\Gamma }(N\frac{[ϵ-ϵ_0]}{\mathrm{\Delta }}+1)}.$$
(55)
* gapped case 2, Ising Model in a transverse Field, in the disordered phase with the disordered trial state, Ref.:
and yet another type of gap behaviour exists
$$ϵ-ϵ_0\sim A|\xi |^\gamma e^{-\mathrm{\Delta }|\xi |}.$$
(56)
The leading order behaviour of the weight function in this case is
$$w(ϵ)\sim (ϵ-ϵ_0)^{1/2-N(ϵ-ϵ_0)}\left[\mathrm{log}(ϵ-ϵ_0)\right]^{N\gamma (ϵ-ϵ_0)},$$
(57)
which again has a branch point at the bottom edge of the spectrum.
So generally we find that the support of the measure is bounded, which excludes a number of weight function types such as the Freud or Erdős weights, but that the weight functions belong to the Szegő class on $`[ϵ_0,ϵ_{\mathrm{\infty }}]`$,
$$\int _{ϵ_0}^{ϵ_{\mathrm{\infty }}}dϵ\frac{\mathrm{log}w(ϵ)}{\sqrt{[ϵ_{\mathrm{\infty }}-ϵ][ϵ-ϵ_0]}}>-\mathrm{\infty }.$$
(58)
## V. Exactly Solvable Lanczos Process
In this section we derive how the exact Lanczos functions $`\alpha (s)`$ and $`\beta ^2(s)`$ can be constructed directly from the knowledge of the connected Moments or Cumulants, or more specifically from the Cumulant Generating Function. This is the initial data that one uses in any analysis of quantum Many-Body Systems with this approach, and for soluble models the full Generating Function may be available. However if this is not the case then one would use a set of low order Cumulants, up to a given order.
As a first step we recast the Hankel determinants into Selberg Integral form, from the classical result
$$\mathrm{\Delta }_n(t)=\frac{1}{(n+1)!}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\prod _{k=1}^{n+1}d\rho (ϵ_k)e^{Nt\sum _{k=1}^{n+1}ϵ_k}\prod _{1\le i<j\le n+1}|ϵ_i-ϵ_j|^2.$$
(59)
For the steps leading to the two conditions which will define the Lanczos functions we follow Chen and Ismail. A similar approach, but confined to the evaluation of the Hankel determinants, was taken in References . The Hankel determinant can be recast into the form of a partition function, which is,
$$\mathrm{\Delta }_n(t)=\frac{1}{(n+1)!}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\prod _i^{n+1}dϵ_i\mathrm{exp}\left\{-\sum _i^{n+1}u(ϵ_i)+Nt\sum _i^{n+1}ϵ_i+2\sum _{i<j}^{n+1}\mathrm{ln}|ϵ_i-ϵ_j|\right\}.$$
(60)
One should observe that both $`\sum _i^{n+1}u(ϵ_i)`$ and $`Nt\sum _i^{n+1}ϵ_i`$ are of order $`(n+1)N`$ whilst the remaining term in the argument $`\sum _{i<j}^{n+1}\mathrm{ln}|ϵ_i-ϵ_j|`$ is of order $`(n+1)^2`$, so that the only relative scaling that remains nontrivial is one in which $`n/N`$ is fixed. The alternatives would lead to completely trivial consequences. The leading order term for this Hankel determinant as $`n,N\to \mathrm{\infty }`$ is given by a steepest descent approximation (see Ref. section IX.5)
$$\mathrm{\Delta }_n(t)=\frac{(2\pi )^{n+1}}{(n+1)!}\left|\frac{\partial ^2f}{\partial ϵ_i^0\partial ϵ_j^0}\right|^{-1/2}e^{-f(ϵ^0)}\left[1+\mathrm{O}(1/n,1/N)\right],$$
(61)
where the function $`f(ϵ)`$ is defined as
$$f(ϵ)=\sum _i^{n+1}u(ϵ_i)-Nt\sum _i^{n+1}ϵ_i-2\sum _{i<j}^{n+1}\mathrm{ln}|ϵ_i-ϵ_j|,$$
(62)
and the saddle points $`\{ϵ_i^0\}_{i=1}^{n+1}`$ are given by
$$u^{\prime }(ϵ_i^0)=Nt+2\sum _{j\ne i}^{n+1}\frac{1}{ϵ_i^0-ϵ_j^0}.$$
(63)
One can easily show that the Hessian in Eq. (61) is positive definite given that $`u(ϵ)`$ is convex. One can carry the continuum limit further by describing the saddle points as a charged fluid whose dynamics are governed by an Energy Functional $`F[\sigma ]`$
$$\mathrm{exp}\left(-f(ϵ^0)\right)\underset{n,N\to \mathrm{\infty }}{\longrightarrow }\mathrm{exp}\left(-F[\sigma _0]\right),$$
(64)
with a charge density $`\sigma (ϵ)`$ defined on an interval of integration which is to be determined, $`I=(ϵ_{-},ϵ_+)`$. The energy functional takes the following form
$$F[\sigma ]=\int _Idϵ\sigma (ϵ)\left[u(ϵ)-Ntϵ\right]-\int _Idϵ\int _Idϵ^{\prime }\sigma (ϵ)\mathrm{ln}|ϵ-ϵ^{\prime }|\sigma (ϵ^{\prime }),$$
(65)
where the single particle confining potential is controlled by the OPS measure and the two-body interaction is a logarithmic type. The result of minimising this Functional yields the following singular integral equation for the Charge Density
$$u^{\prime }(ϵ)-Nt=2\mathrm{PV}\int _Idϵ^{\prime }\frac{\sigma _0(ϵ^{\prime })}{ϵ-ϵ^{\prime }}.$$
(66)
The solution of this integral equation for the Minimal Charge Density $`\sigma _0(ϵ)`$ can be found exactly and is
$$\sigma _0(ϵ)=\frac{\sqrt{(ϵ_+-ϵ)(ϵ-ϵ_{-})}}{2\pi ^2}\mathrm{PV}\int _Idϵ^{\prime }\frac{u^{\prime }(ϵ^{\prime })-Nt}{(ϵ^{\prime }-ϵ)\sqrt{(ϵ_+-ϵ^{\prime })(ϵ^{\prime }-ϵ_{-})}}.$$
(67)
There are two conditions arising from this solution -
* the first is a Supplementary Condition which is necessary for the charge density solution to be well defined throughout the interval $`I`$
$$0=\int _Idϵ\frac{u^{\prime }(ϵ)-Nt}{\sqrt{(ϵ_+-ϵ)(ϵ-ϵ_{-})}},$$
(68)
* and the Normalisation Condition which simply counts the number of Lanczos steps
$$n=\frac{1}{2\pi }\int _Idϵϵ\frac{u^{\prime }(ϵ)-Nt}{\sqrt{(ϵ_+-ϵ)(ϵ-ϵ_{-})}}.$$
(69)
Using this solution for the charge density one can substitute this into the original defining equations for the Hankel determinants (the leading order approximations) and establish that the Lanczos functions are simply defined by the interval $`I`$ in this way, $`ϵ_\pm =\alpha \pm 2\beta `$.
###### Theorem 6
The Lanczos functions are given implicitly by the two integral equations
$$0=\int _{\alpha -2\beta }^{\alpha +2\beta }dϵ\frac{\xi (ϵ)}{\sqrt{4\beta ^2-(ϵ-\alpha )^2}},$$
(70)
$$s=\frac{1}{2\pi }\int _{\alpha -2\beta }^{\alpha +2\beta }dϵ\frac{ϵ\xi (ϵ)}{\sqrt{4\beta ^2-(ϵ-\alpha )^2}},$$
(71)
where the model dependent equation for the stationary point $`\xi (ϵ)`$ is given by Eq. (49).
This theorem follows from the previous conditions, namely Eqs. (68,69), and the result for the logarithmic derivative of the weight function,
$$u^{}(ϵ)=N\xi (ϵ)+\mathrm{O}(\mathrm{log}N).$$
(72)
$`\mathrm{}`$
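A concrete check of Theorem 6 (our illustration, not from the text): for a Gaussian cumulant generating function $`F(t)=c_1t+c_2t^2/2`$ one has $`\xi (ϵ)=(ϵ-c_1)/c_2`$, and Eqs. (70,71) are solved in closed form by $`\alpha (s)=c_1`$, $`\beta ^2(s)=c_2s`$. The sketch below verifies both conditions by quadrature; the numerical values of $`c_1,c_2,s`$ are arbitrary choices.

```python
import math

def theorem6_residuals(s, c1, c2, alpha, beta):
    """Residuals of Eqs. (70) and (71) for the Gaussian CGF
    F(t) = c1*t + c2*t**2/2, where xi(eps) = (eps - c1)/c2.
    The substitution eps = alpha + 2*beta*sin(theta) removes the
    inverse-square-root endpoint singularity, so a plain midpoint
    rule converges quickly."""
    n = 20000
    h = math.pi / n
    r70 = 0.0  # should vanish (supplementary condition, Eq. 70)
    r71 = 0.0  # should equal s (normalisation condition, Eq. 71)
    for k in range(n):
        th = -math.pi / 2 + (k + 0.5) * h
        eps = alpha + 2.0 * beta * math.sin(th)
        xi = (eps - c1) / c2
        r70 += xi * h
        r71 += eps * xi * h / (2.0 * math.pi)
    return r70, r71 - s

# claimed solution: alpha(s) = c1, beta^2(s) = c2*s
c1, c2, s = 0.7, 1.3, 0.5
res70, res71 = theorem6_residuals(s, c1, c2, alpha=c1, beta=math.sqrt(c2 * s))
```

Perturbing $`\alpha `$ away from $`c_1`$ makes the supplementary condition fail, confirming that the pair $`(\alpha ,\beta )`$ is pinned down by the two equations.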
Usually this latter equation for the saddle point is also an implicit equation and invariably a nonlinear one. In our derivation the scaling in which $`s=n/N`$ remains finite whilst $`n,N\to \mathrm{\infty }`$ emerges naturally, and in fact it is difficult to see how one could avoid this confluence.
We now give an alternative result for the Lanczos functions which is based on the time evolution of the Lanczos $`L`$-function.
###### Theorem 7
The Lanczos $`L`$-function, in the thermodynamic limit is the solution of the following integro-differential equation
$$L(s,t)=\int _0^sdr\,r\,D_t^2\mathrm{log}L(s-r,t)+sF^{(2)}(t),$$
(73)
and the two Lanczos functions are derivable from this via
$$\begin{array}{cc}\hfill \alpha (s)& =\int _0^sdr\,D_t\mathrm{log}L(r,0)+F^{\prime }(0),\hfill \\ \hfill \beta ^2(s)& =L(s,0).\hfill \end{array}$$
(74)
The integro-differential equation is simply derived from the discrete recurrence, namely Eq. (24), after making the observation that the $`j=n`$ term involving $`L_0(t)`$ has to be separated from the sum because it encompasses the initial conditions and is itself not generated by the recurrence.$`\mathrm{}`$
Finally we give a result equivalent to Theorem 7, but which involves only scaled forms of the Hankel determinants $`\mathrm{\Delta }_n(N,t)`$; it is the differential analogue of that theorem.
###### Definition 4
We make the following definition for $`\delta (n,N,t)`$ in terms of the Hankel Determinant,
$$\mathrm{\Delta }_n(N,t)=N^{n(n+1)}\left[\delta (n,N,t)\right]^{N^2},$$
(75)
for $`n\ge 1`$ and $`\mathrm{\Delta }_0(t)=[\delta (0,t)]^N`$.
###### Lemma 3
The function $`\delta (n,N,t)`$ is well defined in the scaling limit $`n,N\to \mathrm{\infty }`$.
This follows naturally from the relation of the $`\mathrm{\Delta }_n(t)`$ and the Lanczos $`L`$-function as given in Eq. (22), and the well defined scaling of this latter function as demonstrated in the Theorem 3 above. $`\mathrm{}`$
Then we have the following result -
###### Theorem 8
The Lanczos $`\delta (s,t)`$-function satisfies the following partial differential equation in the thermodynamic limit
$$\mathrm{exp}\left\{D_s^2\mathrm{log}\delta (s,t)\right\}=D_t^2\mathrm{log}\delta (s,t),$$
(76)
with the boundary condition
$$\underset{s\to 0^+}{\mathrm{lim}}\frac{\mathrm{log}\delta (s,t)}{s}=F(t),\qquad t\in \mathbb{R}^+.$$
(77)
The Lanczos functions are given by
$$\begin{array}{cc}\hfill \alpha (s)& =D_tD_s\mathrm{log}\delta (s,t)|_{t=0},\hfill \\ \hfill \beta ^2(s)& =\mathrm{exp}\left\{D_s^2\mathrm{log}\delta (s,0)\right\}.\hfill \end{array}$$
(78)
Using the scaling relation above, Eq. (75), and the equation of motion for $`\mathrm{\Delta }_n(t)`$, Eq. (27), the result follows. $`\mathrm{}`$
These last two theorems relate to the dynamics of a nonlinear continuum Toda Lattice in one space domain $`s\in \mathbb{R}^+`$ and one time domain $`t`$, with boundary conditions defined at the origin $`s=0`$ for all times $`t`$ by the cumulant generating function $`F(t)`$. The object is then to find the Lanczos functions $`\alpha (s),\beta ^2(s)`$ from a solution of this system, wherein these functions are directly related to the solution at a given time $`t=0`$ over all spatial points $`s>0`$.
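For the Gaussian case the continuum Toda equation (76) can be integrated explicitly. Writing $`\mathrm{log}\delta (s,t)=sF(t)+g(s)`$ with $`g^{\prime \prime }(s)=\mathrm{log}(c_2s)`$ gives $`g(s)=\frac{1}{2}s^2\mathrm{log}(c_2s)-\frac{3}{4}s^2`$, which satisfies the boundary condition (77) and reproduces $`\alpha (s)=c_1`$, $`\beta ^2(s)=c_2s`$ through Eq. (78). This closed form is our own integration, offered as a consistency sketch rather than a result stated in the text; a finite-difference check:

```python
import math

c1, c2 = 0.5, 2.0  # arbitrary Gaussian CGF parameters

def log_delta(s, t):
    # log delta(s,t) = s*F(t) + g(s), with F(t) = c1*t + c2*t*t/2 and
    # g(s) = (s*s/2)*log(c2*s) - 0.75*s*s, so that g''(s) = log(c2*s)
    # and hence exp(D_s^2 log delta) = c2*s = s*F''(t) = D_t^2 log delta.
    F = c1 * t + 0.5 * c2 * t * t
    g = 0.5 * s * s * math.log(c2 * s) - 0.75 * s * s
    return s * F + g

def d2(f, x, h=1e-4):
    # central second difference
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

s0, t0 = 1.0, 0.3
lhs = math.exp(d2(lambda s: log_delta(s, t0), s0))  # exp(D_s^2 log delta)
rhs = d2(lambda t: log_delta(s0, t), t0)            # D_t^2 log delta
```

Both sides equal $`c_2s_0`$ here, and $`\mathrm{log}\delta (s,t)/s\to F(t)`$ as $`s\to 0^+`$, as Eq. (77) requires.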
## VI. The Taylor Series Expansion
The investigation of the Taylor series expansion of the Lanczos coefficients about $`s=0`$ is an essential element in the application of this Lanczos method when, as indicated earlier, one has only a finite set of low order cumulants available, say for non-integrable models. In this case one can only construct a truncated Taylor series expansion, so issues concerning convergence, the radius of convergence of the series, and whether one can extrapolate immediately arise. In addition one would like a direct algorithm relating the cumulants to the Lanczos functions from a purely practical point of view.
We define the Taylor series expansion of the two Lanczos functions by two new sequences of coefficients,
$$\begin{array}{cc}\hfill \alpha (s)& =c_1+\sum _{n=0}^{\mathrm{\infty }}a_ns^{n+1},\hfill \\ \hfill \beta ^2(s)& =\sum _{n=0}^{\mathrm{\infty }}b_ns^{n+1}.\hfill \end{array}$$
(79)
In order to find these coefficients one could use either of the two general solutions for the Lanczos process, Eqs. (70,71) or Eq. (73), and the two methods are presented below.
The first step involves the inversion of the following Taylor series expansion
$$ϵ=c_1+\sum _{n=1}^{\mathrm{\infty }}\frac{c_{n+1}}{n!}\xi ^n,$$
(80)
for $`\xi (ϵ)`$, namely the coefficients $`e_k`$ appearing in
$$\xi =\sum _{k=1}^{\mathrm{\infty }}e_k(ϵ-c_1)^k.$$
(81)
The coefficients $`c_n`$ appearing in Eq. (80) are the cumulant coefficients. The existence of this inverse function is guaranteed because the second cumulant $`c_2>0`$ in all systems and we assume that the saddle point function, Eq. (49), is analytic in the neighbourhood of $`\xi =0`$. The next step involves the solution of the two recurrences
$$\begin{array}{cc}\hfill 0& =\sum _{k=1}^{\mathrm{\infty }}e_k\sum _{m=0}^{k/2}\left(\genfrac{}{}{0pt}{}{k}{2m}\right)\frac{(1/2)_m}{m!}(\alpha -c_1)^{k-2m}(4\beta ^2)^m,\hfill \\ \hfill 2s& =\sum _{k=1}^{\mathrm{\infty }}e_k\sum _{m=0}^{(k-1)/2}\left(\genfrac{}{}{0pt}{}{k}{2m+1}\right)\frac{(1/2)_{m+1}}{(m+1)!}(\alpha -c_1)^{k-2m-1}(4\beta ^2)^{m+1},\hfill \end{array}$$
(82)
which are used to solve for the coefficients $`a_n,b_n`$ appearing in Eq. (79).
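The inversion of Eq. (80) for the coefficients $`e_k`$ of Eq. (81) is ordinary power-series reversion. A minimal sketch (our implementation, not an algorithm specified in the text) iterates the fixed-point form $`\xi =(y-\sum _{n\ge 2}d_n\xi ^n)/d_1`$, with $`y=ϵ-c_1`$ and $`d_n=c_{n+1}/n!`$, truncated at a fixed order:

```python
from math import factorial

def invert_series(c, order):
    """Reversion of Eq. (80): given cumulants c = [c1, c2, c3, ...],
    write y = eps - c1 = sum_{n>=1} d_n xi**n with d_n = c_{n+1}/n!,
    and return [e_1, ..., e_order] with xi = sum_k e_k y**k (Eq. 81).
    A series here is a list of coefficients of y**1, y**2, ..."""
    d = [c[n] / factorial(n) for n in range(1, order + 1)]
    assert d[0] > 0  # c2 > 0 guarantees local invertibility
    e = [1.0 / d[0]] + [0.0] * (order - 1)
    for _ in range(order):  # each sweep fixes one more order
        # truncated powers of the current approximation: pw[n-1] ~ xi**n
        pw = [e[:]]
        for _n in range(2, order + 1):
            nxt = [0.0] * order
            for i, ai in enumerate(pw[-1]):
                for j, bj in enumerate(e):
                    if i + j + 1 < order:
                        nxt[i + j + 1] += ai * bj
            pw.append(nxt)
        # fixed-point update: xi = (y - sum_{n>=2} d_n xi**n) / d_1
        e_new = [1.0 / d[0]] + [0.0] * (order - 1)
        for n in range(2, order + 1):
            for k in range(order):
                e_new[k] -= d[n - 1] * pw[n - 1][k] / d[0]
        e = e_new
    return e

# known check: c_{n+1} = 1 for all n gives y = exp(xi) - 1,
# hence xi = log(1 + y) and e_k = (-1)**(k-1)/k
e = invert_series([0.0] + [1.0] * 8, 6)
```

With the $`e_k`$ in hand, the recurrences (82) can be solved order by order for the $`a_n,b_n`$ of Eq. (79).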
In the second method we define a continuum version of the coefficients that are defined in Eq. (33) in the following way
$$\mathrm{log}\frac{L(s,t)}{sl_1(t)}=\mathrm{log}\left(1+\sum _{p\ge 1}\frac{l_{p+1}}{l_1}s^p\right)\equiv \sum _{p\ge 1}m_p(t)s^p,$$
(83)
and the inverse of Eq. (34) in an explicit form
$$\frac{l_{p+1}}{l_1}=\sum _{\sum _iq_ir_i=p}\prod _i\frac{1}{q_i!}m_{r_i}^{q_i}.$$
(84)
From these relations one can find a hierarchy of equations for these coefficients
$$\begin{array}{cc}\hfill l_1(t)& =F^{\prime \prime }(t),\hfill \\ \hfill l_2(t)& =\frac{F^{(2)}F^{(4)}-(F^{(3)})^2}{2(F^{(2)})^2},\hfill \\ \hfill l_{p+2}(t)& =\frac{m_p^{\prime \prime }(t)}{(p+2)(p+1)}=l_1(t)\sum _{\sum _iq_ir_i=p+1}\prod _i\frac{m_{r_i}^{q_i}}{q_i!}\text{for }p\ge 1.\hfill \end{array}$$
(85)
Thus one can verify from the solution for the initial value problem above that the general Taylor series coefficients are given by
$$\begin{array}{cc}\hfill [(n+1)!]^2c_2^{3n+1}a_n& =\sum _{\lambda \vdash 2n+1}A(n;\lambda )\prod _{i=0}^{2n+1}c_{2+i}^{a_i},\hfill \\ \hfill (n+1)!n!c_2^{3n-1}b_n& =\sum _{\lambda \vdash 2n}B(n;\lambda )\prod _{i=0}^{2n}c_{2+i}^{a_i},\hfill \end{array}$$
(86)
where the coefficients labeled by the partition $`\lambda =(1^{a_1}2^{a_2}\mathrm{\cdots }i^{a_i})`$, denoted by $`A(n;\lambda ),B(n;\lambda )`$, are listed in Table(1) of the Appendix. There are constraints operating in the above equations, namely $`\sum _{i=1}^{2n+1}ia_i=\sum _{i=0}^{2n+1}a_i=2n+1`$ for the first relation and $`\sum _{i=1}^{2n}ia_i=\sum _{i=0}^{2n}a_i=2n`$ for the second.
Clearly the Taylor series expansion of the Lanczos functions has low order coefficients which are constructed from the low order cumulants, and is a form of a linked cluster expansion. However it is not just a simple linked cluster expansion as in the Taylor series expansion of the Cumulant Generating Function, but involves a subtle interplay and cancellation of all cumulants below a given order.
## VII. General Properties
There are some very general properties that the Lanczos process in the thermodynamic limit and the associated Lanczos functions satisfy, and we examine these now. Some are quite obvious and not particularly surprising, but we state them for completeness' sake; others are less immediate yet very important nevertheless.
The next, and natural, property concerns the monotonicity of the two envelope functions $`ϵ_\pm (s)=\alpha (s)\pm 2\beta (s)`$.
###### Theorem 9
The envelope functions $`ϵ_+(s),ϵ_{}(s)`$ are monotonically increasing and decreasing functions of real, positive $`s`$ respectively.
This follows from a recasting of the normalisation condition in the following way
$$2\pi s=\int _{\xi _{-}}^{\xi _+}d\xi \sqrt{[ϵ(\xi _+)-ϵ(\xi )][ϵ(\xi )-ϵ(\xi _{-})]},$$
(87)
where the $`\xi _\pm `$ are defined by $`ϵ(\xi _\pm )=ϵ_\pm `$. Now it is straightforward to write the explicit forms for the derivatives of the envelope functions with respect to $`s`$ as
$$\begin{array}{cc}\hfill \frac{dϵ_+}{ds}& =4\pi /\int _{\xi _{-}}^{\xi _+}d\xi \sqrt{\frac{ϵ(\xi )-ϵ(\xi _{-})}{ϵ(\xi _+)-ϵ(\xi )}},\hfill \\ \hfill \frac{dϵ_{-}}{ds}& =-4\pi /\int _{\xi _{-}}^{\xi _+}d\xi \sqrt{\frac{ϵ(\xi _+)-ϵ(\xi )}{ϵ(\xi )-ϵ(\xi _{-})}},\hfill \end{array}$$
(88)
so that the stated properties are evident.$`\mathrm{}`$
It is clear that the envelope functions $`ϵ_\pm (s)`$ are bounded in the following ways, $`ϵ_{-}(s)\ge ϵ_0`$ and $`ϵ_+(s)\le ϵ_{\mathrm{\infty }}`$.
The 3-term recurrence which serves as one of the definitions of the Orthogonal Polynomials themselves is now going to take a definite limiting form when $`n,N\to \mathrm{\infty }`$ such that $`s`$ is finite. This is going to lead to a scaling form for one set of the Polynomials themselves, which would be more correctly termed orthogonal functions $`p(s,ϵ)`$. Heuristically one can see how this arises by the following argument. If one ensures that Lanczos densities are employed and the following scaling of the polynomials thus $`P_n(E)=N^np_n(ϵ)`$, then the 3-term recurrence becomes
$$p_{n+1}(ϵ)/p_n(ϵ)+\beta _n^2\frac{1}{p_n(ϵ)/p_{n-1}(ϵ)}=ϵ-\alpha _n.$$
(89)
Now these ratios are approximated by
$$\frac{p_{n+1}(ϵ)}{p_n(ϵ)}\approx \mathrm{exp}\left(\frac{1}{N}\frac{\partial }{\partial s}\mathrm{ln}p(s,ϵ)\right),$$
(90)
for arguments $`ϵ\in \mathbb{C}\backslash \mathrm{Supp}[d\rho ]`$. So that in the asymptotic regime the recurrence becomes
$$\mathrm{exp}\left(\frac{1}{N}\frac{\partial }{\partial s}\mathrm{ln}p(s,ϵ)\right)+\beta ^2(s)\mathrm{exp}\left(-\frac{1}{N}\frac{\partial }{\partial s}\mathrm{ln}p(s,ϵ)\right)\approx ϵ-\alpha (s),$$
(91)
whose solutions are
$$p^\pm (s,ϵ)\sim p(0)\mathrm{exp}\left\{N\int ^sdt\,\mathrm{ln}\frac{1}{2}\left[ϵ-\alpha (t)\pm \sqrt{(ϵ-\alpha (t))^2-4\beta ^2(t)}\right]\right\}.$$
(92)
These are the analogues of the ratio $`P_n(x)/P_{n+1}(x)`$ or n-th root $`\sqrt[n]{P_n(x)}`$ asymptotics of generic Orthogonal Polynomials as $`n\to \mathrm{\infty }`$, or of the scaled Orthogonal Polynomials, but are rather different due to the particular nature of Many-Body Orthogonal Polynomials.
###### Theorem 10
Given the scaling behaviour of the Lanczos coefficients, and that they are bounded for $`n,N\to \mathrm{\infty }`$, the n-th root of the denominator Orthogonal Polynomials $`p_n(ϵ)`$ has the limiting form, uniformly for $`ϵ`$ in compact subsets of $`\mathbb{C}\backslash \mathrm{Supp}[d\rho ]`$,
$$p(s,ϵ)\equiv \underset{n,N\to \mathrm{\infty }}{\mathrm{lim}}|p_n(N,ϵ)|^{1/N}=\mathrm{exp}\left\{\int _0^sdt\,\mathrm{ln}\frac{1}{2}\left[ϵ-\alpha (t)+\sqrt{(ϵ-\alpha (t))^2-4\beta ^2(t)}\right]\right\}$$
(93)
The proof of this parallels the one constructed by van Assche in Ref. through the use of Turán Determinants,
$$D_n\equiv p_n^2-p_{n+1}p_{n-1}$$
(94)
One can show that these obey the following recurrence relation
$$D_n=\beta _n^2D_{n-1}+(\alpha _n-\alpha _{n-1})p_np_{n-1}+(\beta _n^2-\beta _{n-1}^2)p_np_{n-2}$$
(95)
Using the partial fraction decomposition of the ratio of two successive Orthogonal Polynomials one can also find a bound on this ratio
$$\left|\frac{p_{n-1}(ϵ)}{p_n(ϵ)}\right|\le \frac{C}{d}\qquad \forall n$$
(96)
for all $`ϵ\in K`$ where the compact set $`K\subset \mathbb{C}\backslash \mathrm{Supp}[d\rho ]`$ and $`d`$ is the distance between this set and the interval $`[ϵ_0,ϵ_{\mathrm{\infty }}]`$, and $`C`$ is a positive constant. Using Eq. (95) we have
$$\left|\frac{D_n}{p_n^2}\right|\le \underset{n}{sup}(\beta _n^2)\frac{C^2}{d^2}\left|\frac{D_{n-1}}{p_{n-1}^2}\right|+|\alpha _n-\alpha _{n-1}|\frac{C}{d}+|\beta _n^2-\beta _{n-1}^2|\frac{C^2}{d^2}$$
(97)
Given the scaling form of the Lanczos coefficients the ratio $`|D_n/p_n^2|\to 0`$ as $`n,N\to \mathrm{\infty }`$ uniformly in $`ϵ`$ whenever $`d`$ is large enough. This means that $`|p_{n-1}/p_n|`$ and $`|p_n/p_{n+1}|`$ tend to the same accumulation point which we denote by $`p(s,ϵ)`$. This point is given by the solution of the quadratic equation $`p+\beta ^2(s)/p=ϵ-\alpha (s)`$, and the positive branch of the solution must be taken as $`p\to \mathrm{\infty }`$ when $`ϵ\to \mathrm{\infty }`$. The functions $`p(s,ϵ)`$ are analytic functions of $`ϵ\in K`$ which are uniformly bounded, so the restriction on $`d`$ can be lifted to being only non-zero. The behaviour of the n-th ratio then gives the n-th root behaviour directly as
$$\left|p_n\right|^{1/N}=\mathrm{exp}\left\{\frac{1}{N}\sum _{k=1}^n\mathrm{log}\left|\frac{p_k(ϵ)}{p_{k-1}(ϵ)}\right|\right\}$$
(98)
The asymptotic behaviour that we have found applies to the denominator OP only, as can be seen from the observation that $`p_1=ϵ-c_1`$ and $`p_2=(ϵ-c_1)^2-\frac{c_3}{c_2N}(ϵ-c_1)-\frac{c_2}{N}`$, while
$$\frac{1}{2}\left[ϵ-\alpha (s)+\sqrt{(ϵ-\alpha (s))^2-4\beta ^2(s)}\right]\underset{s\to 0}{\longrightarrow }\frac{1}{ϵ-c_1}\left((ϵ-c_1)^2-\frac{c_3}{c_2N}(ϵ-c_1)-\frac{c_2}{N}\right).$$
(99)
This establishes the result.$`\mathrm{}`$
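Theorem 10 can be probed numerically in the Gaussian case $`\alpha (s)=0`$, $`\beta ^2(s)=c_2s`$ (so $`\alpha _n=0`$, $`\beta _n^2=c_2n/N`$, scaled Hermite polynomials). Working with the ratios $`r_k=p_k/p_{k-1}`$ keeps the recurrence free of overflow; the parameter values $`ϵ=5`$, $`s=1/2`$, $`N=400`$ below are illustrative choices of ours:

```python
import math

def nth_root_asymptotics(eps, s, N, c2=1.0):
    """Compare (1/N)*log|p_n(eps)| from the 3-term recurrence
    (alpha_n = 0, beta_n^2 = c2*n/N) with the integral of Theorem 10,
    for eps outside the support of the limiting measure."""
    n = int(s * N)
    # ratio recurrence r_{k+1} = eps - beta_k^2 / r_k, r_1 = eps,
    # and log|p_n| = sum_{k=1}^{n} log|r_k| (no overflow)
    r = eps
    log_pn = math.log(abs(r))
    for k in range(1, n):
        r = eps - (c2 * k / N) / r
        log_pn += math.log(abs(r))
    finite_N = log_pn / N
    # limiting integral: int_0^s dt log( (eps + sqrt(eps^2 - 4*c2*t)) / 2 )
    m = 20000
    h = s / m
    integral = sum(
        math.log(0.5 * (eps + math.sqrt(eps * eps - 4.0 * c2 * (j + 0.5) * h)))
        for j in range(m)
    ) * h
    return finite_N, integral

fN, lim = nth_root_asymptotics(eps=5.0, s=0.5, N=400)
```

The finite-$`N`$ value and the limiting integral agree to a few parts in a thousand at these sizes, consistent with the $`\mathrm{O}(1/N)`$ corrections one expects.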
## VIII. Summary
In this work we have demonstrated the general scaling behaviour of the Lanczos Process as applied to Many-Body Systems when the process is taken to convergence and the thermodynamic limit taken. We also find explicit constructions of the limiting Lanczos coefficients in two equivalent formulations, from an initial exact solution of the moment problem, that is to say the cumulant generating function for the system. There are explicit examples where the CGF can be found and the whole Lanczos process explicitly realised. Furthermore we have given the corresponding results for the associated Orthogonal Polynomial system and the measure in this regime, quite generally. However we must emphasise that these results apply only to the bulk properties, that is to say the ground state properties that scale extensively and the spectral properties in the interior (the "bulk") of the spectrum. So this does not include the delicate scaling behaviour at the edges of the spectrum, nor in the neighbourhood of singularities; this theory would have to be extended to treat the excited state gaps near the bottom of the spectrum. A number of general theorems are given which constrain the behaviour of the Lanczos functions, and the process in general. We also indicate how several such constraints in combination can lead to some concrete realisations or scenarios that the Lanczos process can present, namely its behaviour at a critical point in the model under study. This is a significant step on the way to the goal of a rigorous classification of Many-Body Systems in terms of their character via the Lanczos process. Other important questions that arise in the treatment of non-integrable models, for which the general results presented here have suggested some answers, are the questions of the choice of trial state, the rate of convergence of the truncated Lanczos process and how one might accelerate its convergence given some independent qualitative knowledge.
Acknowledgements
One of the authors (NSW) would like to acknowledge the support of an Australian Research Council Large Grant whilst this work was performed, and the hospitality of Service de Physique Théorique, Centre d'Etudes Nucléaires de Saclay.
## Appendix
We list here the coefficients of the Taylor series expansion for the Lanczos Coefficients, labelled by the partitions of integers, according to the definition of Eq.(86).
# Physics of rapidly expanding supercritical solutions: A first approach
## I Introduction
The use of supercritical solutions to enhance processes in the chemical industry dates back several decades. Their utility stems from the fact that supercritical fluids have a relatively high diffusivity, which promotes mixing on the microscopic level, and their density is high, allowing relatively high solubilities. Furthermore, the ability to selectively dissolve substances by varying thermodynamic parameters like pressure and temperature is also a big reason why supercritical solutions are used in chemical processing. In addition, the environmentally benign nature of some supercritical solvents, e.g. carbon dioxide, encourages their use in industry.
In many applications, such as pharmaceutical processing or ceramics processing, rapid expansion of supercritical solutions (RESS) has been promoted due to the control available on the morphology of the end-product. For example, delivery of drugs within the human body may be enhanced if the chemicals coalesce to a certain particle size. In ceramics processing, fine control over particle size may lead to products with enhanced strength.
Surprisingly, despite its industrial importance, little effort appears to have been devoted to understanding the dynamics of the RESS process. This process typically consists of expanding a supercritical solution, which is under pressure, through a capillary-sized nozzle into the ambient atmosphere. Depending on the nozzle design, the fluid velocity as it exits the nozzle into the ambient atmosphere can be either sub-sonic or supersonic. In fact, unless the nozzle is designed carefully, the exit velocity will be sub-sonic, in which case the subsequent flow will be subsonic and begin slowing down in the expanding geometry. This can lead to a complicated flow, where the downstream state can influence the flow upstream into the nozzle region. We therefore intend to focus on those processes where the fluid exits the nozzle at sonic speed. In this case, the fluid accelerates as it exits the nozzle, and its pressure drops until vaporization sets in. This causes the solute to begin precipitating out in the form of micron-sized particles. The flow then decelerates as the liquid transforms into a multi-phase spray and drag effects set in. Currently, empiricism is used to determine maximal operating conditions for the RESS process. While this approach may be a practical way to solve the problems of the day, we point out that the RESS process, despite its apparent simplicity, involves an enormously large range of physics. This physics has largely remained unexplored. In fact, the characteristic information (or lack thereof) conveyed by photographs of the process is a fuzzy conical plume of presumably fine particles emanating from the orifice of a high pressure chamber, and this provides minimal knowledge of the basic phenomena which lead to the precipitation of the solute.
There are indeed experimental papers that study evaporation waves in liquids, but the focus of those investigations has been super-heated, sub-critical fluids, rather than supercritical fluids, which we consider here. It is hoped that our paper will spawn experimental efforts in the supercritical fluid arena, from which verification of our theoretical results may be sought.
For conceptual ease, we can divide the physical problem into stages:
First of all the solvent, which is in the supercritical state, undergoes a transition to a vapor-like state when the fluid enters an expansion regime in which its equation of state in $`PV`$ space ($`P`$ is the pressure, $`V=1/\rho `$, $`\rho `$ being the density) becomes concave towards the origin. This transition is a rarefaction or expansion shock. As the solvent vaporizes, the solute begins to precipitate. Thus, it is important to know the location and shape of this expansion shock, as it is the driving mechanism for the subsequent precipitation of solid particles.
Secondly, the RESS process should be viewed more as an anti-detonation wave, which we refer to here as a vaporization wave. In this vaporization wave, heat of formation is taken away from the mechanical motion as the liquid transforms into vapor. While the process of the solvent transforming to a vapor is a relatively fast process, the subsequent mechanism of particle formation may take place over a spatially extended zone. The extent of the vaporization wave depends strongly on the processes that can transport heat in and out of this thin zone from both the fluid and the gas, as well as the details of the nucleation and aggregation process. For a stable operation, the expansion shock in our case must be stationary.
Furthermore:
Just as detonation waves have modes of instability, it is important to investigate the stability of this vaporization wave. This aspect of the study is quite important, as it will affect the subsequent production of particles.
Depending on the local flow conditions, the expansion shock front produced in the expanding flow can lead to the formation of a Mach disk. This Mach configuration will be different from the usual ones in that we have at hand colliding oblique expansion shocks, rather than the usual compressive strong shocks.
Since the initial conditions that define the morphology of the precipitated solid are set in the region where the dynamic phase transition occurs from a condensed to a vapor-like phase, it is important to be able to model the complex physics of supercritical fluid expansion well. As far as we are aware, there is no computational fluid dynamics (CFD) code which contains a comprehensive physics package capable of providing insight into the phenomena we wish to investigate. This combined with problems of resolving the thinness of the multi-phase vaporization zone suggests that another approach is desirable.
To begin our investigation of this wide range of phenomena, we shall focus in this paper on the calculation of the shape and location of the expansion shock in the case of a pure solvent (supercritical carbon dioxide), leaving for future work the theory of how the solute precipitates beyond this expansion shock. And we shall examine briefly the stability of the expansion shock.
Expansion shocks are a relatively poorly studied phenomenon compared to compressive shocks. Their thermodynamics were first studied by Hans Bethe about fifty years ago. The behavior of the so-called Bethe-Zeldovich-Thomson (BZT) fluids has been studied by a few researchers in the past, and most of the work has, until recently, focused on one-dimensional problems. Only a few investigations have been reported in two dimensions. And of these, none have reported on the particular geometry which is most likely to be used in the RESS process, viz., a pinhole in a high pressure chamber. Our goal is to present an analytic approach to this problem, which we believe sheds useful light on the RESS process. It also lays the groundwork for future numerical work which must follow, and which will hopefully eliminate some of the approximations which we have employed.
In order to track the shape and location of the expansion shock, we propose to use level-set methods, which have proven their efficacy in problems involving the propagation of detonation waves in chemical explosives. It is interesting to note that a precursor of this front tracking technique was considered several decades ago by Hans Bethe, during his investigations of the atomic blast wave.
## II The physical criterion for an expansion shock in a flowing supercritical fluid
Let us begin by considering the specific geometry of the nozzle through which the supercritical fluid is emerging. In an ideal situation, one would consider a deLaval nozzle, with a smoothly varying converging portion, connected at the neck to a smoothly expanding channel. Such nozzles have been considered in the past, with the main result that for the ideal case when the fluid goes sonic at the throat of the nozzle, there is subsequent expansion, and an expansion shock occurs some distance past the throat, in the expanding portion of the nozzle.
In most practical cases, it is more likely that the nozzle is basically a pinhole (orifice) in a high pressure chamber filled with the supercritical fluid, so that the interior walls of the chamber leading up to the orifice can be considered as a converging nozzle, while the exterior walls represent the expanding portion of the nozzle. The fluid exits through the orifice. Our plan is to compute the flow of supercritical carbon dioxide just outside this orifice. It is always possible in principle to achieve a state where the Mach number is precisely unity exiting the orifice. When this occurs, the subsequent flow outside the orifice (to be thought of as a flow in a diverging channel) is supersonic. It is this particular case of supersonic flow past the orifice which we will consider here. In the case of sub-sonic flow exiting the orifice, the problem is actually more difficult, since the upstream and downstream states of the fluid can communicate via sound waves, whereas, for supersonic flow, for obvious reasons, this difficulty is not present.
As the supercritical fluid exits the orifice at Mach one, it speeds up, expands, and as the pressure drops, it enters a regime where the equation of state becomes concave. As Bethe showed many years ago, this is the signal that an expansion shock can occur. Across the shock, the fluid goes from a condensed supercritical state to a vapor-like state. Such a case is denoted schematically in Fig.1, where two adiabats for supercritical carbon dioxide are displayed. The expansion shock is denoted schematically as a transition from (0) to (1). The physical model we have chosen is that the fluid undergoes an expansion shock as soon as it enters the concave portion of the equation of state at (0). We shall assume that the shock is sufficiently strong that state (1) lies in the regular convex region. The state (1) could lie in the concave region if the shock were sufficiently weak. It would then decay further via subsequent expansion shock(s) into lower density states.
To calculate the adiabats, we used the van der Waals equation of state (VdW EOS). The VdW EOS is a cubic EOS, and represents the simplest way of describing a first order phase transition. Its adiabatic form can be written down analytically:
$$\left(P+\frac{a}{V^2}\right)\left(V-b\right)^{R/C_v+1}=constant$$
(1)
where $`V=1/\rho `$, $`\rho `$ is the density, $`P`$ is the pressure, $`a`$ and $`b`$ are the usual van der Waals parameters, $`R=8.314\times 10^7\mathrm{erg}\mathrm{mole}^{-1}\mathrm{K}^{-1}`$ is the universal gas constant, and $`C_v`$ is the specific heat, which is quite large near the critical point ($`\sim 50R`$). For carbon dioxide, in c.g.s. units, $`a=3.959\times 10^{12}\mu ^{-2}`$, $`b=42.69\mu ^{-1}`$, where $`\mu =44`$ is the molecular weight of carbon dioxide. The critical pressure of carbon dioxide is $`73.8`$ bars, and the critical temperature is $`31^{\circ}C`$.
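As a concrete illustration, the VdW adiabat above can be evaluated numerically. The sketch below uses per-gram c.g.s. values inferred from the parameters quoted above; the near-critical starting density is an assumed value, not taken from the text. It checks that the pressure falls as the fluid expands along an adiabat:

```python
# Van der Waals parameters for CO2 in per-gram c.g.s. units (values from the
# text; the 1/mu**2 and 1/mu scalings are assumptions of this sketch).
MU = 44.0                      # molecular weight of CO2 (g/mol)
A_VDW = 3.959e12 / MU**2       # erg cm^3 g^-2
B_VDW = 42.69 / MU             # cm^3 g^-1
R_GAS = 8.314e7 / MU           # specific gas constant, erg g^-1 K^-1
CV = 50.0 * 8.314e7 / MU       # specific heat ~50R near the critical point

def adiabat_constant(P, rho):
    """Invariant (P + a/V^2)(V - b)^(R/Cv + 1) along a VdW adiabat."""
    V = 1.0 / rho
    return (P + A_VDW / V**2) * (V - B_VDW) ** (R_GAS / CV + 1.0)

def pressure_on_adiabat(rho, K):
    """Pressure at density rho on the adiabat labelled by constant K."""
    V = 1.0 / rho
    return K / (V - B_VDW) ** (R_GAS / CV + 1.0) - A_VDW / V**2

# Start near the critical pressure of CO2 (79 bar = 7.9e7 dyn/cm^2); the
# density 0.47 g/cm^3 is an assumed near-critical value.
P0, rho0 = 7.9e7, 0.47
K = adiabat_constant(P0, rho0)
# pressure drops as the fluid expands (rho decreases) along the adiabat
```

Expanding to 90% of the starting density already gives a noticeably lower pressure, as expected on the condensed branch of the adiabat.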
The actual flashing/vaporization event will be described by Chapman-Jouguet jump conditions. For our problem, the Chapman-Jouguet jump conditions describe an anti-detonation, in which energy must be supplied to the molecules to break bonds and form vapor.
## III Prandtl-Meyer flow near the orifice
We shall begin our calculation of the shape and location of the vaporization front by considering how the supercritical fluid, flowing at supersonic velocities, expands just as it emerges from the orifice. The next two figures show schematically the geometry we used. The first of these simply shows the orifice, and the second zooms in on the half space we considered in the immediate vicinity of the corner representing the exit at the orifice.
Thus this looks just like the Prandtl-Meyer flow problem, but applied to a non-ideal fluid (supercritical carbon dioxide) obeying a cubic EOS. As is done for the Prandtl-Meyer problem, we shall use cylindrical geometry, with the axis of the cylinder coming out of the paper, and the only independent variable is the azimuthal angle $`\varphi `$ which describes the turning of the flow around the corner. The standard equations which need to be solved to obtain the profile of the pressure versus the turning angle are given below:
$`{\displaystyle \frac{du_r(\varphi )}{d\varphi }}`$ $`=a(\rho (\varphi ))`$ (2)
$`\rho (\varphi )a(\rho (\varphi ))\left({\displaystyle \frac{da(\rho (\varphi ))}{d\varphi }}+u_r(\varphi )\right)`$ $`=-{\displaystyle \frac{dP(\rho (\varphi ))}{d\varphi }}`$ (3)
where $`u_r`$ is the radial component of the velocity, $`a`$ is the adiabatic speed of sound, $`P`$ is the pressure, given by the adiabatic form of the equation of state stated earlier. As is done conventionally, we have assumed that near the corner, the hydrodynamic variables depend only on the azimuthal angle $`\varphi `$.
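A minimal numerical integration of Eqs. (2)-(3) can be sketched as follows. For brevity this uses a polytropic adiabat as a stand-in for the VdW EOS used in the paper (the index, adiabat constant, and units are illustrative assumptions); for such an EOS the pair of equations reduces to two first-order ODEs in the turning angle, marched here with a fourth-order Runge-Kutta scheme from the sonic state:

```python
import math

GAMMA = 1.3          # polytropic index (illustrative stand-in for the VdW adiabat)
K_POLY = 1.0         # adiabat constant in P = K * rho**GAMMA (arbitrary units)

def sound_speed(rho):
    return math.sqrt(GAMMA * K_POLY * rho ** (GAMMA - 1.0))

def rhs(state):
    """Prandtl-Meyer equations reduced to (du_r/dphi, drho/dphi)."""
    u_r, rho = state
    a = sound_speed(rho)
    # du_r/dphi = a ;  the momentum equation collapses, for a polytropic EOS,
    # to drho/dphi = -2*rho*u_r / ((GAMMA + 1) * a)
    return a, -2.0 * rho * u_r / ((GAMMA + 1.0) * a)

def integrate(phi_max, n=2000):
    """RK4 march in the turning angle phi, starting sonic (M = 1, u_r = 0)."""
    u_r, rho = 0.0, 1.0
    h = phi_max / n
    for _ in range(n):
        k1 = rhs((u_r, rho))
        k2 = rhs((u_r + 0.5 * h * k1[0], rho + 0.5 * h * k1[1]))
        k3 = rhs((u_r + 0.5 * h * k2[0], rho + 0.5 * h * k2[1]))
        k4 = rhs((u_r + h * k3[0], rho + h * k3[1]))
        u_r += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        rho += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return u_r, rho

u_r, rho = integrate(0.5)   # turn the flow by 0.5 rad
# the fluid accelerates radially while the density (and pressure) drop
```

For the polytropic case the exact fan solution is known in closed form, which provides a convenient accuracy check on the integrator.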
The pressure profile is given in Fig. 4. The dashed line in Fig. 4 indicates the maximum turning angle possible, corresponding to the entrance of the fluid into the concave portion of the equation of state, as discussed above.
We notice two interesting facts about the state of supercritical carbon dioxide as it turns the corner:
- For the case of a polytropic gas, the Prandtl-Meyer fan terminates at the angle for which the pressure goes to zero. However, we are considering a condensed fluid, which can undergo an expansion shock as it enters the concave portion of the equation of state, before it can achieve negative pressure.
- Secondly, the maximum turning angle is close to $`90^{\circ}`$. This turning angle is consistent with a nearly spherical flash front.
As we increase the pressure at which supercritical carbon dioxide leaves the orifice, the maximum angle of turning before an expansion shock can occur begins to decrease. This maximum turning angle will also decrease if we consider that the fluid might have a radial component of the velocity, so that the speed of the fluid exiting the orifice has Mach number greater than one, and it expands at a faster rate. Such a possibility could be achieved by flaring the exit of the orifice.
The next step will be to take this Prandtl-Meyer type of calculation, which applies only in the immediate neighborhood of the corner, and extend it using Whitham’s front-tracking technique to distances farther from the corner we just looked at, in order to get a more complete picture of the flash front.
## IV An introduction to Whitham’s front-tracking method.
We have a supercritical fluid emerging from an orifice into a much larger space at Mach one. As this fluid speeds up, it expands, and flashes to a vapor state across some surface which we shall refer to as the vaporization front. In this paper we shall study the initial transition from a supercritical fluid which has just entered the concave region of the equation of state, to a vapor-like state which lies in a convex region. There are other cases (weaker shocks), which we shall not consider here, when the state across the shock lies in a concave region. Then there may be an extended zone beyond in which additional processes take place, such as a decay via further expansion shocks into a dispersed wave. This topic will be studied in future investigations.
In the previous section, we described how an extension of the usual Prandtl-Meyer expansion calculation can be used to describe the vaporization in the vicinity of the edge of the orifice. Here, we shall describe how to extend this calculation much beyond the edge. The method we shall employ is the front-tracking method of Whitham. It begins with the intuitive notion that the front is described by a direction (the normal to the surface), and the density of rays associated with this surface. Whitham derives a conservation law for the ray density, which could be thought of as being analogous to a current density, so that:
$$\stackrel{}{}\left(\frac{\widehat{n}}{A}\right)=0$$
(4)
where $`\widehat{n}`$ is the unit normal to the surface, and $`A`$ is the area (of the stream-tube) associated with it. One can find an explicit expression for $`A`$ in the case of spherical geometry, which is the one we shall use here. If $`\mathrm{\Psi }(\stackrel{}{r},t)=constant`$ describes the surface, $`\stackrel{}{r}`$ being the independent spatial co-ordinates and $`t`$ the time, then:
$$\widehat{n}=\frac{\stackrel{}{}\mathrm{\Psi }}{|\stackrel{}{}\mathrm{\Psi }|}$$
(5)
So far, the description is quite general, and we would like to determine a connection with the physics of vaporization. In order to do that, it turns out to be convenient to hop on to the liquid emanating from the orifice. Then, the front, which is stationary in the laboratory frame, will appear to be moving inwards, towards the observer sitting on the liquid. It is then easy to show that the velocity of the front is given by:
$$v_n=-\frac{\partial _t\mathrm{\Psi }}{|\vec{\nabla }\mathrm{\Psi }|}$$
(6)
where the subscript $`n`$ denotes the normal to the surface, and $`v`$ the velocity.
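As a toy check of this kinematic relation, one can difference a planar level set numerically. The sign convention here is that a front described by $`\mathrm{\Psi }=xat`$ advances in the $`+x`$ direction with speed $`a`$; the speed value below is arbitrary:

```python
# Toy check of the front kinematics: for a planar level set
# Psi(x, t) = psi(x) - a*t with psi(x) = x, the surface Psi = const moves
# with normal speed a. Derivatives are taken by central differences.
A_SOUND = 3.0                  # arbitrary front speed for the check

def Psi(x, t):
    return x - A_SOUND * t     # psi(x) = x (illustrative choice)

def normal_speed(x, t, h=1e-5):
    dPsi_dt = (Psi(x, t + h) - Psi(x, t - h)) / (2 * h)
    dPsi_dx = (Psi(x + h, t) - Psi(x - h, t)) / (2 * h)
    return -dPsi_dt / abs(dPsi_dx)   # v_n = -Psi_t / |grad Psi|

# normal_speed(...) recovers A_SOUND at any (x, t)
```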
Furthermore, we shall follow Whitham and make an asymptotic expansion of $`\mathrm{\Psi }`$ in powers of $`t`$, and retain only the linear term. This is done since our main interest is in the $`t=0`$ limit, when the location of the front will be taken to coincide with that in the laboratory frame, as an initial condition. Additionally, it can be seen that $`\mathrm{\Psi }`$ can be defined arbitrarily to within an overall normalizing constant, so we shall write down the equation to the surface as:
$$\mathrm{\Psi }(\stackrel{}{r},t)\equiv \psi (\stackrel{}{r})-at=constant$$
(7)
where $`a`$ is the speed of sound. Equation 4 can now be expressed as:
$$\stackrel{}{}\left(\frac{M}{A}\stackrel{}{}\psi (\stackrel{}{r})\right)=0$$
(8)
where $`M`$ is the Mach number. If the physics of vaporization could be included in some fashion through a relation between $`A`$ and $`M`$, then Eqn.8 would become relevant to the vaporization phenomenon. We shall do this through a slight modification of the usual method which is used for tracking the motion of a compressive shock wave through a complicated geometry. It must be noted that the conventional method of making the $`A`$-$`M`$ connection is an approximation, as is our modification of it. The approximation we make is that of spherically symmetric dynamics. Recall that we aim to describe the vaporization phenomenon as an abrupt change in the hydrodynamic variables from the supercritical to the vapor phase, while preserving mass, momentum and energy conservation. This is of course analogous to an anti-detonation, as described earlier. In fact, a version of Whitham’s method has been developed by Bdzil and co-workers to describe successfully the propagation of detonation fronts in complicated geometries, and it is the analogy to this problem that we have in mind in trying to obtain the shape of the vaporization front.
The basic idea is to ask how the hydrodynamic variables behave if the front is positioned at different locations in the spatial region of interest. In order to do this, it is convenient to assume that the geometry is changing smoothly. This is appropriate for the problem at hand, and leads to a set of duct equations. These duct equations incorporate the jump conditions for the vaporization phenomenon, so that they describe approximately how the Mach number changes as the area $`A`$ changes. For completeness, note that for the spherical geometry which we assume in the vicinity of the orifice, $`dA(r)/dr=2A(r)/r`$, r being the radial variable.
The jump conditions for the vaporization process, when the front is stationary in the laboratory frame (to ensure a stable operation) are:
$`\rho _vu_v`$ $`=\rho _Lu_L`$ (9)
$`P_v+\rho _vu_v^2`$ $`=P_L+\rho _Lu_L^2`$ (10)
$`{\displaystyle \frac{1}{2}}u_v^2+h_v`$ $`={\displaystyle \frac{1}{2}}u_L^2+h_L`$ (11)
$`h_j`$ $`=P_j/\rho _j+e_j+g_j,j=v,L`$ (12)
where the subscript $`v`$ denotes the vapor phase and $`L`$ the liquid phase, $`\rho `$ is the density, $`u`$ is the velocity, $`e_j`$ is the internal energy and $`g_j`$ is the heat of formation in each of the phases.
We can assume a stiffened-gas equation of state for this portion of the calculations, viz. $`P_L\approx \mathrm{\Gamma }_L^{-1}a^2\rho _L`$, where $`\mathrm{\Gamma }_L\approx 30`$ was adjusted to yield the correct pressure in the appropriate density range. This equation of state is a representation of the relation between pressure and density in the supercritical state in $`P\rho `$ space just before the fluid enters the concave region where an expansion shock takes it to a vapor state. We make the approximation that the front is a strong expansion shock (the vapor density turns out to be a few per cent of the density in the supercritical state), and that the internal energy in the vapor phase is much lower than that in the liquid phase. Now, the jump conditions across the front can be written in a more convenient form:
$`\rho _v`$ $`\approx {\displaystyle \frac{\rho _LM^2}{\left(\mathrm{\Gamma }_L^{-1}+M^2\right)}}`$ (13)
$`u_v`$ $`\approx \left({\displaystyle \frac{a}{M}}\right)\left(\mathrm{\Gamma }_L^{-1}+M^2\right)`$ (14)
$`P_v`$ $`\approx \alpha \rho _v(M)-{\displaystyle \frac{1}{2}}\rho _LaMu_v(M)`$ (15)
$`\alpha `$ $`\approx \left(C_{vL}T_L-\mathrm{\Delta }+P_L/\rho _L-{\displaystyle \frac{1}{2}}u_v(M)^2\right)`$ (16)
where $`M`$ is the Mach number in the liquid phase (as is the speed of sound $`a`$), and $`\mathrm{\Delta }\equiv g_v-g_L`$ is the heat of vaporization which must be supplied to the system for vaporization to occur.
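As a quick sanity check, the approximate vapor state of Eqs. (13)-(14) (with the signs and exponents as reconstructed here) can be verified to satisfy the mass and momentum jumps exactly when the vapor pressure is neglected, as appropriate for a strong shock. The sound speed and density below are assumed illustrative values:

```python
# Sanity check of the strong-shock approximations (13)-(14): with the
# stiffened-gas pressure P_L = a^2 rho_L / Gamma_L and P_v neglected, the
# approximate vapor state satisfies the mass and momentum jump conditions.
GAMMA_L = 30.0
A_L = 2.0e4        # sound speed in the supercritical phase, cm/s (assumed)
RHO_L = 0.47       # g/cm^3 (assumed, near-critical CO2)

def vapor_state(M):
    rho_v = RHO_L * M**2 / (1.0 / GAMMA_L + M**2)
    u_v = (A_L / M) * (1.0 / GAMMA_L + M**2)
    return rho_v, u_v

M = 1.0
rho_v, u_v = vapor_state(M)
u_L = A_L * M
P_L = A_L**2 * RHO_L / GAMMA_L

mass_lhs, mass_rhs = rho_v * u_v, RHO_L * u_L              # rho_v u_v = rho_L u_L
mom_lhs, mom_rhs = rho_v * u_v**2, P_L + RHO_L * u_L**2    # P_v ~ 0 assumed
```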
Upon inserting appropriate numerical values for the various variables, it turns out that:
$$\alpha \approx C_{vL}T_L$$
(17)
This is due to the high value of the specific heat near the critical point, which is the regime of interest in this paper. In this sense, the $`AM`$ relation we shall derive shortly will be fairly insensitive to most of the physical parameters in the problem.
As mentioned earlier, a successful application of Whitham’s technique requires a moving front, whereas for reasons of ensuring a stable operation, we took the front to be stationary in Eqn.(16). If we go to the frame of reference in which the fluid emanating from the orifice is stationary, then the front will possess a velocity $`u_L`$. In this frame of reference, the phase front which separates the supercritical fluid from the vapor state will appear to be moving inwards, towards the observer. In fact, it will also appear to the observer that the orifice itself is shrinking. Note that in going to the frame of reference that is moving with the supercritical fluid, $`u_v\to u_v-u_L\approx u_v`$, so that the jump conditions we used earlier remain approximately the same. As indicated earlier, our goal is to concentrate on the short-time limit, so that the consequences of being in a reference frame such that the orifice is shrinking to an infinitesimal value can be avoided.
Using Whitham’s ideas for the propagation of a front along a duct of slowly varying cross-section, we can get a differential equation relating the change in $`M`$ to the change in $`A`$. To do this, we can use his characteristic equation which describes the propagation of a jump discontinuity along a duct. It is easy to see that his derivation holds for our problem (an anti-detonation), which differs from the one discussed in his book for the propagation of a regular shock wave. It is important to note that the derivation is valid in the limit of short times, when changes in the geometry as the front propagates are small. In this linearized regime:
$$\frac{dP_v}{dr}+\rho _La\frac{du_v}{dr}+\left(\frac{\rho _La^2u_L}{u_L+a}\right)\frac{dlnA}{dr}=0$$
(18)
Inserting the appropriate expressions from Eqns.(13)-(16), we obtain:
$$\frac{dM}{dr}\left[\frac{2\alpha M}{a^2\mathrm{\Gamma }_L\left(\mathrm{\Gamma }_L^{-1}+M^2\right)^2}-M+1-\frac{1}{\mathrm{\Gamma }_L^{-1}-M^2}\right]\left(\frac{M+1}{M}\right)=-\frac{1}{A(r)}\frac{dA(r)}{dr}$$
(19)
It is implicit in this equation that the front does not deviate excessively from a spherical shape. This follows from the fact that we are using duct equations, which assume small deviations from the basic symmetry of the problem, in this case spherical symmetry. For the problem at hand, ours is the first calculation to explore the shape and location of the vaporization front, and as such it will provide a benchmark for future, more sophisticated calculations to compare against. More importantly, it is hoped that these calculations will spur experimental work in this area, perhaps using schlieren techniques, which will undoubtedly provide useful information regarding the phenomenon.
Using $`\mathrm{\Gamma }_L^{-1}\ll 1`$, it is possible to deduce, using a Taylor expansion, that for $`r\ge r_o`$, $`r_o`$ being the orifice radius:
$`M(r)`$ $`\approx b^{-1}\left({\displaystyle \frac{r_o^2}{r^2}}-1\right)+1`$ (20)
$`{\displaystyle \frac{A(r_o)M(r)}{A(r)}}`$ $`\approx b^{-1}+(1-b^{-1}){\displaystyle \frac{r_o^2}{r^2}}\equiv f_\mathrm{\Delta }(r)`$ (21)
$`b`$ $`={\displaystyle \frac{4\alpha (\mathrm{\Delta })}{\mathrm{\Gamma }_La^2}}`$ (22)
It turns out that for the parameters relevant to the problem at hand, $`b^{-1}\ll 1`$, because the value of $`b`$ is dominated by the internal energy term. This in turn implies an insensitivity to the other parameters defining the problem, as discussed earlier.
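The asymptotic relations (20)-(22) are easy to tabulate. The sketch below, with an illustrative value of $`b^{-1}`$ rather than one computed from the physical parameters, checks the sonic boundary value at the orifice and the far-field limit of $`f_\mathrm{\Delta }`$:

```python
B_INV = 0.1        # illustrative b^{-1} << 1; b = 4*alpha/(Gamma_L*a^2)
R_ORIFICE = 1.0    # orifice radius in arbitrary units

def mach(r, r_o=R_ORIFICE):
    """Asymptotic Mach number for r >= r_o, Eq. (20) (signs reconstructed)."""
    return B_INV * (r_o**2 / r**2 - 1.0) + 1.0

def f_delta(r, r_o=R_ORIFICE):
    """The A-M relation expressed radially, Eq. (21)."""
    return B_INV + (1.0 - B_INV) * r_o**2 / r**2

# At the orifice the flow is sonic and f_delta = 1; far away f_delta -> b^{-1}.
```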
Note that it might appear that the physics of the expansion shock is now represented by the single parameter $`b`$, and in particular by its (negligible) dependence on $`\mathrm{\Delta }`$, the heat of vaporization, through the parameter $`\alpha `$. Actually, this is too simplistic a view. The expansion shock, and the geometry, are also directly represented by the imposition of the condition that initially the front is hinged to the corner of the orifice at an angle close to $`90^{\circ}`$ (see the previous section).
The reason we have restricted attention to $`r\ge r_o`$ is as follows. One may guess that since the front is anchored to the edge of the orifice, and almost parallel to the axis (i.e. has an approximately spherical shape in that vicinity), the rest of the front will be approximately spherical as well. It will be shown later that this assumption is well-founded for a wide parameter range.
The problem is now reduced to solving:
$$\stackrel{}{}\left(f_\mathrm{\Delta }(r)\stackrel{}{}\psi (\stackrel{}{r})\right)=0$$
(23)
In this sense Eqn.22, which is really an $`A`$-$`M`$ relation expressed in terms of the radial variable, represents a simplification of the problem, in that we now merely have to solve a linear partial differential equation, as opposed to a fully non-linear one. We shall next attempt a solution of this equation by means of a separation of variables in spherical co-ordinates, viz. $`r,\theta `$, assuming that the orifice is azimuthally symmetric, with the origin at the center of the orifice and the z-axis coinciding with the axis of symmetry.
## V Fundamental solutions of the front-tracking equation.
We will now obtain the linearly independent solutions to the following partial differential equation, derived in the previous section. We shall set the origin at the center of the orifice, and use spherical co-ordinates, with the z-axis coinciding with the symmetry axis.
$$\stackrel{}{}\left(f_\mathrm{\Delta }(r)\stackrel{}{}\psi (\stackrel{}{r})\right)=0$$
(24)
A separation of variables ($`r,\theta `$) turns out to be successful. In other words, setting $`\psi (\stackrel{}{r})=P_\nu (cos(\theta ))R_\nu (r)`$, where $`P_\nu `$ is a Legendre function ($`\nu `$ does not have to be an integer), leads to the following equation for $`R_\nu (r)`$:
$$R_\nu ^{\prime \prime }(r)+\left[\frac{2}{r}+\frac{f_\mathrm{\Delta }^{}(r)}{f_\mathrm{\Delta }(r)}\right]R_\nu ^{}(r)+\frac{\nu (\nu +1)}{r^2}R_\nu (r)=0$$
(25)
where a prime indicates a derivative with respect to the appropriate independent variable. Note that we do not require $`\nu `$ to be an integer, as we shall use $`\nu `$ as a parameter chosen to fit a certain boundary condition. This is similar to solving electrostatics problems near sharp corners. In this case, $`P_\nu (cos(\theta ))`$ is no longer a polynomial, but an infinite series.
The radial equation can be solved using Mathematica. The answer is in terms of the Hypergeometric function, the two linearly independent solutions being:
$`R_\nu ^{(1)}(r)`$ $`=r^{(3-\sqrt{9-ϵ(\nu )})/2}{}_{2}F_{1}(𝒜_1,ℬ_1;𝒞_1;{\displaystyle \frac{b\beta r^2}{r_o^2}})`$ (26)
$`𝒜_1`$ $`={\displaystyle \frac{1}{2}}-{\displaystyle \frac{1}{4}}\sqrt{1-ϵ(\nu )}-{\displaystyle \frac{1}{4}}\sqrt{9-ϵ(\nu )}`$ (27)
$`ℬ_1`$ $`={\displaystyle \frac{1}{2}}+{\displaystyle \frac{1}{4}}\sqrt{1-ϵ(\nu )}-{\displaystyle \frac{1}{4}}\sqrt{9-ϵ(\nu )}`$ (28)
$`𝒞_1`$ $`=1-{\displaystyle \frac{1}{2}}\sqrt{9-ϵ(\nu )}`$ (29)
$`R_\nu ^{(2)}(r)`$ $`=r^{(3+\sqrt{9-ϵ(\nu )})/2}{}_{2}F_{1}(𝒜_2,ℬ_2;𝒞_2;{\displaystyle \frac{b\beta r^2}{r_o^2}})`$ (30)
$`𝒜_2`$ $`={\displaystyle \frac{1}{2}}-{\displaystyle \frac{1}{4}}\sqrt{1-ϵ(\nu )}+{\displaystyle \frac{1}{4}}\sqrt{9-ϵ(\nu )}`$ (31)
$`ℬ_2`$ $`={\displaystyle \frac{1}{2}}+{\displaystyle \frac{1}{4}}\sqrt{1-ϵ(\nu )}+{\displaystyle \frac{1}{4}}\sqrt{9-ϵ(\nu )}`$ (32)
$`𝒞_2`$ $`=1+{\displaystyle \frac{1}{2}}\sqrt{9-ϵ(\nu )}`$ (33)
$`ϵ(\nu )`$ $`=4\nu (\nu +1)`$ (34)
$`\beta `$ $`=\left(1-b^{-1}\right)`$ (35)
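Since these solutions are built from Gauss hypergeometric functions, a minimal series evaluator suffices to tabulate them numerically. The power series below converges only for $`|x|<1`$; outside that disk an analytic continuation would be needed:

```python
def hyp2f1(a, b, c, x, tol=1e-14, max_terms=10000):
    """Gauss hypergeometric series 2F1(a, b; c; x), convergent for |x| < 1."""
    term, total = 1.0, 1.0
    for n in range(max_terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1.0)) * x
        total += term
        if abs(term) < tol * abs(total):
            break
    return total

# Spot check against the known identity 2F1(1, 1; 2; x) = -ln(1-x)/x
```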
One requires regularity of the solution at the origin on physical grounds, in order to obtain as stable a solution as possible. The first solution diverges at the origin when $`ϵ(\nu )<0`$. The sign depends on the value of $`\nu `$. Now the value of $`\nu `$ is determined by a boundary condition (discussed below). If we now demand that a strong condition of regularity for all values of $`\nu `$ be satisfied, then one must reject the first solution. Furthermore, when we used the first solution $`R_\nu ^{(1)}`$ by itself, we could not find a value of $`\nu `$ which would satisfy the boundary condition discussed below, for a vaporization angle close to $`90^{}`$.
The second solution $`R_\nu ^{(2)}`$ is regular at the origin. It would be physically acceptable, as long we can satisfy the condition that at the initial moment, the normal to the front is hinged at $`\theta \varphi _f`$ to the axis of symmetry. We do this by demanding that for $`r=r_o,\theta =\pi /2`$:
$$\frac{r_oR_\nu ^{\prime }(r_o)P_\nu (0)}{R_\nu (r_o)\left[dP_\nu (cos(\theta ))/d\theta \right]_{\theta =\pi /2}}=tan(\varphi _f)$$
(36)
It is through this equation (36) that the presence of the corner at the junction of the orifice and the tunnel is acknowledged. We will show in the next section that the shape of the front depends on the value of $`\varphi _f`$.
The condition given by Eqn.36 is to be thought of as an eigenvalue problem. The question of normalization of the solution remains. But this is trivial, since we can choose the normalization to be arbitrary, as a change in the normalization constant leaves the differential equation for the front and Eqn.36 unchanged. This arbitrary choice is equivalent to using an arbitrary choice of the temporal origin.
## VI Calculating the shape and location of the vaporization front
The maximum value of the vaporization angle $`\varphi _f`$ turns out to be close to $`1.54`$ radians, as discussed earlier, for the case in which the fluid emerges at Mach one from an orifice in a high pressure chamber at a pressure of $`79`$ bars (just above the critical pressure).
It turns out that to satisfy Eqn.(36), $`\nu \sim 𝒪(10^{-2})`$ when $`\varphi _f\approx 1.54`$. To locate the front, we compute the contour corresponding to the value of $`\psi (\stackrel{}{r})`$ at $`\stackrel{}{r}=(r_o,\pi /2)`$, knowing that at that point in space, Eqn.(36) is satisfied. Remember that the normalization of the solution is arbitrary, and as pointed out earlier, this corresponds to choosing an arbitrary temporal origin.
The next figure (Fig.5) shows the locus of roots of the equation $`\psi (\stackrel{}{r})=\psi (r_o,\pi /2)`$, for $`\varphi _f=1.54`$.
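The contour extraction just described amounts to one-dimensional root finding along each ray. The sketch below uses a toy separable $`\psi `$, monotonic in $`r`$, purely to illustrate the bisection; the actual $`\psi `$ is the hypergeometric solution of the previous section:

```python
import math

R_O = 1.0   # orifice radius in arbitrary units

def psi(r, theta):
    """Toy stand-in for the actual solution P_nu(cos(theta)) * R_nu(r):
    monotonically decreasing in r, so each ray crosses the contour once."""
    return (1.0 + 0.3 * math.cos(theta)) / r

def front_radius(theta, target, r_lo=0.1, r_hi=50.0, iters=200):
    """Bisect psi(r, theta) = target along the ray at polar angle theta."""
    for _ in range(iters):
        mid = 0.5 * (r_lo + r_hi)
        if psi(mid, theta) > target:
            r_lo = mid          # psi decreases with r: root lies farther out
        else:
            r_hi = mid
    return 0.5 * (r_lo + r_hi)

target = psi(R_O, math.pi / 2)   # value of psi at the hinge point
# front_radius(theta, target) traces the contour ray by ray
```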
We will now consider what happens when the exit velocity has a different value from the one just considered. The primary reason for doing this is that it may eventually impact the morphology of the precipitate. We shall do this by assuming that, during the computation of the Prandtl-Meyer fan in the previous section, the exit velocity possesses a radial component. This could be achieved in principle by flaring the orifice smoothly a short distance before the sharp corner is reached. Then the fluid will undergo an expansion shock at smaller values of the angle $`\varphi _f`$. We shall now simply study the shape of the vaporization front parametrically as $`\varphi _f`$ is decreased. This serves the additional purpose of allowing us to study the sensitivity of the flash front to $`\varphi _f`$, and thus has a bearing on the stability of the front to such perturbations.
The next two figures (Figs. 6 and 7) show what happens to the shape and location of the front as we decrease $`\varphi _f`$. At $`\varphi _f1.0`$, the front is clearly seen to start flattening near the axis.
The flattening becomes more pronounced as we decrease $`\varphi _f`$ further. Note that as this occurs, the front begins to deviate increasingly from its hemispherical shape. We thus violate the assumption of spherical symmetry, $`f_\mathrm{\Delta }(\stackrel{}{r})\approx f_\mathrm{\Delta }(r)`$, more and more as $`\varphi _f`$ decreases. For smaller values of $`\varphi _f`$, the calculated front shows dramatic departures from sphericity. Future investigations may wish to focus on these cases to probe the implications of the change of the shape of the front for the underlying fluid flow and the subsequent precipitation process. At the moment, our calculation is the most sophisticated one available for the geometry considered, and the extreme case we now consider (Fig. 7) may be compared against future numerical calculations of the front for this geometry.
## VII Stability of the expansion shock
We adapted Whitham’s front-tracking method to our steady state case by changing the reference frame appropriately. In this frame, the expansion shock is seen to converge inwards, towards the orifice. We then have a converging front sweeping across a fluid, converting it from a condensed state to a vapor state as it moves. In this sense our problem is analogous to a converging detonation wave, the main difference being that the heat of vaporization in our case is negligible, whereas heat release is the main feature of detonations. It is thus natural to ask whether the expansion shock we have calculated is stable to perturbations.
Let us begin by recalling the arguments given by Whitham regarding the stability of converging shock waves in spherical geometry. Whitham shows that his front-tracking technique provides a solution which diverges at the origin. Thus, any perturbation (described in terms of spherical harmonics) gets amplified as the wave converges towards the center. In our case, the analog is that $`P_\nu (cos(\theta ))`$, the angular component of the solution diverges at $`\theta =\pi /2`$. It is therefore tempting to claim that the expansion shock is unstable. However, our interest is not in the convergence of the shock towards the orifice, but solely in the $`t=0`$ limit, so that we can obtain the shape and location of the (stationary) shock in the laboratory frame. Therefore, from the discussion in the previous section, we see that the solution remains finite for the physical range of interest in $`(r,\theta )`$ space, even if we perturb the boundary condition by varying the angle $`\varphi _f`$ at which the fluid undergoes an expansion shock at the edge of the orifice. In this sense the expansion shock we have computed is stable.
## VIII Conclusions
Progress has been made towards understanding the RESS process by first making a conceptual connection with the phenomenon of expansion shocks predicted many years ago by Hans Bethe. Whitham’s front-tracking method was then adapted to calculate the location and shape of the expansion shock in a supercritical fluid emerging from a pinhole in a high pressure chamber at Mach one (so that the subsequent flow is supersonic). For this case the front is fairly hemispherical in shape. We presented other cases, when the fluid emerges from the orifice directly at supersonic speeds, where the deviation from sphericity becomes increasingly dramatic. While these extreme cases violate the underlying assumption of spherical symmetry made initially, we speculate that they may be correct in a qualitative sense. This point must be verified by future analyses which eliminate the assumptions made in this paper. It is also hoped that our paper will spawn experimental efforts in this arena, from which verification of our theoretical results may be sought.
## IX Acknowledgments
I would like to acknowledge useful discussions with John Bdzil, Tariq Aslam, and especially Robert Owczarek regarding the front-tracking formalism. I would like to thank Larry Hill and Ralph Menikoff for their comments. This research was supported by the Department of Energy, under contract W-7405-ENG-36.
S.M. Chitanvis, Fig. 1
S.M. Chitanvis, Fig. 2
S.M. Chitanvis, Fig. 3
S.M. Chitanvis, Fig. 4
S.M. Chitanvis, Fig. 5
S.M. Chitanvis, Fig. 6
S.M. Chitanvis, Fig. 7
# A BeppoSAX study of the pulsating transient X0115+63: the first X-ray spectrum with four cyclotron harmonic features
## 1 Introduction
Cyclotron features provide a powerful tool for directly measuring the high ($`B\sim 10^{12}`$ G) magnetic field strengths of accreting neutron stars in X-ray binaries. Because the electron cyclotron energy is $`E_{\mathrm{cyc}}=11.6B_{12}`$ keV, where $`B_{12}`$ is the magnetic field strength in units of $`10^{12}`$ G, these features are expected to be observed at hard X–ray energies. Absorption-like features interpreted as cyclotron resonant scattering were first discovered in the spectrum of the low-mass X-ray binary pulsar Her X–1 (Trümper et al. (1978)) and, subsequently, in the hard X–ray transient pulsar X0115+63 (Wheaton et al. (1979)). Since then, cyclotron features have been detected in other X–ray binary pulsars with Ginga (Mihara (1995)), HEXE/TTM on Mir (Kendziorra et al. (1994)), OSSE onboard CGRO (Grove et al. (1995)) and, more recently, with RXTE (Kreykenbohm et al. (1998)) and BeppoSAX (Dal Fiume et al. (1999); Santangelo et al. (1999)).
Relatively little is known about higher cyclotron harmonics. Besides the pioneering detection of two lines in the spectrum of X0115+63 (White, Swank, & Holt (1983)), the presence of two cyclotron lines has been reported for Vela X–1 (Kreykenbohm et al. (1998); Orlandini et al. (1998)), 4U1907+09 (Cusumano et al. (1998); Santangelo et al. (1999)) and A0535+26 (Kendziorra et al. (1994)). Some of these detections are still to be confirmed.
X0115+63 is one of the best studied X–ray transients (Rappaport et al. (1978)). The source shows pulsations at $`3.6`$ s while orbiting an O9e companion (V635 Cassiopeiae, Unger et al. (1998)) with a period of 24.3 days. As customary for this class of X-ray binaries, the X-ray continuum has been modelled with a power-law with an exponential cut-off at high energies and photoelectric absorption at low energies (White, Swank, & Holt (1983); Nagase et al. (1991)). Wheaton et al. (1979), using HEAO1-A1, first reported the discovery of an absorption line at $`\sim 20`$ keV. Based on HEAO1-A2 data, White, Swank & Holt (1983) detected cyclotron lines at $`\sim 11.5`$ keV and $`\sim 23`$ keV, which appeared to be in absorption at the pulse peak and in emission during the interpulse. By interpreting the two lines in terms of the first and second harmonics of cyclotron resonant scattering, they derived $`B\sim 1\times 10^{12}`$ G. During the February 1990 outburst, observations with the Large Area Counter onboard Ginga revealed absorption features at $`\sim 12`$ keV and $`\sim 23`$ keV for all pulse phases; an investigation of the X–ray spectrum up to 60 keV did not yield any evidence for higher harmonics (Nagase et al. (1991); Tamura et al. (1992)).
On February 22, 1999 the BATSE instrument onboard the CGRO satellite revealed the onset of another outburst of X0115+63 (Wilson, Harmon, & Finger (1999)). BeppoSAX observed the source with its Narrow Field Instruments (NFI) on four occasions: 1999 March 6, 19, 22, and 26. The data presented in this Letter are from the March 19 observation, when, shortly after the outburst maximum, the source was at a flux level of $`\sim 310`$ mCrab. These data led to the discovery of a four harmonic cyclotron line spectrum in X0115+63, the first ever from a cosmic X-ray source. Our results predate the announcement of the discovery of the third cyclotron line in the spectrum of X0115+63 based on RossiXTE/HEXTE measurements (Heindl et al. 1999); therefore they also provide an important independent confirmation of the latter result.
## 2 Observations and Spectral Analysis
Besides the Low-Energy Concentrator Spectrometer (LECS, 0.1–10 keV, Parmar et al. (1997)) and the Medium-Energy Concentrators Spectrometer (MECS, 2–10 keV, Boella et al. 1997b ), the NFIs onboard the BeppoSAX satellite (Boella et al. 1997a ) comprise two collimated high energy detectors, the High Pressure Gas Scintillation Proportional Counter (HPGSPC, 4–60 keV, FWHM energy resolution of 8% at 10 keV and 5.5% at 20 keV, Manzo et al. (1997)), and the Phoswich Detection System (PDS, 15-200 keV, FWHM energy resolution of 24% at 20 keV, and 14% at 60 keV, Frontera et al. (1997)).
X0115+63 was observed with the NFIs aboard BeppoSAX from March 19, UT 17:05:25 to March 20, 08:42:04. All instruments were operated in their standard configuration. The effective exposure was 3.2 ks for the LECS, 31.4 ks for the MECS, 30 ks for the HPGSPC and 16 ks for the PDS, which makes use of the rocking collimator technique to monitor the background. The 10–50 keV source flux was $`1.3\times 10^{-8}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, corresponding to a luminosity of $`L_{10-50}\simeq 2.5\times 10^{37}\mathrm{d}_4^2`$ erg s<sup>-1</sup>, where d<sub>4</sub> is the distance in units of 4 kpc (Tamura et al. (1992)). The source did not show any significant variability during the observation. In Fig. 1 the pulse profiles folded over the best period of $`P=3.6144(1)`$ s in six different energy bands are reported. The pulse profile shows the typical double peaked structure, already apparent in the HEAO1 and Ginga data (White, Swank, & Holt (1983); Nagase et al. (1991)): a pronounced main peak (phase 0–0.35), followed by a broader and much softer second peak (phase 0.5–0.85). The shape of both peaks is clearly energy-dependent.
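The folding itself is a standard epoch-folding operation. A minimal sketch, with a synthetic sinusoidal light curve standing in for the real data, is:

```python
import math

PERIOD = 3.6144     # s, best period from the observation

def fold(times, rates, period=PERIOD, nbins=20):
    """Average a light curve into phase bins of the folding period."""
    profile = [0.0] * nbins
    counts = [0] * nbins
    for t, r in zip(times, rates):
        phase = (t / period) % 1.0
        i = min(int(phase * nbins), nbins - 1)   # guard against float edge
        profile[i] += r
        counts[i] += 1
    return [p / c if c else 0.0 for p, c in zip(profile, counts)]

# Synthetic check: a sinusoidal pulse folded at the right period keeps its
# modulation, while folding at a wrong period would wash it out.
times = [0.01 * k for k in range(200000)]
rates = [10.0 + 4.0 * math.sin(2 * math.pi * t / PERIOD) for t in times]
profile = fold(times, rates)
```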
This Letter concentrates on the high energy X-ray spectrum ($`9-100`$ keV), based on the HPGSPC and PDS data. In consideration of the strong phase dependence of the cyclotron lines of X0115+63 (Mihara (1995)), we accumulated the PHA spectra over 10 pulse phase intervals. Initially, these spectra were divided by the PHA spectrum of the Crab Nebula and multiplied by the spectral shape of the Crab Nebula, a power law with photon index, $`\alpha `$, of 2.1, such that marked spectral features could be spotted in an approximately model- and calibration-independent fashion (Dal Fiume et al. (1999)). Dips at $`\sim `$12 keV, $`\sim `$24 keV, and $`\sim `$36 keV and, possibly, $`\sim `$48 keV were apparent in the spectra from a number of phase intervals. In particular, the spectrum of the descending edge of the main peak (phase 0.2–0.3), shown in Fig. 2, displayed by far the deepest features at $`\sim `$12 keV, $`\sim `$24 keV, and $`\sim `$36 keV and clear evidence of a dip at $`\sim `$48 keV. This motivated us to carry out a detailed spectral analysis, with models including multiple harmonic features.
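The Crab-ratio normalization described above can be sketched in a few lines. This is an illustrative reconstruction, not the pipeline actually used; in particular the Crab normalization of 9.7 photons cm<sup>-2</sup> s<sup>-1</sup> keV<sup>-1</sup> at 1 keV is an assumed value, not taken from the text.

```python
import numpy as np

def crab_ratio_spectrum(src_counts, crab_counts, E, norm=9.7):
    """Divide the source PHA spectrum by the Crab one, channel by channel,
    then multiply by an assumed Crab photon spectrum (power law with
    photon index 2.1); instrument response largely cancels in the ratio."""
    return (src_counts / crab_counts) * norm * E ** (-2.1)
```

By construction, a source with the same count spectrum as the Crab is mapped back onto the assumed Crab power law, so absorption-like features stand out as dips independent of the detector response.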
We used the following continuum models to fit the 9–100 keV spectra: (a) the Negative and Positive power laws EXponential (NPEX) model adopted by Mihara (1995) as the standard model for X-ray pulsars observed with Ginga, $`f(E)=(AE^{-\alpha _1}+BE^{+\alpha _2})\times \mathrm{exp}(-E/kT)`$; (b) a power law with a high energy cut-off, $`f(E)=AE^{-\alpha }\times \mathrm{exp}(-E/kT)`$. Here $`f(E)`$ is the photon flux, $`kT`$ is the e-folding energy and $`\alpha `$ is the photon index. Independent of the continuum model used, at least three absorption-like features were required in the fit. These features were introduced in the model(s) as Gaussian filters in absorption, i.e. $`G_i(E)=1-D_i\times \mathrm{exp}(-(E-E_i^{\mathrm{cyc}})^2/(2\sigma _i^2))`$, where $`E_i^{\mathrm{cyc}}`$, $`\sigma _i`$, and $`D_i`$ are the centroid energy, width and depth of each feature. Introducing the third absorption feature at $`\sim 38`$ keV (in addition to the first two harmonics at $`\sim 12`$ and $`\sim 24`$ keV) led to a pronounced improvement in the fit, with the reduced $`\chi _{\mathrm{dof}}^2`$ decreasing from 2.5 (268 dof) to 1.7 (265 dof) in the case of the NPEX model. An F test shows that the probability of chance improvement is $`\sim 10^{-21}`$.
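A minimal sketch of the spectral model just defined (NPEX continuum multiplied by Gaussian absorption filters), assuming numpy; the parameter values in the comments are illustrative, not the fitted values reported in Table 1.

```python
import numpy as np

def npex(E, A, alpha1, B, alpha2, kT):
    """NPEX continuum: negative plus positive power law with a common
    exponential cutoff; E in keV, f(E) is the photon flux."""
    return (A * E ** (-alpha1) + B * E ** alpha2) * np.exp(-E / kT)

def gaussian_absorption(E, Ecyc, sigma, depth):
    """Multiplicative Gaussian filter in absorption, G_i(E)."""
    return 1.0 - depth * np.exp(-((E - Ecyc) ** 2) / (2.0 * sigma ** 2))

def model(E, cont_pars, lines):
    """Continuum times a product of absorption features;
    `lines` is a list of (Ecyc, sigma, depth) tuples, one per harmonic."""
    f = npex(E, *cont_pars)
    for Ecyc, sigma, depth in lines:
        f *= gaussian_absorption(E, Ecyc, sigma, depth)
    return f
```

Adding a fourth `(Ecyc, sigma, depth)` tuple to `lines` is all that is needed to extend a three-line fit to the four-line model discussed below.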
The HPGSPC and PDS count spectra of the descending edge of the main peak (pulse phase 0.2–0.3), together with the best fit model described above, are shown in Fig. 3 (upper panel): an additional feature centered around $`\sim `$48 keV is clearly apparent in the residuals of both the HPGSPC and PDS spectra (Fig. 3, bottom panel). This prompted us to introduce a fourth absorption feature, $`G_4`$, in the model; the minimum $`\chi _{\mathrm{dof}}^2`$ decreased to 1.24 (262 dof, NPEX continuum), corresponding to an F-test probability of chance improvement of $`10^{-15}`$. Fig. 4 shows the unfolded spectrum of X0115+63. Best fit parameters and equivalent widths are summarized in Table 1. In the same Table, the best-fit parameters obtained by using the power law with an exponential cut-off are also given.
We also performed a fit with all the line centroids constrained to an integer harmonic spacing. The resulting minimum $`\chi _{\mathrm{dof}}^2`$ is 1.58 (259 dof) for the NPEX model and 1.67 (259 dof) for the power law plus cutoff model. An F-test gives a probability of chance improvement for the models with non-constrained line centroids of $`<10^{-10}`$ in both cases.
A preliminary analysis of the spectra from other phase intervals also shows significant variations of the line features with the pulse phase, confirming previous findings from Ginga and RossiXTE (Heindl et al. (1999); Nagase et al. (1991)). Variations of up to 10% in centroid energy are observed. Three lines are still observed at the descending edge of the soft broad peak, at phase 0.5–0.6, with centroid energies of $`11.22\pm 0.3`$ keV, $`21.69\pm 0.2`$ keV and $`32.28\pm 0.5`$ keV. A comparison with the centroid energies of the three harmonics derived by Heindl et al. (1999) from RossiXTE data is far from straightforward, given the close but still different pulse phase interval (0.7–0.76) over which their spectrum was accumulated, and also the different phase of the outburst.
## 3 Conclusions
The spectroscopic capabilities of the high energy instruments (HPGSPC and PDS) onboard BeppoSAX allowed us to study multiple absorption-like features in the spectrum of the transient X-ray pulsar X0115+63. In particular, four features centered at energies of $`\sim 12.7`$, $`\sim 24`$, $`\sim 36`$, and $`\sim 50`$ keV were found in the descending edge of the main peak of the pulse profile. We fitted the line centroids in Table 1 with a simple linear model of the form $`E=aN`$ (with $`N=1,2,3,4`$ and $`a`$ a free parameter). Unacceptable values of $`\chi _{dof}^2`$ were obtained: 79.3 (3 dof) for the power law plus cutoff model and 71.4 (3 dof) for the NPEX model. We conclude that the line centroids reported in Table 1 are not equispaced, for both continuum models. Stated differently, the corresponding harmonic ratios 1:($`1.9\pm 0.05`$):($`2.8\pm 0.05`$):($`3.9\pm 0.1`$) are significantly different from the classical values 1:2:3:4. A closer look at the data reveals that this result can be ascribed entirely to the value of the centroid of the first harmonic. As an example, in the case of the power law plus cutoff model, the first harmonic is at 12.79$`\pm `$0.05 keV, while a fit to the other three harmonics gives a spacing of 12.02$`\pm `$0.02 keV. Similar results are obtained by using the centroid energies from the NPEX continuum model. We note that, since the first harmonic lies close to the energy interval over which the slope of the X-ray spectrum steepens rapidly, the determination of its centroid energy could be affected by a somewhat inadequate modelling of the continuum. For example, the excess of “shoulder” photons below the fundamental, as predicted in many theoretical calculations (Isenberg et al. (1998)), may cause the fitted centroid energy of the fundamental to appear higher, therefore making the harmonic ratios somewhat smaller than expected.
It is well known that in strong magnetic fields the energy spacing of cyclotron harmonics is altered by relativistic effects, i.e. $`E_N=m_ec^2\{[1+2N(B/B_{crit})\mathrm{sin}^2\theta ]^{1/2}-1\}/\mathrm{sin}^2\theta `$, where $`m_e`$ is the electron mass, $`\theta `$ the angle between the photon propagation direction and the B-field, and $`B_{crit}=4.414\times 10^{13}`$ G (see e.g. Araya & Harding (1996); Wang, Wasserman & Lamb (1993)). This formula, further corrected for the gravitational redshift, was fit to the centroid energies of the four harmonics observed by BeppoSAX. While formally unacceptable ($`\chi _{dof}^2`$ of 79.9 (2 dof) for the power law plus cutoff model and 35.7 (2 dof) for the NPEX model), the best fit obtained in this way was somewhat better than the simple linear fit.
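The relativistic spacing formula quoted above is straightforward to evaluate; the field strength and redshift in the sketch below are illustrative inputs, not fitted values from this analysis.

```python
import math

B_CRIT = 4.414e13   # critical field in gauss
MEC2 = 511.0        # electron rest energy in keV

def cyclotron_energy(n, B, sin2_theta, z_grav=0.0):
    """Relativistic energy (keV) of the n-th cyclotron harmonic for field B
    (gauss) and sin^2(theta) > 0, optionally redshifted by z_grav."""
    b = B / B_CRIT
    E = MEC2 * (math.sqrt(1.0 + 2.0 * n * b * sin2_theta) - 1.0) / sin2_theta
    return E / (1.0 + z_grav)
```

Note that the relativistic term makes the spacing sub-harmonic: $`E_2<2E_1`$, $`E_3<3E_1`$, and so on, which is the sense of the deviation from the classical 1:2:3:4 ratios discussed above.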
Despite the uncertainties described above, we conclude that the centroid energies of the four spectral features of X0115+63 are most naturally interpreted in terms of the fundamental, second, third and fourth harmonics of cyclotron resonant features in a strong magnetic field, taking into account relativistic corrections. This is the first time that four harmonics are observed in the X-ray spectrum of any cosmic source. We find that the equivalent widths of the second, third, and fourth harmonics are larger than that of the fundamental, confirming and extending previous results from Ginga (Nagase et al. (1991)). Such a trend was predicted by Alexander & Mészáros (1989, 1991), who found that two-photon scattering and two-photon emission processes have a major effect in determining the depth of the second and higher harmonics relative to the fundamental. Detailed calculations show that, while the equivalent width of the second harmonic is always larger than that of the fundamental, this is not necessarily the case for the third (and fourth) harmonics. In fact, the strength of the third harmonic depends strongly on $`\theta `$ and the optical depth (Isenberg et al. (1998)). Comparison of our measured spectra with the ones calculated by Alexander & Mészáros (1991) shows a qualitative agreement. A systematic study of the BeppoSAX spectra of X0115+63 as a function of pulse phase for different mass accretion rates (i.e. different outburst phases) is currently underway and will be published elsewhere.
The authors wish to thank Milvia Capalbi of the BeppoSAX Scientific Data Center and the BeppoSAX Mission Director R. C. Butler.
# Integrated optics for astronomical interferometry
## 1 Introduction
Since Froehly (1981) proposed guided optics for astronomical interferometry, important progress has been made. In particular, the FLUOR instrument, which combines two interferometric beams with single-mode fiber couplers (Coudé du Foresto 1994), has led to astrophysical results with unprecedented precision. This experiment demonstrated the great benefit of spatial filtering combined with photometric calibration to improve the visibility accuracy. More recently, Kern et al. (1996) suggested combining interferometric beams with integrated optics components, since this technology makes it possible to manufacture single-mode waveguides in a planar substrate. In paper I (Malbet et al. 1999), we presented and discussed thoroughly the advantages and limitations of integrated optics for astronomical interferometry. To validate the latter analysis, we have performed several laboratory experiments with existing components, not optimized for interferometry but allowing us to get first clues on this technology. The first experimental results are reported in this Letter.
## 2 Experimental set-up
We carried out laboratory tests with off-the-shelf integrated optics components designed for micro-sensor applications. The waveguides are made by ion exchange (here potassium or silver) on a standard glass substrate using photolithography techniques (Schanen-Duport et al. 1996). The exchanged area is analogous to the core of an optical fiber and the glass substrate to the fiber cladding. Our $`5\text{ mm}\times 40\text{ mm}`$ component is schematically depicted in the right part of Fig. 1. We use it as a two-way beam combiner with two photometric calibration signals. The component operates in the H atmospheric band (1.43–1.77 $`\mu `$m) and its waveguides are single-mode in that domain. From an optical point of view, the reverse Y-junction acts as one of the two outputs of a classical beam splitter. The second part of the interferometric signal, with a $`\pi `$ phase shift, is radiated at large scale in the substrate. Light is carried to the component by standard non-birefringent silica fibers.
We have set up a laboratory Mach-Zehnder interferometer to test the interferometric capabilities of our components (see the left part of Fig. 1). The available sources are: a 1.54-$`\mu `$m He-Ne laser, a 1.55-$`\mu `$m laser diode and a halogen white-light source. The latter is used with an astronomical H filter.
We scan the interferograms by modulating the optical path difference (OPD) with four points per fringe. The delay line speed is restricted by the integration time ($`\sim `$1 ms for laser sources and $`\sim `$10 ms for the white-light source, to get a sufficient signal-to-noise ratio) and the frame rate (50 ms of read-out time for the full frame). The OPD scan and the data acquisition are not synchronized, but for each image the translating stage provides a position with an accuracy of 0.3 $`\mu `$m. The simultaneous recording of the photometric and interferometric signals allows us to correct the fringe contrast for the photometric fluctuations, as suggested by Connes et al. (1984) and validated by Coudé du Foresto (1994).
A typical white-light interferogram $`I_0`$ is plotted in Fig. 2a together with the simultaneous photometric signals $`P_1`$ and $`P_2`$. To correct the raw interferogram for the photometric fluctuations, we subtract a linear combination of $`P_1`$ and $`P_2`$ from $`I_0`$. The expression of the corrected interferogram is then
$$I_c=\frac{I_0-\alpha P_1-\beta P_2}{2\sqrt{\alpha P_1\beta P_2}}$$
(1)
with $`\alpha `$ and $`\beta `$ coefficients determined by alternately occulting each input beam. The resulting corrected interferogram is displayed in Fig. 2b.
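Eq. (1) translates directly into code. The synthetic fringe pattern in the test is only a self-consistency check, not measured data.

```python
import numpy as np

def correct_interferogram(I0, P1, P2, alpha, beta):
    """Photometrically corrected interferogram, Eq. (1):
    I_c = (I0 - alpha*P1 - beta*P2) / (2*sqrt(alpha*P1*beta*P2))."""
    return (I0 - alpha * P1 - beta * P2) / (2.0 * np.sqrt(alpha * P1 * beta * P2))
```

For an ideal two-beam interferogram $`I_0=\alpha P_1+\beta P_2+2\sqrt{\alpha P_1\beta P_2}V\mathrm{cos}\varphi `$, the correction returns $`V\mathrm{cos}\varphi `$, i.e. the fringes normalized by the photometry.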
## 3 Results and discussion
As far as an astronomer is concerned, the physical quantities of interest when dealing with an interferometric instrument are the instrumental contrast, the optical stability, and the total optical throughput.
### 3.1 Laser-light contrasts
A 93% contrast is obtained with the He-Ne laser. The main source of contrast variations with time comes from temperature gradients and mechanical stresses on the input fibers. When special care is taken to avoid fiber bends and twists, the laser contrast variation is lower than 7$`\%`$ rms over a week. Using high-birefringent fibers, which are far less sensitive to mechanical stresses, will improve the contrast stability.
### 3.2 White-light contrasts
With a halogen white-light source, the contrast obtained is of the order of 7$`\%`$ with a potassium ion beam combiner connected with low-birefringent fibers (Fig. 2b) and 78$`\%`$ with a silver ion beam combiner connected with high-birefringent fibers (Fig. 2c). Two main sources of the interferometric contrast difference between the two components have been identified: chromatic dispersion and polarization mismatch.
The consequence of residual differential dispersion between the two arms is to spread out the fringe envelope and decrease the contrast. Since the delay line translation is not perfectly linear, the Fourier relation between space and time is affected and an accurate estimate of the dispersion is difficult. Only the number of fringes and the shape of the interferogram give an idea of the existing differential dispersion. The theoretical number of fringes is given by the formula $`2\frac{\lambda }{\mathrm{\Delta }\lambda }\simeq 10`$ and the interferogram contains about $`13`$ fringes. Such a spread is not sufficient to explain the contrast drop between the laser- and white-light contrasts. More detailed studies of residual chromatic dispersion are in progress.
In the present case the contrast decay is mainly explained by differential birefringence. Low-birefringent fibers are known to be highly sensitive to mechanical stresses and temperature changes, leading to unpredictable birefringence. Coupling between polarization modes can occur, leading to a contrast loss which can be significant for unpolarized incident light (case of the white-light source). This is confirmed by the preliminary results obtained with high-birefringent fibers and the incident light polarized along the neutral axes: the contrast reaches 78$`\%`$ (Fig. 2c). The apparent asymmetry of the interferogram could be due to residual differential polarization and/or dispersion. Full characterizations are in progress and will be reported in Paper III (Haguenauer et al. 1999).
### 3.3 Total throughput
Fig. 3 summarizes the photon losses in the two components. We express the losses in terms of remaining photons when 100 incoherent photons are injected at each waveguide input. For the component made from potassium ion exchange, we obtain 20 and 14 photons on each photometric channel and less than 20 photons in the interferometric channel leading to a total of 54 photons for 200 photons injected, hence a total throughput of 27%. For silver ion exchange, respectively 30, 31 and 25 photons have been measured leading to a throughput of 43%. The main difference between the two results comes from the coupling efficiency between the fiber and the waveguides and the propagation losses.
Table 1 summarizes the estimated losses from different origins. The propagation losses and the coupling losses have been measured with a straight waveguide manufactured in the same conditions. The Fresnel losses have been theoretically estimated at 4$`\%`$. Any function causes additional losses which cannot be evaluated separately but have been estimated at 10$`\%`$. One should notice that the reverse Y-junction acts as only one of the two outputs of an optical beamsplitter (see paper I). Therefore 50$`\%`$ of the light is radiated outside the waveguide. The first two columns of Table 1 show that our measurements are consistent with the theoretical performances computed from the different optical losses reported in the Table.
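The loss budget behind Table 1 is a simple product of transmissions; a minimal sketch, with loss values in the comments purely illustrative:

```python
def throughput(losses):
    """Overall transmission of a chain of optical elements, each described
    by a fractional loss (e.g. 0.04 for a 4% Fresnel loss, 0.5 for the
    light radiated by the reverse Y-junction)."""
    t = 1.0
    for loss in losses:
        t *= 1.0 - loss
    return t
```

For example, the 50% Y-junction loss combined with a 4% Fresnel loss alone already caps the transmission at 48%, which illustrates why recovering the second output of the combiner dominates the possible gains.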
The last column of Table 1 gives an order of magnitude of the expected improvement in the future. The main progress concerns the beam combination function. We should be able to retrieve the second half of the combined photons thanks to new combination schemes like X-couplers, multiaxial beam combiners or multimode interferometric (MMI) multiplexers (see paper I), at the cost of a slight chromaticity of the function. Some components including these new functions are being manufactured and will soon be tested. The ultimate optical throughput would be around 70-80%, twice our current results.
## 4 Conclusion and future prospects
We have obtained first high-contrast white-light interferograms with an off-the-shelf integrated optics component used as a two-aperture beam combiner. The high and stable contrasts as well as the high optical throughput validate our approach, presented in paper I, for combining stellar beams by means of integrated optics.
This preliminary analysis requires further characterizations and improvements. The role of dispersion, birefringence and other phenomena in the fibers and in the components has to be fully understood. For this purpose, two-way beam combiners optimized for astronomy are under characterization (spectroscopic and polarimetric measurements, for instance) in order to carefully control their optical properties. A complete description of the optical and interferometric properties of integrated optics components will be presented in a forthcoming paper (Haguenauer et al. 1999). The optical fibers should maintain polarization, to avoid specific contrast losses, and have optimized lengths, to avoid chromatic dispersion. This experimental precaution is decisive to achieve image reconstruction (Delage 1998).
Our research program is based on the study of new integrated optics technologies for long baseline interferometry in the infrared, and the design of beam combiners for multiple apertures<sup>1</sup><sup>1</sup>1Actually 3-way and 4-way beam combiners have already been manufactured (see paper I). Some specific beam combiners will then eventually be used in a scientific instrumental prototype on astronomical interferometers. Preliminary tests on the GI2T/Regain interferometer (Mourard et al. 1998) will be carried out with the Integrated Optics Near-infrared Interferometric Camera (IONIC) prototype (Berger et al. 1998).
## 5 Acknowledgments
We would like to warmly thank E. Le Coarer for his precious support in instrument control. We thank the referee, Dr. Shaklan, for a careful reading of our paper and for suggestions which helped to improve its content. The work was partially funded by PNHRA / INSU, CNRS / Ultimatech and DGA / DRET (Contract 971091). The integrated optics components have been manufactured and fiber-connected by the GeeO company.
# Universality classes in directed sandpile models
## Abstract
We perform large scale numerical simulations of a directed version of the two-state stochastic sandpile model. Numerical results show that this stochastic model defines a new universality class with respect to the Abelian directed sandpile. The physical origin of the different critical behavior has to be ascribed to the presence of multiple topplings in the stochastic model. These results provide new insights onto the long debated question of universality in abelian and stochastic sandpiles.
The class of sandpile models, comprising the original Bak, Tang and Wiesenfeld (BTW) automaton and its many variations, is considered the prototypical example of a special class of driven non-equilibrium systems exhibiting a behavior dubbed self-organized criticality (SOC). Under an external drive, these systems spontaneously evolve into a stationary state. In the limit of infinitesimal driving, the stationary state shows a singular response function associated with an avalanche-like dynamics, indicative of a critical behavior. Sandpile models have thus attracted a great deal of interest, as plausible candidates to explain the avalanche behavior empirically observed in a large number of natural phenomena.
In recent years, the possibility of understanding the sandpile critical behavior in analogy with other non-equilibrium critical phenomena, such as branching processes, interface depinning models, and absorbing phase transitions, has been pointed out. It is then most important to identify precisely, for sandpiles, the universality classes and upper critical dimensions, which are basic and discriminating features of the critical behavior. Despite significant numerical efforts, however, these issues remain largely unresolved. For instance, it is still an open problem whether or not the original deterministic BTW sandpile and the stochastic Manna two-state model belong to the same universality class. Theoretical approaches support the idea of a single universality class, while numerical simulations provide contradictory results.
In order to have a deeper understanding of the universality classes puzzle, we turn our attention to directed sandpile models. In this case Dhar and Ramaswamy obtained an exact solution for the Abelian directed sandpile (ADS), which can be used as a benchmark to check the numerical simulation analysis. Directed sandpiles thus become an interesting testing ground to study how critical behavior is affected by the introduction of stochastic elements. Despite the fact that results obtained for directed models cannot be exported “tout court” to the isotropic ones, the eventual appearance of different universality classes provides interesting clues on the general problem of universality in sandpiles. This issue has been recently addressed in a particular case by Tadić and Dhar, but a general discussion of universality classes in directed sandpile automata is still lacking.
In this letter we present large scale numerical simulations of Abelian and stochastic directed sandpile (SDS) models. First, we study an ADS model for which we recover numerically the results expected from the analytical solution . Then we introduce a stochastic model which is a directed version of the Manna two-state sandpile . In this case, the set of critical exponents defines a different universality class. For both models we provide a very accurate study of finite size effects and the convergence to the asymptotic behavior. For small and medium lattice sizes we find scaling anomalies that are similar to those encountered in isotropic models. We also study in detail the geometrical structure of avalanches. The presence of multiple topplings appears to be the fundamental difference between Abelian and stochastic models. Numerical simulations in Euclidean dimension $`d>2`$ show that both universality classes have an upper critical dimension $`d_c=3`$, where strong logarithmic corrections to scaling are present.
We consider the following definition of an ADS model (see Fig. 1(a)): on each site of a $`d`$-dimensional hypercubic lattice of size $`L`$, we assign an integer variable $`z_i`$, called “energy”. At each time step, an energy grain is added to a randomly chosen site ($`z_i\to z_i+1`$). When a site acquires an energy greater than or equal to the threshold $`z_c=2d-1`$, it topples. Topplings are directed along a fixed direction $`x_{\parallel }`$ (usually defined as “downwards”): when a site on the hyperplane $`x_{\parallel }`$ topples, it sends deterministically one energy grain to each nearest and next-nearest neighbor site on the hyperplane $`x_{\parallel }+1`$, for a total of $`2d-1`$ grains. This definition differs from the ADS studied by Dhar and Ramaswamy in the orientation of the lattice. Both models, however, share the same universality class, being Abelian, deterministic, and directed.
The stochastic generalization of the above model is depicted in Fig. 1(b): the threshold is now $`z_c=2`$, independent of the spatial dimension. When a site at the hyperplane $`x_{\parallel }`$ topples, it sends two grains of energy to two sites, randomly chosen among its $`2d-1`$ neighbors in the hyperplane $`x_{\parallel }+1`$. The dynamical rule of this model is defined as exclusive if the two energy grains are always distributed on different sites. On the contrary, a nonexclusive dynamics allows the transfer of both energy grains to the same site. We will consider separately the cases of the exclusive stochastic directed sandpile (ESDS) and the nonexclusive stochastic directed sandpile (NESDS). It is worth remarking that stochasticity does not alter the Abelian nature of the model. All three models are locally conservative; no energy grains are lost during a toppling event. Boundary conditions are periodic in the transverse directions and open at the bottom hyperplane $`x_{\parallel }=L`$, from which energy can leave the system.
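A minimal sketch of one driving step of the stochastic model, written for the $`d=2`$ case (one transverse direction, hence three downward neighbors). The function names and the row-by-row update are our own implementation choices; the latter exploits the fact that, in a directed model, topplings only affect the next row.

```python
import random

def sds_avalanche(z, x0, y0, W, L, exclusive=True):
    """One driving step of the 2d stochastic directed sandpile.
    z is an L x W list of lists of energies (downward x, periodic y).
    A grain is added at (x0, y0); sites with z >= z_c = 2 topple, sending
    two grains to sites chosen at random among the 3 neighbours in the
    next row (two different sites if exclusive=True). Grains falling past
    the last row leave the system. Returns the number of topplings."""
    z[x0][y0] += 1
    size = 0
    for x in range(x0, L):
        active = [y for y in range(W) if z[x][y] >= 2]
        for y in active:
            while z[x][y] >= 2:            # multiple topplings are allowed
                z[x][y] -= 2
                size += 1
                nbrs = [(y - 1) % W, y, (y + 1) % W]
                targets = (random.sample(nbrs, 2) if exclusive
                           else [random.choice(nbrs), random.choice(nbrs)])
                if x + 1 < L:              # open boundary at the bottom row
                    for t in targets:
                        z[x + 1][t] += 1
        if not active:
            break                          # nothing below can be affected
    return size
```

Repeated driving with grains added at random sites relaxes the lattice to the stationary state in which the avalanche-size statistics discussed below are collected.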
In the critical stationary state, we can define the probability that the addition of a single grain is followed by an avalanche of toppling events. Avalanches are then characterized by the number of topplings $`s`$ and the duration $`t`$. According to the standard finite size scaling (FSS) hypothesis, the probability distributions of these quantities are described by the scaling functions
$$P(s)=s^{-\tau _s}𝒢(s/s_c),$$
(1)
$$P(t)=t^{-\tau _t}ℱ(t/t_c),$$
(2)
where $`s_c`$ and $`t_c`$ are the cutoff characteristic size and time, respectively. In the critical state the lattice size $`L`$ is the only characteristic length present in the system. Approaching the thermodynamic limit ($`L\to \mathrm{\infty }`$), the characteristic avalanche size and time diverge as $`s_c\sim L^D`$ and $`t_c\sim L^z`$, respectively. The exponent $`D`$ defines the fractal dimension of the avalanche cluster and $`z`$ is the usual dynamic critical exponent. The directed nature of the model introduces a drastic simplification, since it imposes $`z=1`$. A general result concerns the average avalanche size $`\langle s\rangle `$, which also scales linearly with $`L`$: a newly injected grain of energy has to travel, on average, a distance of order $`L`$ before reaching the boundary. In the stationary state, to each energy grain input must correspond, on average, an energy grain flowing out of the system. This implies that the average avalanche size corresponds to the number of topplings needed for a grain to reach the boundary; i.e. $`\langle s\rangle \sim L`$. The same result can be exactly obtained by inspecting the conservation symmetry of the model.
For the ADS, the exact analytical solution in $`d=2`$ yields the exponents $`\tau _s=4/3`$, $`\tau _t=3/2`$ and $`D=3/2`$. The upper critical dimension is found to be $`d_c=3`$, and it is also possible to find exactly the logarithmic corrections to scaling. The introduction of stochastic ingredients in the toppling dynamics of directed sandpiles has been studied only recently, in a model that randomly stores energy on each toppling. This model is strictly related to directed percolation and defines a universality class “per se”. In our case stochasticity affects only the partition of energy during topplings, and there is no analytical insight into the critical behavior of this model. In order to discriminate between the ADS and the SDS we perform simulations of both models for sizes ranging from $`L=100`$ to $`L=6400`$. Statistical distributions are obtained by averaging over $`10^7`$ avalanches. Comparison of numerical results on the ADS allows us to check the reliability and degree of convergence with respect to the lattice sizes used.
It is well known from the many numerical papers on sandpiles that an accurate determination of the exponents $`\tau _s`$ and $`\tau _t`$ is a subtle issue. An overall determination within a $`10\%`$ accuracy is a relatively easy task. However, a truly accurate measurement, allowing a precise discrimination of universality classes, is strongly affected by the lower and upper cut-offs of the distribution. Extrapolations and local slope analyses are often very complicated and the relative error bars are not clearly defined. In this respect, it is far better to calculate exponents by methods that contain the system-size dependence explicitly, namely data collapse and moment analysis. Moment analysis was introduced by De Menech et al. in the context of the two-dimensional BTW model, and it has been used extensively on Abelian and stochastic models. The $`q`$-moment of the avalanche size distribution on a lattice of size $`L`$, $`\langle s^q\rangle _L=\int s^qP(s)𝑑s`$, has the following size dependence
$$\langle s^q\rangle _L=L^{D(q+1-\tau _s)}\int y^{q-\tau _s}𝒢(y)𝑑y\sim L^{D(q+1-\tau _s)},$$
(3)
where we have used the transformation $`y=s/L^D`$ in the finite size scaling (FSS) form Eq. (1). More generally, $`\langle s^q\rangle _L\sim L^{\sigma _s(q)}`$, where the exponents $`\sigma _s(q)`$ can be obtained as the slope of the log-log plot of $`\langle s^q\rangle _L`$ versus $`L`$. Using Eq. (3), we obtain $`\langle s^{q+1}\rangle _L/\langle s^q\rangle _L\sim L^D`$, or $`\sigma _s(q+1)-\sigma _s(q)=D`$, so that the slope of $`\sigma _s(q)`$ as a function of $`q`$ is the cutoff exponent; i.e. $`D=\partial \sigma _s(q)/\partial q`$. This is not true for small $`q`$, because the integral in Eq. (3) is dominated by its lower cutoff. In particular, corrections to scaling are important for $`q\simeq \tau _s-1`$. An additional and strong check on the numerical data is provided by the fact that, as we have previously shown, the first moment of the size distribution must scale linearly with $`L`$. This last constraint also allows the evaluation of the exponent $`\tau _s`$ from the scaling relation $`(2-\tau _s)D=\sigma _s(1)=1`$, which should be satisfied for large enough sizes.
Along the same lines we can obtain the moments of the avalanche time distribution. In this case $`\langle t^q\rangle _L\sim L^{\sigma _t(q)}`$, with $`\partial \sigma _t(q)/\partial q=z`$. Analogous considerations for small $`q`$ apply also to the time moment analysis. Here, an estimate of the asymptotic convergence of the numerical results is provided by the constraint $`z=1`$, which holds for large enough sizes. The $`\tau _t`$ exponent can then be found using the scaling relation $`2-\tau _t=\sigma _t(1)`$.
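The moment analysis described above can be sketched as follows; the function names and the synthetic test data are ours, not taken from the simulations.

```python
import numpy as np

def sigma_s(sizes_by_L, qs):
    """Slopes sigma_s(q) of log <s^q>_L versus log L, one per q.
    sizes_by_L maps a lattice size L to an array of avalanche sizes."""
    Ls = np.array(sorted(sizes_by_L))
    out = []
    for q in qs:
        logm = [np.log(np.mean(np.asarray(sizes_by_L[L], dtype=float) ** q))
                for L in Ls]
        out.append(np.polyfit(np.log(Ls), logm, 1)[0])
    return np.array(out)

def fractal_dimension(sigma, qs):
    """D estimated from the large-q slope of sigma_s(q),
    using the last two moments, where lower-cutoff corrections vanish."""
    return (sigma[-1] - sigma[-2]) / (qs[-1] - qs[-2])
```

The same two functions applied to avalanche durations instead of sizes yield $`\sigma _t(q)`$ and $`z`$, with the sanity check $`\sigma _s(1)=1`$ and $`z=1`$ expected for large enough lattices.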
Despite the fact that the moment method is usually rather accurate, it must be corroborated by a data collapse analysis. The FSS of Eqs. (1,2) has to be verified and must be consistent with the numerical exponents obtained from the moment analysis. This can be done by rescaling $`s\to s/L^D`$ and $`P(s)\to P(s)L^{D\tau _s}`$ and, correspondingly, $`t\to t/L^z`$ and $`P(t)\to P(t)L^{z\tau _t}`$. Data for different $`L`$ must then collapse onto the same universal curve if the FSS hypothesis is satisfied. Complete consistency between the methods gives the best collapse with the exponents obtained by the moment analysis. In Table I we report the exponents found for the ADS, ESDS and NESDS in $`d=2`$. Figure 2 shows the moments $`\sigma _s(q)`$. Figures 3 and 4 plot the FSS data collapse for sizes and times, respectively.
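The rescaling used for the data collapse is a one-liner; the exponent values in the test are illustrative, not the measured ones of Table I.

```python
import numpy as np

def collapse(s, P, L, tau_s, D):
    """Rescale a binned avalanche-size distribution for an FSS data
    collapse: returns (s / L^D, P * L^(D * tau_s)). With the correct
    exponents, curves for different L fall on one master curve."""
    return s / L ** D, P * L ** (D * tau_s)
```

The quality of the overlay of the rescaled curves is what constrains the $`(\tau _s,D)`$ pair, complementing the moment estimates.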
The exponents obtained for the ADS are in perfect agreement with the expected analytical results. This fact supports the idea that the system sizes used in the present work allow to recover the correct asymptotic behavior. It is worth remarking that, for small and medium lattice sizes, both moments and data collapse analysis present scaling features that can not be reconciled in the single scaling picture usually considered. These anomalies are not persistent and disappear for reasonably large sizes ($`L10^3`$). This evidence for a slow decaying of finite size effects could shed light into several anomalies reported in isotropic sandpiles, for which, unfortunately, it is very difficult to reach very large sizes . Results for the ESDS and NESDS are identical within the error bars, indicating that these two models are in the same universality class. On the other hand, the obtained exponents show, beyond any doubt, that Abelian and stochastic directed sandpile models do not belong to the same universality class.
The compelling numerical evidence for two distinct universality classes does not, however, identify the basic mechanism at the origin of the different critical behavior. In order to gain deeper insight into the dynamics of the various models, we have inspected the geometric structure of the resulting avalanches. In Fig. 5 we depict in a color plot the local density of topplings in two avalanches of size 50 000 corresponding to the two-dimensional ADS and ESDS models. From the figure it becomes apparent that the stochastic dynamics introduces multiple toppling events, which are by definition absent in the Abelian case. This gives rise to very different avalanche structures, eventually reflected in the asymptotic critical behavior. In particular, the fractal dimension $`D`$ is indicative of the scaling of toppling events with size. In the stochastic case we recover a higher fractal dimension than in the Abelian case. The multiple toppling mechanism has been proposed in the past as the origin of differences between isotropic Abelian and stochastic sandpiles as well. In that case, however, multiple toppling is a common feature of both models, and for the largest sizes reached so far, they share the same fractal dimension $`D`$ .
Analysis of the models in three dimensions is strongly hindered by the presence of logarithmic corrections . Nonetheless, a naive application of the moment analysis yields values compatible with the mean-field results $`\tau _s=3/2`$, $`\tau _t=2`$, and $`D=2`$ . More interestingly, in Ref. the authors were able to deduce the exact form of the logarithmic corrections in $`d=3`$ for the avalanche time distribution, namely $`P(t)t^2\mathrm{ln}t`$. In Figure 6 we have checked that the same logarithmic corrections apply to both the Abelian and ESDS sandpiles. This remarkable fact lends support to the critical dimension of the stochastic model being $`d_c=3`$.
In summary, we have reported large scale numerical simulations of a stochastic directed sandpile model. This model unambiguously defines a universality class distinct from that of the Abelian directed sandpile model. The origin of this difference is traced back to the geometric structure of the avalanche clusters, providing new clues to understanding the effect of stochastic elements in the dynamics of avalanche processes.
This work has been supported by the European Network under Contract No. ERBFMRXCT980183. We thank D. Dhar, D. Dickman, M. A. Muñoz, A. Stella, and S. Zapperi for helpful comments and discussions.
# A stereological version of the Gauss-Bonnet formula
## 1. Introduction
Let $`D\subset S\subset \mathbb{R}^3`$ be a domain with boundary in an orientable smooth surface $`S`$ in $`\mathbb{R}^3`$. Then the Gauss-Bonnet formula gives a method to compute the Euler-Poincaré characteristic of $`D`$ in terms of the Gauss curvature $`K`$ of $`S`$, and the geodesic curvature $`k_g`$ of $`\partial D`$ in $`S`$:
$$2\pi \chi (D)=\int _DK+\int _{\partial D}k_g.$$
This can be found in any elementary book of Classical Differential Geometry. In this paper, we give a stereological version of this formula, in the following sense: let $`u\in S^2`$ and denote by $`\pi _{u,\lambda }`$ the plane orthogonal to $`u`$ given by the equation $`\langle x,u\rangle =\lambda `$. When $`\lambda `$ varies in $`\mathbb{R}`$, the different planes $`\pi _{u,\lambda }`$ can be considered as a “sweeping” plane in $`\mathbb{R}^3`$ and its contact with $`D`$ and with $`\partial D`$ will give a method to compute $`\chi (D)`$:
$$\chi (D)=\sum _{x\in A}\mathrm{sign}K(x)+\frac{1}{2}\sum _{y\in B}\mathrm{sign}(k_g(y)-k_g^u(y)).$$
Here, $`A`$ is the set of points $`x\in D`$ such that $`D`$ is tangent to $`\pi _{u,\lambda }`$ at $`x`$ for some $`\lambda `$, $`B`$ is the subset of points $`y\in \partial D`$ where $`\partial D`$ is tangent to $`\pi _{u,\lambda }\cap S`$ for some $`\lambda `$, and $`k_g^u`$ denotes the geodesic curvature of $`\pi _{u,\lambda }\cap S`$. We will show that for a generic $`u\in S^2`$:
1. the sets $`A,B`$ are finite;
2. if $`x\in A`$, then $`K(x)\ne 0`$;
3. if $`y\in B`$, then $`\pi _{u,\lambda }\cap S`$ is a regular curve in $`S`$ and $`k_g(y)\ne k_g^u(y)`$.
In this way, the above formula makes sense for almost any $`u\in S^2`$ (that is, for any $`u\in S^2\setminus N`$, where $`N`$ is a null set in $`S^2`$). Moreover, this generalizes the result of , which computes the Euler-Poincaré characteristic of a plane domain $`D\subset \mathbb{R}^2`$ by looking at contact with lines.
## 2. Proof of the formula
The classical Poincaré-Hopf Theorem states that if $`S`$ is a closed orientable smooth surface and $`v`$ is a smooth vector field on $`S`$ with isolated zeros, then
$$\chi (S)=\sum _{v(x)=0}\mathrm{Ind}_x(v),$$
where $`\chi (S)`$ is the Euler-Poincaré characteristic of $`S`$ and the index, $`\mathrm{Ind}_x(v)`$, is just the local degree of $`v`$ at $`x`$. As an immediate consequence, we get that if $`f:S\to \mathbb{R}`$ is a Morse function (that is, a function with non-degenerate critical points), then
$$\chi (S)=\sum _{x\in \mathrm{\Sigma }(f)}\mathrm{Ind}_x(f),$$
where $`\mathrm{\Sigma }(f)`$ denotes the set of singular points of $`f`$ and the index, $`\mathrm{Ind}_x(f)`$, is given by $`+1`$ when $`x`$ is a local extreme of $`f`$, or $`-1`$ when $`x`$ is a saddle point.
The Poincaré-Hopf Theorem was generalized for surfaces with boundary by Morse in the following way: suppose that $`D`$ is a compact orientable smooth surface with boundary (that we can assume embedded in an orientable smooth surface $`S`$ without boundary) and let $`v`$ be a smooth vector field on $`D`$ such that
1. $`v`$ has isolated zeros on $`D`$,
2. $`v`$ is not zero on $`\partial D`$, and
3. $`v`$ is tangent to $`\partial D`$ only at a finite number of points.
If $`y\in \partial D`$ is a point where $`v`$ is tangent to $`\partial D`$, generically we have that the integral curve of $`v`$ at $`y`$ is locally contained in $`S\setminus D`$ or in $`D`$, and we can assign an index $`+1`$ or $`-1`$ respectively. In the general case, we can extend this by using the Transversality Theorem and define an index $`\mathrm{Ind}_y(v)\in \{-1,0,+1\}`$. Then, the Morse Theorem states that
$$\chi (D)=\sum _{v(x)=0}\mathrm{Ind}_x(v)+\frac{1}{2}\sum _{v(y)\in T_y\partial D}\mathrm{Ind}_y(v).$$
Now, we can interpret this in terms of critical points of functions. Suppose that $`f:D\to \mathbb{R}`$ is a Morse function such that $`f`$ has no critical points on $`\partial D`$ and the restriction $`f|_{\partial D}:\partial D\to \mathbb{R}`$ is also a Morse function. For each point $`y\in \mathrm{\Sigma }(f|_{\partial D})`$, we have that the level set $`f^{-1}(f(y))`$ is locally contained either in $`S\setminus D`$ or in $`D`$. In the first case, we say that $`y`$ is a point of type “island” and we assign an index $`\mathrm{Ind}_y(f)=+1`$; in the second case, we say that it is a point of type “bridge” and assign an index $`\mathrm{Ind}_y(f)=-1`$ (see Figure 1).
a) “Island” b) “Bridge”
Figure 1
###### Lemma 2.1.
Let $`D`$ be a compact orientable smooth surface with boundary and let $`f:D\to \mathbb{R}`$ be a Morse function such that $`f`$ has no critical points on $`\partial D`$ and the restriction $`f|_{\partial D}:\partial D\to \mathbb{R}`$ is also a Morse function. Then,
$$\chi (D)=\sum _{x\in \mathrm{\Sigma }(f)}\mathrm{Ind}_x(f)+\frac{1}{2}\sum _{y\in \mathrm{\Sigma }(f|_{\partial D})}\mathrm{Ind}_y(f).$$
An interesting application of this formula is obtained when $`D\subset S\subset \mathbb{R}^3`$ and we consider the family of height functions. Given $`u\in S^2`$, we define the height function $`h_u:\mathbb{R}^3\to \mathbb{R}`$ by $`h_u(x)=\langle x,u\rangle `$. When we restrict this function to $`D`$ (or $`S`$), the level sets are the plane curves $`\pi _{u,\lambda }\cap D`$ (resp. $`\pi _{u,\lambda }\cap S`$), where $`\pi _{u,\lambda }`$ is the plane $`h_u^{-1}(\lambda )`$. In particular, if $`\lambda `$ is a regular value of $`h_u|_D`$ (and hence of $`h_u|_S`$), we have that $`\pi _{u,\lambda }\cap S`$ is in fact a smooth curve in $`S`$.
We will give a geometrical interpretation of the critical points of $`h_u|_D`$ and $`h_u|_{\partial D}`$ and the corresponding indices. Before that, we need to fix some notation. Given $`x\in D`$, we denote by $`N(x)`$ and $`K(x)`$ the normal vector and the Gauss curvature of $`D`$ at $`x`$ respectively. Given $`y\in \partial D`$, we denote by $`n(y)`$ and $`k_g(y)`$ the normal vector and the geodesic curvature of $`\partial D`$ in $`D`$ at $`y`$ respectively (we choose the orientation in $`\partial D`$ so that $`n(y)`$ points to the interior of $`D`$). Finally, we denote by $`k_g^u(y)`$ the geodesic curvature of the curve $`\pi _{u,\lambda }\cap S`$ at a regular point $`y`$ of $`h_u|_D`$ (in the case that $`\pi _{u,\lambda }\cap S`$ and $`\partial D`$ are tangent at $`y`$, we choose in $`\pi _{u,\lambda }\cap S`$ the same orientation).
###### Theorem 2.2.
Let $`D\subset S\subset \mathbb{R}^3`$ be a domain with boundary in an orientable smooth surface $`S`$ in $`\mathbb{R}^3`$ and let $`u\in S^2`$.
1. Let $`x\in D`$. It is a critical point of $`h_u|_D`$ if and only if $`\pi _{u,\lambda }`$ is tangent to $`D`$ at $`x`$ (where $`h_u(x)=\lambda `$) if and only if $`u=\pm N(x)`$.
2. Let $`x\in D`$ be a critical point of $`h_u|_D`$. It is non-degenerate if and only if $`K(x)\ne 0`$. Moreover, it is a local extreme of $`h_u|_D`$ when $`K(x)>0`$ and a saddle point when $`K(x)<0`$.
3. Let $`y\in \partial D`$ be a regular point of $`h_u|_D`$. It is a critical point of $`h_u|_{\partial D}`$ if and only if $`\pi _{u,\lambda }\cap S`$ is tangent to $`\partial D`$ at $`y`$ if and only if $`\stackrel{~}{u}=\pm n(y)`$, where $`\stackrel{~}{u}`$ denotes the normalized orthogonal projection of $`u`$ in $`T_yD`$.
4. Let $`y\in \partial D`$ be a regular point of $`h_u|_D`$ which is also a critical point of $`h_u|_{\partial D}`$. It is non-degenerate if and only if $`k_g(y)\ne k_g^u(y)`$. Moreover, it is an island when $`k_g(y)>k_g^u(y)`$ and a bridge when $`k_g(y)<k_g^u(y)`$.
###### Proof.
Parts (1) and (2) are standard results on Generic Geometry and can be found for instance in . We will prove (3) and (4). Suppose that $`\alpha (s)`$ is a parametrization of $`\partial D`$ by arc length, so that $`y=\alpha (s_0)`$ and $`n(y)=N(y)\wedge \alpha '(s_0)`$. Since $`\{N(y),\alpha '(s_0),n(y)\}`$ is an orthonormal basis of $`\mathbb{R}^3`$, we have that
$$u=\langle u,N(y)\rangle N(y)+\langle u,\alpha '(s_0)\rangle \alpha '(s_0)+\langle u,n(y)\rangle n(y).$$
In particular, provided that $`u\ne \pm N(y)`$, we have that $`\stackrel{~}{u}=v/\|v\|`$, where
$$v=\langle u,\alpha '(s_0)\rangle \alpha '(s_0)+\langle u,n(y)\rangle n(y).$$
Thus, $`\stackrel{~}{u}=\pm n(y)`$ if and only if $`\langle u,\alpha '(s_0)\rangle =0`$ if and only if $`y`$ is a critical point of $`h_u|_{\partial D}`$.
On the other hand, let $`\beta (s)`$ be a parametrization of $`\pi _{u,\lambda }\cap S`$ by arc length, with $`\beta (s_0)=y`$. In this case, $`\langle u,\beta (s)\rangle =\lambda `$ for all $`s`$ and thus $`\langle u,\beta '(s_0)\rangle =0`$. This gives that $`\stackrel{~}{u}`$ is also equal to the normal vector to $`\beta '(s_0)`$ in $`T_yD`$. In particular, $`y`$ is a critical point of $`h_u|_{\partial D}`$ if and only if the two curves are tangent.
Suppose now that $`y`$ is a critical point of $`h_u|_{\partial D}`$. We have that
$$\alpha ''(s_0)=k_n(y)N(y)+k_g(y)n(y),$$
where $`k_n(y)`$ is the normal curvature of $`\alpha `$. This gives that
$$\langle u,\alpha ''(s_0)\rangle =k_n(y)\langle u,N(y)\rangle +k_g(y)\langle u,n(y)\rangle .$$
But we can do the same with $`\beta (s)`$, getting
$$0=\langle u,\beta ''(s_0)\rangle =k_n(y)\langle u,N(y)\rangle +k_g^u(y)\langle u,n(y)\rangle .$$
This implies that
$$\langle u,\alpha ''(s_0)\rangle =(k_g(y)-k_g^u(y))\langle u,n(y)\rangle .$$
Since $`\langle u,n(y)\rangle \ne 0`$, we conclude that $`y`$ is a non-degenerate critical point of $`h_u|_{\partial D}`$ if and only if $`k_g(y)\ne k_g^u(y)`$. The fact that the cases $`k_g(y)>k_g^u(y)`$ and $`k_g(y)<k_g^u(y)`$ correspond to an island or a bridge respectively can be deduced from the way we have chosen the orientation on $`\partial D`$. ∎
The following consequence can be seen as a stereological version of the Gauss-Bonnet formula.
###### Corollary 2.3.
Let $`D\subset S\subset \mathbb{R}^3`$ be a domain with boundary in an orientable smooth surface $`S`$ in $`\mathbb{R}^3`$. For a generic $`u\in S^2`$ the height function $`h_u|_D`$ is a Morse function, has no critical points on $`\partial D`$ and its restriction to $`\partial D`$ is also a Morse function. In particular, we have that
$$\chi (D)=\sum _{x\in D/N(x)=\pm u}\mathrm{sign}K(x)+\frac{1}{2}\sum _{y\in \partial D/n(y)=\pm \stackrel{~}{u}}\mathrm{sign}(k_g(y)-k_g^u(y)).$$
###### Proof.
The formula for $`\chi (D)`$ is a direct consequence of Lemma 2.1 and Theorem 2.2. We will see the first part of the statement. Given $`u\in S^2`$, $`h_u|_D`$ is a Morse function if and only if $`u`$ is a regular value of the Gauss map $`N:D\to S^2`$ (just note that the points $`x\in D`$ where $`K(x)=0`$ are the critical points of the Gauss map). Therefore, the Sard Theorem implies that for a generic $`u\in S^2`$, $`h_u|_D`$ is a Morse function. The other two conditions are also an easy consequence of this theorem. ∎
It is also interesting to look at some particular cases. Suppose, for instance, that $`S`$ is a plane, namely $`S=\mathbb{R}^2`$. Then we can use height functions $`h_u:\mathbb{R}^2\to \mathbb{R}`$ for $`u\in S^1`$, instead of the height functions of $`\mathbb{R}^3`$. The level curves in this case are the lines orthogonal to $`u`$, which have zero curvature. Moreover, $`h_u`$ has no critical points in $`\mathbb{R}^2`$ and the geodesic curvature is equal to the classical curvature of a plane curve. Thus, we get the following method to obtain the Euler number of a domain in $`\mathbb{R}^2`$ based on 1-dimensional observations, a modification of the result of , which has been used in Stereology .
###### Corollary 2.4.
Let $`D\subset \mathbb{R}^2`$ be a domain with boundary. For a generic $`u\in S^1`$ the restriction of the height function $`h_u|_{\partial D}`$ is a Morse function and
$$\chi (D)=\frac{1}{2}\sum _{y\in \partial D/n(y)=\pm u}\mathrm{sign}k(y).$$
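Corollary 2.4 lends itself to a direct numerical experiment. In the sketch below (the ellipse, the hole, and the direction $`u`$ are arbitrary test choices) the tangency points $`n(y)=\pm u`$ are located by bisection and the signs of the curvature are summed, recovering $`\chi =1`$ for a disc-like domain and $`\chi =0`$ once a hole is added; note that only the derivatives of the boundary parametrizations enter the computation:

```python
import math

def tangency_signs(d1, d2, u, n=20000):
    """Locate parameters t with <gamma'(t), u> = 0 (boundary tangent to a
    line orthogonal to u) and return the sign of the signed curvature
    there.  Curves are oriented with the domain on their left, so the
    curvature is positive on an outer convex boundary and negative on the
    boundary of a hole."""
    def g(t):
        dx, dy = d1(t)
        return dx * u[0] + dy * u[1]
    signs = []
    ts = [2.0 * math.pi * i / n for i in range(n + 1)]
    for t0, t1 in zip(ts, ts[1:]):
        if g(t0) * g(t1) < 0.0:
            a, b = t0, t1
            for _ in range(60):            # bisection for the tangency point
                m = 0.5 * (a + b)
                if g(a) * g(m) <= 0.0:
                    b = m
                else:
                    a = m
            dx, dy = d1(0.5 * (a + b))
            ddx, ddy = d2(0.5 * (a + b))
            signs.append(1 if dx * ddy - dy * ddx > 0.0 else -1)
    return signs

u = (math.cos(0.3), math.sin(0.3))         # a generic direction (assumption)

# outer boundary: an ellipse traversed counterclockwise
ell_d1 = lambda t: (-2.0 * math.sin(t), math.cos(t))
ell_d2 = lambda t: (-2.0 * math.cos(t), -math.sin(t))
# boundary of a small circular hole, traversed clockwise
hole_d1 = lambda t: (-0.3 * math.sin(t), -0.3 * math.cos(t))
hole_d2 = lambda t: (-0.3 * math.cos(t), 0.3 * math.sin(t))

outer = tangency_signs(ell_d1, ell_d2, u)
inner = tangency_signs(hole_d1, hole_d2, u)
chi_ellipse = sum(outer) // 2                  # disc-like domain: chi = 1
chi_annulus = (sum(outer) + sum(inner)) // 2   # one hole: chi = 0
assert chi_ellipse == 1 and chi_annulus == 0
```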
In the case that $`S=S^2`$, the level curves associated to $`u\in S^2`$ are the corresponding parallels. Note that $`h_u|_{S^2}`$ is a Morse function with two critical points, $`u`$ and $`-u`$. Moreover, if we denote by $`\theta _u,\gamma _u`$ the spherical coordinates associated to $`u`$, we have that the geodesic curvature of the parallel $`\gamma _u=\gamma _0`$ is $`-\mathrm{tan}\gamma _0`$.
###### Corollary 2.5.
Let $`D\subset S^2`$ be a domain with boundary. For a generic $`u\in S^2`$,
$$\chi (D)=\frac{1}{2}\sum _{y\in \partial D/n(y)=\pm \stackrel{~}{u}}\mathrm{sign}(k_g(y)+\mathrm{tan}\gamma _u(y))+\mathrm{\#}(\{u,-u\}\cap D),$$
where $`\mathrm{\#}(\{u,-u\}\cap D)`$ means the number of times that $`u`$ or $`-u`$ belong to $`D`$.
It is also possible to compute the Euler number of $`D\subset S^2`$ from observations on a meridian that sweeps through $`D`$. This is not a consequence of the construction associated to the height functions, but of a different family of functions. Let $`u\in S^2`$ and let $`S^1`$ be the circle in $`S^2`$ orthogonal to $`u`$. Then, we have the regular function $`\theta _u:S^2\setminus \{u,-u\}\to S^1`$, whose level sets are the meridians $`\theta _u=\theta _0`$ (which have zero geodesic curvature). Moreover, for a generic $`u\in S^2`$, the restriction of $`\theta _u`$ to $`\partial D`$ is a Morse function, .
###### Corollary 2.6.
Let $`D\subset S^2`$ be a domain with boundary. For a generic $`u\in S^2`$,
$$\chi (D)=\frac{1}{2}\sum _{y\in \partial D/n(y)\perp \stackrel{~}{u}}\mathrm{sign}k_g(y)+\mathrm{\#}(\{u,-u\}\cap D).$$
###### Proof.
Let $`S=S^2\setminus \{u,-u\}`$ and let $`\stackrel{~}{D}`$ be the domain with boundary in $`S`$ given by $`D\setminus (B_1\cup B_2)`$, where $`B_1,B_2`$ are small geodesic balls centered at $`u,-u`$ which do not intersect $`\partial D`$. It follows from Lemma 2.1 that
$$\chi (\stackrel{~}{D})=\frac{1}{2}\sum _{y\in \partial D/n(y)\perp \stackrel{~}{u}}\mathrm{sign}k_g(y).$$
On the other hand, $`\chi (D)=\chi (\stackrel{~}{D})+\mathrm{\#}(\{u,-u\}\cap D)`$, which concludes the proof. ∎
###### Remark 2.7.
The preceding result may be extended to ovaloids in the following way: let $`S`$ be an ovaloid (a compact and connected surface in $`\mathbb{R}^3`$ for which the Gaussian curvature $`K>0`$) and $`\stackrel{~}{D}`$ a domain in $`S`$; then, $`S`$ is diffeomorphic to the sphere $`S^2`$ through its Gauss map. Let $`D\subset S^2`$ denote the image of $`\stackrel{~}{D}`$ under this diffeomorphism; then, $`\chi (D)=\chi (\stackrel{~}{D})`$ and it is possible to obtain $`\chi (\stackrel{~}{D})`$ from observations in the curves which are the images of the meridians in $`S^2`$.
## 3. Stereological applications
The Euler-Poincaré characteristic for domains in $`\mathbb{R}^2`$ and $`\mathbb{R}^3`$ has been studied in several stereological applications (see, for instance ). The principle used to obtain the Euler number of an $`n`$-dimensional domain in $`\mathbb{R}^n`$ ($`n=2`$ or $`3`$) is based on what happens in an $`(n-1)`$-dimensional plane that sweeps through the domain. For domains in $`\mathbb{R}^2`$, this principle is based on Corollary 2.4. Now, we will extend this principle to domains in a surface.
Let $`D\subset S\subset \mathbb{R}^3`$ be a domain with boundary in an orientable smooth surface $`S`$ in $`\mathbb{R}^3`$ and let $`u\in S^2`$. When $`\lambda `$ varies in $`\mathbb{R}`$, the different planes $`\pi _{u,\lambda }`$ can be considered as a “sweeping” plane in $`\mathbb{R}^3`$, and Corollary 2.3 can be expressed as:
$$\chi (D)=(I_2-B_2)+\frac{1}{2}(I_1-B_1),$$
where $`I_1,B_1`$ denote the number of islands and bridges, respectively, observed in the level curves $`\pi _{u,\lambda }\cap S`$ (see Figure 2) and $`I_2,B_2`$, similarly to , denote the number of islands and bridges observed in the “sweeping” plane $`\pi _{u,\lambda }`$, which contribute to the sum $`\sum _{x\in \mathrm{\Sigma }(h_u)}\mathrm{Ind}_x(h_u)`$ (see Figure 2).
Figure 2
# Holography and Rotating AdS Black Holes
## I Introduction
One of the hallmarks of the duality revolution in string theory has been the linking of apparently unrelated areas in physics via unexpected pathways. The AdS/CFT correspondence is a striking example of this; by analyzing Dirichlet three-branes, it connects gravity in a particular background to a strongly coupled gauge theory. More specifically, the correspondence says that IIB string theory in a background of five-dimensional anti-de Sitter space times a five-sphere is dual to the large N limit of $`𝒩=4`$ supersymmetric Yang-Mills theory in four dimensions. (See for an extensive review of this vast subject.)
From a supergravity point of view, a Dirichlet p-brane is simply a charged extended black hole. Moreover, the string theory description of D-branes allows one to determine its world-volume action; to lowest order in $`\alpha '`$ this is super Yang-Mills. In earlier work , the entropy of the black brane solution was correctly given to within a numerical coefficient by the entropy of the field theory on the brane. Later, this match was extended to the case of rotating branes .
The AdS/CFT correspondence extends the relationship between gauge theory and gravity from providing a description of a particular brane solution to describing the physics of the entire supergravity background by a dual conformal field theory in one dimension less. As such this realizes the principle of holography , the notion that the physics of the bulk is imprinted on its boundary.
Black holes provide an arena in which this correspondence between gravity and gauge theory may be examined. For nonrotating AdS black holes , the thermodynamics has been described by thermal conformal field theory . Recently a five-dimensional rotating black hole embedded in anti-de Sitter space has been discovered . Since rotation introduces an extra dimensionful parameter, the conformal field theory entropy is not so tightly constrained by the combination of extensivity and dimensional analysis; a successful correspondence between thermodynamic quantities is much more nontrivial. Our purpose in this paper, therefore, is to probe the correspondence by extracting the thermodynamics of the new rotating black hole from a dual conformal field theory in four dimensions.
We begin by demonstrating the holographic nature of the duality for nonrotating black holes: the thermodynamics of a nonrotating black hole in anti-de Sitter space emerges from a thermal conformal field theory whose thermodynamic variables are read off from the boundary of the black hole spacetime. In the high temperature limit, the field theory calculation gives the correct entropy of the Hawking-Page black hole up to a factor of 4/3.
We then describe the new rotating Kerr-AdS black hole solution and show how its thermodynamic properties can be recovered from the dual field theory, in the high temperature limit. In that limit, the entropy, energy and angular momentum, as derived from the statistical mechanics of the field theory, all agree with their gravitational counterparts, again up to a common factor of 4/3.
## II AdS/CFT Correspondence for Nonrotating Holes
The five-dimensional Einstein-Hilbert action with a cosmological constant is given by
$$I=\frac{1}{16\pi G_5}\int d^5x\sqrt{-g}\left(R+12l^2\right),$$
(1)
where $`G_5`$ is the five-dimensional Newton constant, $`R`$ is the Ricci scalar, the cosmological constant is $`\mathrm{\Lambda }=-6l^2`$, and we have neglected a surface term at infinity. Anti-de Sitter solutions derived from this action can be embedded in ten-dimensional IIB supergravity such that the supergravity background is of the form $`AdS_5\times S^5`$. The AdS/CFT correspondence then states that there is a dual conformal field theory in four dimensions from which one can extract the physics.
The line element of a “Schwarzschild” black hole in anti-de Sitter space in five spacetime dimensions can be written as
$$ds^2=-\left(1-\frac{2MG_5}{r^2}+r^2l^2\right)dt^2+\left(1-\frac{2MG_5}{r^2}+r^2l^2\right)^{-1}dr^2+r^2d\mathrm{\Omega }_3^2.$$
(2)
This solution has a horizon at $`r=r_+`$ where
$$r_+^2=\frac{1}{2l^2}\left(-1+\sqrt{1+8MG_5l^2}\right).$$
(3)
The substitution $`\tau =it`$ makes the metric positive definite and, by the usual removal of the conical singularity at $`r_+`$, yields a periodicity in $`\tau `$ of
$$\beta =\frac{2\pi r_+}{1+2r_+^2l^2},$$
(4)
which is identified with the inverse temperature of the black hole. The entropy is given by
$$S=\frac{A}{4G_5}=\frac{\pi ^2r_+^3}{2G_5},$$
(5)
where $`A`$ is the “area” (that is 3-volume) of the horizon.
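As a quick numerical sanity check (the values of $`M`$, $`l`$ and $`G_5`$ below are arbitrary test choices), the horizon condition $`1-2MG_5/r^2+r^2l^2=0`$, solved as a quadratic in $`r^2`$, gives $`r_+^2=(\sqrt{1+8MG_5l^2}-1)/(2l^2)`$, and the surface-gravity temperature $`T=f'(r_+)/4\pi `$ reproduces the inverse temperature of Eq. (4):

```python
import math

l, G5, M = 1.0, 1.0, 3.0        # arbitrary test values (assumption)

# metric function of Eq. (2)
f = lambda r: 1.0 - 2.0 * M * G5 / r**2 + r**2 * l**2

# larger root of the quadratic in r^2 obtained from f(r) = 0
rp = math.sqrt((math.sqrt(1.0 + 8.0 * M * G5 * l**2) - 1.0) / (2.0 * l**2))
assert abs(f(rp)) < 1e-12

# Hawking temperature from the surface gravity, T = f'(rp)/(4 pi),
# reproduces the inverse temperature of Eq. (4)
fprime = 4.0 * M * G5 / rp**3 + 2.0 * rp * l**2
beta_surface_gravity = 4.0 * math.pi / fprime
beta_eq4 = 2.0 * math.pi * rp / (1.0 + 2.0 * rp**2 * l**2)
assert abs(beta_surface_gravity - beta_eq4) < 1e-12
```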
We shall take the dual conformal field theory to be $`𝒩=4`$, U(N) super-Yang-Mills theory. But since it is only possible to do calculations in the weak coupling regime, we shall consider only the free field limit of Yang-Mills theory. Then, in the high-energy regime which dominates the state counting, the spectrum of free fields on a sphere is essentially that of blackbody radiation in flat space, with $`8N^2`$ bosonic and $`8N^2`$ fermionic degrees of freedom. The entropy is therefore
$$S_{\mathrm{CFT}}=\frac{2}{3}\pi ^2N^2V_{\mathrm{CFT}}T_{\mathrm{CFT}}^3.$$
(6)
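The coefficient in Eq. (6) follows from the standard blackbody entropy density $`s=(2\pi ^2/45)(g_b+\frac{7}{8}g_f)T^3`$ with $`g_b=g_f=8N^2`$; a one-line check with exact fractions:

```python
from fractions import Fraction

# blackbody entropy density: s = (2 pi^2/45) (g_b + (7/8) g_f) T^3
g_b = g_f = 8                   # 8 bosonic and 8 fermionic dof per N^2
coeff = Fraction(2, 45) * (g_b + Fraction(7, 8) * g_f)
assert coeff == Fraction(2, 3)  # hence S = (2/3) pi^2 N^2 V T^3, Eq. (6)
```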
We would like to evaluate this “holographically”, i.e. by substituting physical data taken from the boundary of the black hole spacetime. At fixed $`r=r_0\gg r_+`$, the boundary line element tends to
$$ds^2\to r_0^2\left[-l^2dt^2+d\mathrm{\Omega }_3^2\right].$$
(7)
The physical temperature at the boundary is consequently red-shifted to
$$T_{\mathrm{CFT}}=\frac{T_{BH}}{\sqrt{g_{tt}}}=\frac{T_{BH}}{lr_0},$$
(8)
while the volume is
$$V_{\mathrm{CFT}}=2\pi ^2r_0^3.$$
(9)
To obtain an expression for $`N`$, we invoke the AdS/CFT correspondence. Originating in the near horizon geometry of the D3-brane solution in IIB supergravity, the correspondence relates $`N`$ to the radius of $`S^5`$ and the cosmological constant:
$$R_{S^5}^2=\sqrt{4\pi g_s\alpha ^{\prime 2}N}=\frac{1}{l^2}.$$
(10)
Then, since
$$(2\pi )^7g_s^2\alpha ^{\prime 4}=16\pi G_{10}=16\frac{\pi ^4}{l^5}G_5,$$
(11)
we have
$$N^2=\frac{\pi }{2l^3G_5}.$$
(12)
Substituting the expressions for $`N`$, $`V_{\mathrm{CFT}}`$ and $`T_{\mathrm{CFT}}`$ into Eq. (6), we obtain
$$S_{\mathrm{CFT}}=\frac{1}{12}\frac{\pi ^2}{l^6G_5}\left(\frac{1+2r_+^2l^2}{r_+}\right)^3,$$
(13)
which, in the high temperature limit $`r_+l\gg 1`$, reduces to
$$S_{\mathrm{CFT}}=\frac{2}{3}\frac{\pi ^2r_+^3}{G_5}=\frac{4}{3}S_{\mathrm{BH}},$$
(14)
in agreement with the black hole result, Eq. (5), but for a numerical factor of 4/3.
Similarly, the red-shifted energy of the conformal field theory matches the black hole mass, modulo a coefficient. The mass above the anti-de Sitter background is
$$M^{\prime }=\frac{3\pi }{4}M.$$
(15)
This is the AdS equivalent of the ADM mass, or energy-at-infinity. The corresponding expression in the field theory is
$$U_{\mathrm{CFT}}^{\infty }=\sqrt{g_{tt}}\frac{\pi ^2}{2}N^2V_{\mathrm{CFT}}T_{\mathrm{CFT}}^4=\frac{\pi r_+^4l^2}{2G_5}=\frac{4}{3}M^{\prime },$$
(16)
where $`U_{\mathrm{CFT}}^{\infty }`$ is the conformal field theory energy red-shifted to infinity, and we have again taken the $`r_+l\gg 1`$ limit. The $`4/3`$ discrepancy in Eqs. (14) and (16) is construed to be an artifact of having calculated the gauge theory entropy in the free field limit rather than in the strong coupling limit required by the correspondence; intuitively, one expects the free energy to decrease when the coupling increases. The 4/3 factor was first noticed in the context of D3-brane thermodynamics . Our approach differs in that we take the idea of holography at face value, by explicitly reading physical data from the boundary of spacetime; nonetheless, Eq. (12) refers to an underlying brane solution.
At this level, the correspondence only goes through in the high temperature limit. Since the only two scales in the thermal conformal field theory are $`r_0`$ and $`T_{\mathrm{CFT}}`$, high temperature means that $`T_{\mathrm{CFT}}\gg 1/r_0`$, allowing us to neglect finite-size effects.
## III Five-Dimensional Rotating AdS Black Holes
The general rotating black hole in five dimensions has two independent angular momenta. Here we consider the case of a rotating black hole with one angular momentum in an ambient AdS space. The line element is
$`ds^2`$ $`=`$ $`-{\displaystyle \frac{\mathrm{\Delta }}{\rho ^2}}\left(dt-{\displaystyle \frac{a\mathrm{sin}^2\theta }{\mathrm{\Xi }}}d\varphi \right)^2+{\displaystyle \frac{\mathrm{\Delta }_\theta \mathrm{sin}^2\theta }{\rho ^2}}\left(adt-{\displaystyle \frac{\left(r^2+a^2\right)}{\mathrm{\Xi }}}d\varphi \right)^2`$ (18)
$`+{\displaystyle \frac{\rho ^2}{\mathrm{\Delta }}}dr^2+{\displaystyle \frac{\rho ^2}{\mathrm{\Delta }_\theta }}d\theta ^2+r^2\mathrm{cos}^2\theta d\psi ^2,`$
where $`0\le \varphi ,\psi \le 2\pi `$ and $`0\le \theta \le \pi /2`$, and
$`\mathrm{\Delta }`$ $`=`$ $`\left(r^2+a^2\right)\left(1+r^2l^2\right)-2MG_5`$ (19)
$`\mathrm{\Delta }_\theta `$ $`=`$ $`1-a^2l^2\mathrm{cos}^2\theta `$ (20)
$`\rho ^2`$ $`=`$ $`r^2+a^2\mathrm{cos}^2\theta `$ (21)
$`\mathrm{\Xi }`$ $`=`$ $`1-a^2l^2.`$ (22)
This solution is an anti-de Sitter space with curvature given by
$$R_{ab}=-4l^2g_{ab}.$$
(23)
The horizon is at
$$r_+^2=\frac{1}{2l^2}\left(-(1+a^2l^2)+\sqrt{(1-a^2l^2)^2+8MG_5l^2}\right),$$
(24)
which can be inverted to give
$$MG_5=\frac{1}{2}(r_+^2+a^2)(1+r_+^2l^2).$$
(25)
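Equation (25) and the horizon condition $`\mathrm{\Delta }(r_+)=0`$ can be cross-checked numerically; in the sketch below (parameter values are arbitrary test choices) $`r_+^2`$ is also recovered as the larger root of the quadratic in $`r^2`$:

```python
import math

l, G5, a, rp = 1.0, 1.0, 0.5, 2.0      # arbitrary test values (assumption)

# Eq. (25): M G5 = (rp^2 + a^2)(1 + rp^2 l^2)/2
MG5 = 0.5 * (rp**2 + a**2) * (1.0 + rp**2 * l**2)

# Delta of Eq. (19) must vanish at the horizon
Delta = lambda r: (r**2 + a**2) * (1.0 + r**2 * l**2) - 2.0 * MG5
assert abs(Delta(rp)) < 1e-12

# rp^2 is the larger root of l^2 x^2 + (1 + a^2 l^2) x + (a^2 - 2 M G5) = 0
A, B, C = l**2, 1.0 + a**2 * l**2, a**2 - 2.0 * MG5
root = (-B + math.sqrt(B**2 - 4.0 * A * C)) / (2.0 * A)
assert abs(root - rp**2) < 1e-12
```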
The entropy is one-fourth the “area” of the horizon:
$$S=\frac{1}{2G_5}\frac{\pi ^2\left(r_+^2+a^2\right)r_+}{\left(1-a^2l^2\right)}.$$
(26)
The entropy diverges in two different limits: $`r_+\to \infty `$ and $`a^2l^2\to 1`$. The first of these describes an infinite temperature and infinite radius black hole, while the second corresponds to the “critical angular velocity”, at which the Einstein universe at infinity has to rotate at the speed of light. The inverse Hawking temperature is
$$\beta =\frac{2\pi \left(r_+^2+a^2\right)}{r_+\left(1+a^2l^2+2r_+^2l^2\right)}.$$
(27)
The mass above the anti-de Sitter background is now
$$M^{\prime }=\frac{3\pi }{4\mathrm{\Xi }}M,$$
(28)
the angular velocity at the horizon is
$$\mathrm{\Omega }_H=\frac{a\mathrm{\Xi }}{r_+^2+a^2},$$
(29)
and the angular momentum is defined as
$$J_\varphi =\frac{1}{16\pi }\int _S\epsilon _{abcde}\nabla ^d\psi ^edS^{abc}=\frac{\pi Ma}{2\mathrm{\Xi }^2},$$
(30)
where $`\psi ^a=\left(\frac{\partial }{\partial \varphi }\right)^a`$ is the Killing vector conjugate to the angular momentum in the $`\varphi `$ direction, and $`S`$ is the boundary of a hypersurface normal to $`\left(\frac{\partial }{\partial t}\right)^a`$, with $`dS^{abc}`$ being the volume element on $`S`$.
Following methods discussed in , one can derive a finite action for this solution from the regularized spacetime volume after an appropriate matching of hypersurfaces at large $`r`$. The result is
$$I=\frac{\pi ^2\left(r_+^2+a^2\right)^2(1-r_+^2l^2)}{4G_5\mathrm{\Xi }r_+(1+a^2l^2+2r_+^2l^2)}.$$
(31)
As noted in , the action changes sign at $`r_+l=1`$, signalling the presence of a phase transition in the conformal field theory. For $`r_+l>1`$, the theory is in an unconfined phase and has a free energy proportional to $`N^2`$. One can also check that the action satisfies the thermodynamic relation
$$S=\beta (M^{\prime }-J_\varphi \mathrm{\Omega }_H)-I.$$
(32)
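The thermodynamic relation (32) can be verified numerically from the expressions (25)–(31); a sketch with arbitrary test values of $`l`$, $`G_5`$, $`a`$ and $`r_+`$:

```python
import math

l, G5, a, rp = 1.0, 1.0, 0.5, 2.0      # arbitrary test values (assumption)

Xi   = 1.0 - a**2 * l**2
M    = 0.5 * (rp**2 + a**2) * (1.0 + rp**2 * l**2) / G5            # Eq. (25)
S    = math.pi**2 * (rp**2 + a**2) * rp / (2.0 * G5 * Xi)          # Eq. (26)
beta = 2.0 * math.pi * (rp**2 + a**2) / (rp * (1.0 + a**2 * l**2 + 2.0 * rp**2 * l**2))
Mp   = 3.0 * math.pi * M / (4.0 * Xi)                              # Eq. (28)
Om   = a * Xi / (rp**2 + a**2)                                     # Eq. (29)
J    = math.pi * M * a / (2.0 * Xi**2)                             # Eq. (30)
I    = (math.pi**2 * (rp**2 + a**2)**2 * (1.0 - rp**2 * l**2)
        / (4.0 * G5 * Xi * rp * (1.0 + a**2 * l**2 + 2.0 * rp**2 * l**2)))  # Eq. (31)

# Eq. (32): S = beta (M' - J Omega_H) - I
assert abs(S - (beta * (Mp - J * Om) - I)) < 1e-9 * S
```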
It is interesting to note that, by formally dividing both the free energy, $`F=I/\beta `$, and the mass by an arbitrary volume, one obtains an equation of state:
$$p=\frac{1}{3}\frac{r_+^2l^2-1}{r_+^2l^2+1}\rho ,$$
(33)
where $`p=-F/V`$ is the pressure and $`\rho `$ is the energy density. In the limit $`r_+l\gg 1`$ that we have been taking, this equation becomes
$$p=\frac{1}{3}\rho ,$$
(34)
as is appropriate for the equation of state of a conformal theory. This suggests that if a conformal field theory is to reproduce the thermodynamic properties of this gravitational solution, it has to be in such a limit.
## IV The dual CFT description
The gauge theory dual to supergravity on $`AdS_5\times S^5`$ is $`𝒩=4`$ super Yang-Mills with gauge group $`U(N)`$, where $`N`$ tends to infinity . The action is
$$S=\int d^4x\sqrt{-g}\mathrm{Tr}\left(\frac{1}{4g^2}F^2+\frac{1}{2}\left(D\mathrm{\Phi }\right)^2+\frac{1}{12}R\mathrm{\Phi }^2+\overline{\psi }\not{D}\psi \right).$$
(35)
All fields take values in the adjoint representation of $`U(N)`$. The six scalars, $`\mathrm{\Phi }`$, transform under the $`SO(6)`$ R-symmetry, while the four Weyl fermions, $`\psi `$, transform under $`SU(4)`$, the spin cover of $`SO(6)`$. The scalars are conformally coupled; otherwise, all fields are massless. We shall again take the free field limit. The angular momentum operators can be computed from the relevant components of the stress energy tensor in spherical coordinates. This approach is to be contrasted with that of , in which generators of R-rotations, corresponding to spinning D-branes, are used.
The free energy of the gauge theory is given by
$$F_{\mathrm{CFT}}=+T_{\mathrm{CFT}}\sum _i\eta _i\int _0^{\infty }dl_i\int dm_i^\varphi \int dm_i^\psi \mathrm{ln}\left(1-\eta _ie^{-\beta (\omega _i-m_i^\varphi \mathrm{\Omega }_\varphi )}\right),$$
(36)
where $`i`$ labels the particle species, $`\eta =+1`$ for bosons and -1 for fermions, $`l_i`$ is the quantum number associated with the total orbital angular momentum of the ith particle, and $`m_i^{\varphi (\psi )}`$ is its angular momentum component in the $`\varphi (\psi )`$ direction. Here $`\mathrm{\Omega }`$ plays the role of a “voltage” while the “chemical potential” $`m^\varphi \mathrm{\Omega }`$ serves to constrain the total angular momentum of the system.
The free energy is easiest to evaluate in a corotating frame, which corresponds to the constant-time foliation choice of hypersurfaces orthogonal to $`t^a`$. Since, at constant $`r=r_0`$, the boundary has the metric
$`ds^2`$ $`=`$ $`r_0^2\left[-l^2dt^2+{\displaystyle \frac{2al^2\mathrm{sin}^2\theta }{\mathrm{\Xi }}}dtd\varphi +{\displaystyle \frac{\mathrm{sin}^2\theta }{\mathrm{\Xi }}}d\varphi ^2+{\displaystyle \frac{d\theta ^2}{\mathrm{\Delta }_\theta }}+\mathrm{cos}^2\theta d\psi ^2\right],`$ (37)
the constant-time slices of the corotating frame have a spatial volume of
$$V=\frac{2\pi ^2r_0^3}{1-a^2l^2}.$$
(38)
The spectrum of a conformally coupled field on $`S^3`$ is essentially given by
$$\omega _l\simeq \frac{l}{r_0},$$
(39)
where $`l`$ is the quantum number for total orbital angular momentum. Eq. (36) can now be evaluated by making use of the identities
$`{\displaystyle \int _0^{\infty }}dxx^n\mathrm{ln}\left(1-e^{-x+c}\right)`$ $`=`$ $`-\mathrm{\Gamma }(n+1)\mathrm{Li}_{n+2}(e^c)=-\mathrm{\Gamma }(n+1){\displaystyle \sum _{k=1}^{\infty }}{\displaystyle \frac{e^{kc}}{k^{n+2}}},`$ (40)
$`{\displaystyle \int }dxx\mathrm{Li}_2(e^{-ax+c})`$ $`=`$ $`-{\displaystyle \frac{1}{a^2}}\left[ax\mathrm{Li}_3(e^{-ax+c})+\mathrm{Li}_4(e^{-ax+c})\right],`$ (41)
where $`\mathrm{Li}_n`$ is the nth polylogarithmic function, defined by the sum above. The result is
$$F_{\mathrm{CFT}}=-\frac{\pi ^4}{24}\frac{r_0}{\frac{1}{r_0^2}-\mathrm{\Omega }^2}(8N^2)T_{\mathrm{CFT}}^4,$$
(42)
yielding an entropy of
$$S_{\mathrm{CFT}}=\frac{2}{3}\frac{\pi ^5}{l^3G_5}\frac{r_0^3}{1-\mathrm{\Omega }^2r_0^2}T_{\mathrm{CFT}}^3.$$
(43)
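Because the minus signs in the polylogarithm identity (40) are easy to lose in extraction, here is a quick numerical check of it in the form $`\int _0^{\mathrm{\infty }}x^n\mathrm{ln}(1-e^{-x+c})dx=-\mathrm{\Gamma }(n+1)\mathrm{Li}_{n+2}(e^c)`$ for $`c<0`$. This Python sketch is not part of the original paper:

```python
import math

def li(s, z, terms=400):
    """Polylogarithm Li_s(z) via its defining series (valid for |z| < 1)."""
    return sum(z**k / k**s for k in range(1, terms + 1))

def lhs(n, c, upper=60.0, steps=6000):
    """Simpson's-rule value of the integral of x^n * ln(1 - e^(-x+c)) over [0, upper].
    Requires n >= 1 and c < 0 so the integrand is finite at x = 0."""
    h = upper / steps
    f = lambda x: x**n * math.log(1.0 - math.exp(-x + c))
    total = f(0.0) + f(upper)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(i * h)
    return total * h / 3.0

def rhs(n, c):
    """-Gamma(n+1) * Li_{n+2}(e^c), the closed form of Eq. (40)."""
    return -math.gamma(n + 1) * li(n + 2, math.exp(c))
```

For example, `lhs(2, -0.5)` and `rhs(2, -0.5)` agree to high accuracy, confirming the overall minus sign expected for a bosonic free energy.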
The physical temperature that enters the conformal field theory is
$$T_{\mathrm{CFT}}=\frac{1}{lr_0}T_{\mathrm{BH}}.$$
(44)
Similarly, the angular velocity is scaled to
$$\mathrm{\Omega }_{\mathrm{CFT}}=\frac{al^2}{lr_0}.$$
(45)
Substituting Eqs. (44) and (45) into Eq. (43) and taking the high temperature limit as before, we have
$$S_{\mathrm{CFT}}=\frac{2}{3G_5}\frac{\pi ^2r_+^3}{(1-a^2l^2)}=\frac{4}{3}S_{\mathrm{BH}}.$$
(46)
The inclusion of rotation evidently does not affect the ratio of the black hole and field theory entropies.
In the corotating frame, the free energy is simply of the form $`N^2VT^4`$, with the volume given by Eq. (38). However, with respect to a nonrotating AdS space, the free energy takes a more complicated form since now the volume is simply $`2\pi ^2r_0^3`$. By keeping this volume and the temperature fixed, one may calculate the angular momentum of the system with respect to the nonrotating background:
$$J_\varphi ^{\mathrm{CFT}}=-\frac{\partial F}{\partial \mathrm{\Omega }}\Big|_{V,T_{\mathrm{CFT}}}=\frac{ar_+^4\pi \left(1+a^2l^2+2r_+^2l^2\right)^4}{48l^6\mathrm{\Xi }^2\left(r_+^2+a^2\right)^4}.$$
(47)
In the usual $`r_+l\gg 1`$ limit, we obtain
$$J_\varphi ^{\mathrm{CFT}}=\frac{2\pi Ma}{3\mathrm{\Xi }^2}=\frac{4}{3}J_\varphi ^{\mathrm{BH}},$$
(48)
so that the gauge theory angular momentum is proportional to the black hole angular momentum, Eq. (30), with a factor of 4/3.
The black hole mass formula, Eq. (28), refers to the energy above the nonrotating anti-de Sitter background. We should therefore compare this quantity with the red-shifted energy in the conformal field theory. Here a slight subtlety enters. Since the statistical mechanical calculation gives the energy in the corotating frame, we must add the center-of-mass rotational energy before comparing with the black hole mass. Then we find that
$$U_{\mathrm{CFT}}^{\mathrm{\infty }}=\sqrt{-g_{tt}}\left(U_{\mathrm{corotating}}+J_{\mathrm{CFT}}\mathrm{\Omega }_{\mathrm{CFT}}\right)=\frac{4}{3}M',$$
(49)
with $`M'`$ given by Eq. (28), evaluated at high temperature. Using $`U_{\mathrm{CFT}}^{\mathrm{\infty }}=\sqrt{-g_{tt}}U_{\mathrm{CFT}}`$ and previous expressions for thermodynamic quantities, one may check that the first law of thermodynamics is satisfied.
## V Discussion
There are two interesting aspects of these results. The first is that the same relative factor that appears in the entropy appears in the angular momentum and the energy. A priori, one has no reason to believe that the functional form of the free energy will be such as to guarantee this result (see, for example,). The second is that the relative factor of 4/3 in the entropy is unaffected by rotation. Indeed, one could expand the entropy of the rotating system in powers and inverse powers of the ’t Hooft coupling. The correspondence implies that
$$S_{\mathrm{CFT}}=\underset{m}{}a_m\lambda ^m=\underset{n}{}b_n\left(\frac{1}{\sqrt{\lambda }}\right)^n=S_{\mathrm{BH}}.$$
(50)
We may approximate the series on the gauge theory side as $`a_0`$ and on the gravity side as $`b_0`$. Then, generically, we would expect these coefficients to be functions of the dimensionless rotational parameter $`\mathrm{\Xi }`$ so that $`a_0(\mathrm{\Xi })=f(\mathrm{\Xi })b_0(\mathrm{\Xi })`$ with $`f(\mathrm{\Xi }=1)=4/3`$. Our somewhat unexpected result is that $`f(\mathrm{\Xi })=4/3`$ has, in fact, no dependence on $`\mathrm{\Xi }`$.
## VI Acknowledgments
We would like to thank José Barbón, Roberto Emparan, and Kostas Skenderis for helpful discussions. D. B. is supported by European Commission TMR programme ERBFMRX-CT96-0045. M. P. is supported by the Netherlands Organization for Scientific Research (NWO).
no-problem/9907/nucl-ex9907019.html
# Nuclear transparency and the onset of strong absorption regime in the ¹²C + ²⁴Mg system
## 1 Figure Captions
Figure 1. Plot of the reflection coefficients as a function of the laboratory energy and angular momentum.
Figure 2 (left). The reflection coefficients for 20 MeV (a), 23 MeV (b), 31.2 MeV (c), 40 MeV (d), and for the S-matrix of equation 1 (e). The dashed line in Fig. 2(e) is the background reflection coefficient ($`|S_0|`$, eq. 1). See text for details.
Figure 2 (right). Argand diagrams of the S-matrix as a function of the angular momentum for the same energies and for the S-matrix of equation 1. See text for details.
no-problem/9907/astro-ph9907286.html
# AN AMENDED FORMULA FOR THE DECAY OF RADIOACTIVE MATERIAL FOR COSMIC TIMES
Moshe Carmeli
Department of Physics, Ben Gurion University, Beer Sheva 84105, Israel
(E-mail: carmelim@bgumail.bgu.ac.il)
and
Shimon Malin
Department of Physics and Astronomy, Colgate University, Hamilton,
New York 13346, USA
(E-mail: SMALIN@CENTER.COLGATE.EDU)
ABSTRACT
An amended formula for the decay of radioactive material is presented. It is a modification of the standard exponential formula. The new formula applies for long cosmic times that are comparable to the Hubble time. It reduces to the standard formula for short times. It is shown that the material decays faster than expected. The application of the new formula to direct measurements of the age of the Universe and its implications is briefly discussed.
1. INTRODUCTION
In this paper we present an amended formula for the decay of radioactive material for cosmic times, when the decay times are of the order of magnitude of the Hubble time. It reduces to the standard formula for short times. To this end we proceed as follows.
We assume, as usual, that the probability of disintegration during any interval of cosmic time $`dt'`$ is a constant,
$$\frac{dN}{dt'}=-\frac{1}{T'}N,$$
$`(1a)`$
in analogy to the standard formula
$$\frac{dN}{dt}=-\frac{1}{T}N,$$
$`(1b)`$
where $`T'`$ is a constant to be determined in terms of the half-lifetime $`T`$ of the decaying material.
It has been shown that the addition of two cosmic times $`t_1`$ backward with respect to us (now), and $`t_2`$ backward with respect to $`t_1`$, is not just $`t_1+t_2`$. Rather, it is given by \[1-5\]
$$t_{1+2}=\frac{t_1+t_2}{1+t_1t_2/\tau ^2}.$$
$`(2)`$
In Eq.(2) $`\tau `$ is the Hubble time in the limit of zero gravity, and thus it is a universal constant. Equation (2) is the universal formula for the addition of cosmic times, and reduces to the standard formula of times $`t_{1+2}=t_1+t_2`$ for short times with respect to $`\tau `$.
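The composition law (2) behaves like the relativistic velocity-addition formula with $`\tau `$ in the role of $`c`$: it reduces to ordinary addition for short times and has $`\tau `$ as a fixed point. A minimal Python sketch (not part of the paper):

```python
def add_cosmic_times(t1, t2, tau):
    """Eq. (2): composition of backward cosmic times; tau is the Hubble time
    in the limit of zero gravity (a universal constant)."""
    return (t1 + t2) / (1.0 + t1 * t2 / tau**2)

# For t1, t2 small compared to tau, this is ordinary addition to leading order,
# and composing tau with any time returns tau itself, exactly as v + c "is" c
# in the velocity-addition analogy.
```

For instance, `add_cosmic_times(0.001, 0.002, 1.0)` differs from `0.003` only at the third significant figure beyond, while `add_cosmic_times(1.0, 1.0, 1.0)` gives exactly `1.0`.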
2. DERIVATION OF THE FORMULA
Let us substitute in the formula for the addition of cosmic times, Eq.(2), $`t_1=t`$ and $`t_2=dt`$. Then
$$t_{1+2}=\frac{t+dt}{1+t\,dt/\tau ^2}\approx \left(t+dt\right)\left(1-\frac{t\,dt}{\tau ^2}\right)\approx \left[t+dt\left(1-\frac{t^2}{\tau ^2}\right)\right].$$
$`(3)`$
Accordingly
$$\left(t+dt\right)\rightarrow \left[t+dt\left(1-\frac{t^2}{\tau ^2}\right)\right],$$
$`(4)`$
or
$$dt\rightarrow dt\left(1-\frac{t^2}{\tau ^2}\right).$$
$`(5)`$
So far the times have denoted backward times. Since radioactivity deals with forward times, we now use the standard notation of times, and Eq.(5) will be written as
$$dt\rightarrow dt'=dt\left(1-\frac{t^2}{\tau ^2}\right).$$
$`(6)`$
Equation (1a) will thus have the form
$$\left(1-\frac{t^2}{\tau ^2}\right)^{-1}\frac{dN}{dt}=-\frac{1}{T'}N.$$
$`(7)`$
The solution of Eq.(7) is then given by
$$N=N_0\mathrm{exp}\left[-\frac{t}{T'}\left(1-\frac{t^2}{3\tau ^2}\right)\right],$$
$`(8)`$
in analogy to the solution of the standard equation (1b),
$$N=N_0\mathrm{exp}\left(-\frac{t}{T}\right).$$
$`(9)`$
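One can verify directly that Eq. (8) solves Eq. (7), with the minus signs of ordinary decay restored. The following Python sketch compares a central-difference derivative of (8) against the right-hand side of (7); the parameter values are illustrative only:

```python
import math

def N_amended(t, T_prime, tau, N0=1.0):
    """Eq. (8): N(t) = N0 * exp(-(t/T') * (1 - t^2 / (3 tau^2)))."""
    return N0 * math.exp(-(t / T_prime) * (1.0 - t**2 / (3.0 * tau**2)))

def check_ode(t, T_prime, tau, h=1e-6):
    """Return both sides of Eq. (7):
    (1 - t^2/tau^2)^(-1) dN/dt  versus  -(1/T') N,
    with dN/dt estimated by a central difference."""
    dNdt = (N_amended(t + h, T_prime, tau) - N_amended(t - h, T_prime, tau)) / (2.0 * h)
    lhs_val = dNdt / (1.0 - t**2 / tau**2)
    rhs_val = -N_amended(t, T_prime, tau) / T_prime
    return lhs_val, rhs_val
```

At any $`t`$ with $`t<\tau `$ the two sides agree to the accuracy of the finite difference, confirming the solution.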
3. DETERMINING $`T'`$ IN TERMS OF HALF-LIFE TIME $`T`$
From the solution (9) we have
$$N\left(T\right)=N_0/e,$$
$`(10)`$
where $`T`$ is the half-life time of the material, as expected. From Eq.(8), we obtain
$$N\left(T'\right)=N_0\mathrm{exp}\left[-\left(1-\frac{T'^2}{3\tau ^2}\right)\right]=\left(N_0/e\right)\mathrm{exp}\frac{T'^2}{3\tau ^2}.$$
$`(11)`$
Using Eq.(10), we now have
$$N\left(T'\right)=N\left(T\right)\mathrm{exp}\frac{T'^2}{3\tau ^2}.$$
$`(12)`$
Under the assumption that $`T'\neq 0`$, we thus have
$$N\left(T'\right)>N\left(T\right).$$
$`(13)`$
In order to determine $`T'`$ in terms of $`T`$, we proceed as follows. We substitute $`t=T`$ in Eq.(8) and, using Eq.(10), we obtain
$$N\left(T\right)=N_0\mathrm{exp}\left[-\frac{T}{T'}\left(1-\frac{T^2}{3\tau ^2}\right)\right]=N_0/e.$$
$`(14)`$
As a result we have
$$\frac{T}{T'}\left(1-\frac{T^2}{3\tau ^2}\right)=1,$$
$`(15)`$
or
$$T'=T\left(1-\frac{T^2}{3\tau ^2}\right),$$
$`(16)`$
and thus
$$T'<T.$$
$`(17)`$
Using Eq.(16) in Eq.(8) we therefore obtain
$$N\left(t\right)=N_0\mathrm{exp}\left[-\frac{t\left(1-t^2/3\tau ^2\right)}{T\left(1-T^2/3\tau ^2\right)}\right].$$
$`(18)`$
Accordingly we have
$$N\left(t\right)=N_0\mathrm{exp}\left[-\frac{t\alpha \left(t\right)}{T}\right],$$
$`(19)`$
where
$$\alpha \left(t\right)=\frac{1-t^2/3\tau ^2}{1-T^2/3\tau ^2}\geq 1;(t\leq T).$$
$`(20)`$
Also we have
$$N_0\mathrm{exp}\left[-\frac{t\alpha \left(t\right)}{T}\right]\leq N_0\mathrm{exp}\left(-\frac{t}{T}\right).$$
$`(21)`$
Consequently, Eq.(19) provides a large deviation from Eq.(9) when $`T`$ is comparable to $`\tau `$ and we measure radioactivity over astronomical times. For example, Thorium is a radioactive element with a half-life of 14.1 billion years, as compared to the estimated 13 billion years age of the Universe. Such measurements/observations can be carried out, and the detected deviations can be displayed in a graph (see Fig. 1). In principle, it follows from Eq.(19) that $`N(t)`$ for a given $`t`$ is less than that obtained through the traditional formula; i.e., the material decays faster than expected.
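As a numerical illustration of the claim that the amended curve reaches a given $`N/N_0`$ at an earlier time $`t_1<t_2`$ (cf. Fig. 1), the following Python sketch bisects both decay laws for the time at which $`N/N_0=1/e`$. The values $`T=14.1`$ Gyr (thorium) and $`\tau =13`$ Gyr are illustrative assumptions; the paper fixes neither $`\tau `$ nor a data set:

```python
import math

# Illustrative numbers (assumptions, not fitted values from the paper):
T   = 14.1   # half-life time of thorium, in Gyr
tau = 13.0   # Hubble time in the limit of zero gravity, in Gyr

T_prime = T * (1.0 - T**2 / (3.0 * tau**2))          # Eq. (16)

def N_standard(t):
    """Eq. (9) with N0 = 1."""
    return math.exp(-t / T)

def N_amended(t):
    """Eq. (8) with N0 = 1; monotonically decreasing only for t <= tau."""
    return math.exp(-(t / T_prime) * (1.0 - t**2 / (3.0 * tau**2)))

def time_for_fraction(N, target, lo, hi):
    """Bisect for t with N(t) = target, assuming N is decreasing on [lo, hi]."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if N(mid) > target else (lo, mid)
    return 0.5 * (lo + hi)

t1 = time_for_fraction(N_amended, math.exp(-1.0), 0.0, tau)    # amended curve
t2 = time_for_fraction(N_standard, math.exp(-1.0), 0.0, 30.0)  # standard curve
```

With these assumed numbers, $`t_1\approx 11.9`$ Gyr while $`t_2=T=14.1`$ Gyr, so $`t_1<t_2`$: the amended formula does reach a given fraction earlier, i.e. the material decays faster.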
4. DISCUSSION
Accurate measurements for the decay of radioactive materials from the Earth and from stars in our galaxy, could provide crucial information about the age of the Universe. It is well known that two of the most straightforward methods of calculating the age of the Universe – through redshift measurements, and through stellar evolution – yield incompatible results. Recent measurements of the distances of faraway galaxies through the use of the Hubble Space Telescope indicate an age much less than the ages of the oldest stars that we calculate through the stellar evolution theory \[6-16\].
At present there is no resolution of this controversy; a cosmological constant would probably rectify the situation, but it is possible that the discrepancy will disappear with more accurate measurements of the age of the Universe using both methods. The discussion given in this paper clearly goes in the right direction in solving this important impasse.
REFERENCES
1. M. Carmeli, Found. Phys. 25, 1029 (1995).
2. M. Carmeli, Found. Phys. 26, 413 (1996).
3. M. Carmeli, Inter. J. Theor. Phys. 36, 757 (1997).
4. M. Carmeli, Cosmological Special Relativity: The Large-Scale Structure of Space, Time and Velocity, World Scientific (1997), p. 24.
5. M. Carmeli, Aspects of cosmological relativity, in: Proceedings of the Fourth Alexander Friedmann International Seminar on Gravitation and Cosmology, held in St. Peterburg, June 17-25, 1998, Russian Academy of Sciences Press, in print.
6. W.L. Freedman et al., Nature 371, 757 (1994).
7. W.L. Freedman, HST highlights: The extragalactic distance scale, in: Seventeenth Texas Symposium on Astrophysics and Cosmology, H. Böhringer, G.E. Morfill and J.E. Trümper, Editors, Annals of the New York Academy of Sciences, New York, Vol. 759 (1995), p. 192.
8. A. Renzini, in: Observational Tests of Inflation, T. Shanks et al., Editors, Kluwer, Boston (1991).
9. A.R. Sandage, Astr. J. 106, 719 (1993).
10. X. Shi, D.N. Schramm, D.S.P. Dearborn and J.W. Truran, Comments on Astrophysics 17, 343 (1995).
11. D.E. Winget et al., Astrophys. J. 315, L77 (1987).
12. E. Pitts and R.J. Taylor, Mon. Not. R. Astr. Soc. 255, 557 (1992).
13. W. Fowler, in: 14th Texas Symposium on Relativistic Astrophysics, E.J. Fenyores, Editor, New York Academy of Sciences, New York (1989), p. 68.
14. D.D. Clayton, in: 14th Texas Symposium on Relativistic Astrophysics, E.J. Fenyores, Editor, New York Academy of Sciences, New York (1989), p. 79.
15. S.M. Carroll, W.J. Press and E.L. Turner, A. Rev. Astr. Astrophys. 30, 499 (1992).
16. M.J. Pierce et al., Nature 371, 385 (1994).
CAPTIONS
Fig. 1: Two curves describing the standard exponential decay $`N/N_0=\mathrm{exp}\left(t/T\right)`$ and the amended cosmic decay $`N/N_0=\mathrm{exp}\left[t\alpha \left(t\right)/T\right]`$. For a measured $`N/N_0`$ the two curves occur at two different times $`t_1`$ and $`t_2`$, with $`t_2>t_1`$, where $`t_1`$ and $`t_2`$ correspond to the amended and the standard decay formulas. Accordingly, cosmic times of decaying materials on Earth and stars are actually shorter than has been believed so far.
no-problem/9907/math9907182.html
# On the reducibility of characteristic varieties
## 1. Vanishing cycles of conical sheaves.
This section presents an argument from . Let $`V`$ be a complex vector space, and suppose an object $`𝐀`$ in the derived category $`D^b(V)`$ is constructible with respect to an algebraic, $`\mathbb{C}^{*}`$-conical stratification. Let $`f:V\rightarrow \mathbb{C}`$ be a linear function, and let $`W=f^{-1}(0)`$. For definitions of the sheaf-theoretic concepts we use, including the vanishing cycles functor $`\varphi _f`$ and the Fourier transform $`F`$, see .
###### Theorem 1.
The multiplicity of $`[T_{\{0\}}^{*}V]`$ in $`CC(𝐀)`$ is the same as the multiplicity of $`[T_{\{0\}}^{*}W]`$ in $`CC(\varphi _f𝐀)`$.
###### Proof.
The Fourier transform $`F:D^b(V)\rightarrow D^b(V^{*})`$ respects the characteristic cycle: we have $`CC(F𝐀)=CC(𝐀)`$, using the natural identification $`T^{*}V\cong V\times V^{*}\cong T^{*}V^{*}`$ (this is given as Exercise 9.7 in ). Since the multiplicity of the zero section $`T_{V^{*}}^{*}V^{*}`$ in $`CC(F𝐀)`$ is the Euler characteristic of the stalk cohomology of $`F𝐀`$ at a point in the open stratum, it will be enough to show that for generic inclusions $`i_1`$, $`i_2`$ of a point $`\{p\}`$ into $`V^{*}`$ and $`W^{*}`$, respectively, the restrictions $`i_1^{*}F𝐀`$ and $`i_2^{*}F\varphi _f𝐀`$ are isomorphic.
We can consider $`f`$ as an element of $`W^{\perp }\subset V^{*}`$. Let $`s:W\rightarrow W\times W^{\perp }`$ be given by $`s(w)=(w,f)`$. Then $`\varphi _f`$ is naturally isomorphic to the composite of the functors along the top row of the following diagram:
Here $`\nu _W`$, $`\nu _{W^{\perp }}`$ are the specialization functors (using the natural identifications $`T_WV\cong W\times V/W`$ and $`T_{W^{\perp }}V^{*}\cong W^{\perp }\times W^{*}`$), the functors marked $`F`$ are the appropriate Fourier transforms, and $`s'(\omega )=(\omega ,f)`$. The left quadrilateral and the triangle naturally commute by , Propositions 2.3 and 2.4, and the right hand square commutes by , Proposition 3.7.13.
Thus $`F\varphi _f𝐀\cong (s')^{*}\nu _{W^{\perp }}F𝐀`$. It is easy now to see that $`\nu _{W^{\perp }}`$ and $`(s')^{*}`$ both leave the dimensions of the stalk cohomology at a generic point unchanged (for $`(s')^{*}`$, use the fact that $`\nu _{W^{\perp }}F𝐀`$ is conical). ∎
Similar techniques have been useful in the calculation of certain categories of perverse sheaves.
## 2. The main result
Suppose a smooth complex variety $`V`$ carries an algebraic Whitney stratification $`𝒮`$. Denote the conormal variety by $`\mathrm{\Lambda }\subset T^{*}V`$, and let $`\mathrm{\Lambda }_S=\overline{T_S^{*}V}`$ be the component of $`\mathrm{\Lambda }`$ lying over $`S\in 𝒮`$. Let $`\stackrel{~}{\mathrm{\Lambda }}`$ be the smooth part of $`\mathrm{\Lambda }`$, and for each stratum $`S`$ put $`\stackrel{~}{\mathrm{\Lambda }}_S=\stackrel{~}{\mathrm{\Lambda }}\cap \mathrm{\Lambda }_S`$.
###### Definition.
If two strata $`S`$ and $`T`$ satisfy $`dim_{\mathbb{C}}(\mathrm{\Lambda }_S\cap \mathrm{\Lambda }_T)=dim_{\mathbb{C}}\mathrm{\Lambda }-1`$, we say that $`S`$ and $`T`$ meet microlocally in codimension one.
Given an $`𝒮`$-constructible perverse sheaf $`𝐏`$ on $`V`$, we let $`\mathcal{M}_S(𝐏)`$ be the Morse or vanishing cycles local system of $`𝐏`$ at the stratum $`S`$ (for a definition, see ). It is a local system on $`\stackrel{~}{\mathrm{\Lambda }}_S`$, whose dimension is, up to a sign, the multiplicity of $`[\mathrm{\Lambda }_S]`$ in $`CC(𝐏)`$.
Now suppose that $`V`$ is a complex vector space and the stratification $`𝒮`$ is $`\mathbb{C}^{*}`$-conical. Let $`S\neq \{0\}`$ be a stratum, and take a point $`x\in S`$. Define a loop $`\gamma `$ in $`\stackrel{~}{\mathrm{\Lambda }}_S`$ by choosing a point $`(x,\xi )\in \stackrel{~}{\mathrm{\Lambda }}_S`$ and letting $`\gamma (\theta )=(e^{i\theta }x,\xi )`$, $`\theta \in [0,2\pi ]`$. Note that any two such loops are homotopic.
###### Theorem 2.
Suppose that $`𝐏`$ is an $`𝒮`$-constructible perverse sheaf on $`V`$. If $`S`$ meets $`\{0\}`$ microlocally in codimension one, and the monodromy of $`\mathcal{M}_S(𝐏)`$ around $`\gamma `$ is not multiplication by $`(-1)^{d-1}`$, where $`d=dim_{\mathbb{C}}S`$, then $`\mathcal{M}_0(𝐏)\neq 0`$.
In the case $`𝐏=\mathrm{𝐈𝐂}^{}(\overline{S})`$, we get:
###### Corollary 3.
Suppose $`X\subset V`$ is an even-dimensional conical variety, $`S`$ is the smooth locus of $`X`$, and $`\{0\}`$ and $`S`$ meet microlocally in codimension one. Then $`\mathcal{M}_0(\mathrm{𝐈𝐂}^{}(X))\neq 0`$.
###### Proof.
Just note that $`\mathcal{M}_S(\mathrm{𝐈𝐂}^{}(\overline{S}))\cong \mathbb{C}_{\stackrel{~}{\mathrm{\Lambda }}_S}`$. ∎
###### Proof of Theorem 2.
Since we are free to choose the point $`(x,\xi )\in \stackrel{~}{\mathrm{\Lambda }}_S`$, we choose it as follows. The dual cone $`S^{\vee }\subset V^{*}\cong T_0^{*}V`$ to $`S`$ is defined to be $`\mathrm{\Lambda }_S\cap T_0^{*}V`$; $`S`$ and $`S^{\vee }`$ are cones over dual projective varieties in $`V`$ and $`V^{*}`$. By assumption $`S^{\vee }`$ is a divisor and so it is not contained in $`T^{\vee }`$ for any stratum $`T\neq S,\{0\}`$. Let $`\xi `$ be a smooth point of $`S^{\vee }`$ lying in $`S^{\vee }\setminus \bigcup _{R\neq S,\{0\}}R^{\vee }`$. Then $`L=\{v\in V\mid (v,\xi )\in \mathrm{\Lambda }\}`$ is a line through the origin contained in $`S\cup \{0\}`$; let $`x`$ be any nonzero point in $`L`$.
In microlocal language, the point $`(0,\xi )`$ represents a codimension one point of $`\mathrm{\Lambda }`$: it is a smooth point of both $`\mathrm{\Lambda }_0`$ and $`\mathrm{\Lambda }_S`$, they intersect transversely there, and it lies in no other component of $`\mathrm{\Lambda }`$.
Let $`f:V\rightarrow \mathbb{C}`$ be the linear function with $`df_x=\xi `$. Our choice of $`\xi `$ implies that $`f|_S`$ has singularities only on $`L`$ and that if we choose a normal slice $`N`$ to $`L`$ at $`x\in L`$, the singularity of $`f|_{N\cap S}`$ is Morse at $`x`$.
Let $`L^{*}=L\setminus \{0\}`$ and embed $`L^{*}`$ into $`\stackrel{~}{\mathrm{\Lambda }}`$ by the map $`\alpha (y)=(y,df_y)=(y,\xi )`$. Put $`𝐏_f=\varphi _f(𝐏)`$; it is a conical perverse sheaf supported on $`L`$. The local system on $`L^{*}`$ can be computed using stratified Morse theory:
$$𝐏_f|_{L^{*}}=\alpha ^{*}\mathcal{M}_S(𝐏)\otimes \mathcal{L},$$
where $`\mathcal{L}=\varphi _{(f|_S)}(\mathbb{C}_S)`$. But the local system $`\mathcal{L}`$ is easy to compute by Picard-Lefschetz theory; its monodromy is multiplication by $`(-1)^{d-1}`$, where $`d=dim_{\mathbb{C}}(S)`$. Thus the monodromy of $`𝐏_f|_{L^{*}}`$ is nontrivial.
An elementary application of the theory of perverse sheaves on a complex line (see and the example following the proof of Theorem 3.3 in ) implies that a perverse sheaf with nontrivial monodromy around the origin must have a nonzero vanishing cycle at $`0`$; thus $`\mathcal{M}_0(𝐏_f)\neq 0`$. Then by Theorem 1, $`\mathcal{M}_0(𝐏)=\mathcal{M}_0(𝐏_f)`$, so we are done. ∎
###### Remark.
A strengthening of Theorem 1 using Morse local systems instead of characteristic cycles gives the slightly stronger conclusion that $`\mathcal{M}_0(𝐏)`$ is a nontrivial local system.
We have stated Theorem 2 assuming that the ambient variety is a vector space and the smaller stratum is a point, but it can also be applied to a pair of strata $`(S,T)`$ in a general stratified complex manifold $`V`$ with $`T\subset \overline{S}`$, so long as the stratification is conical along $`T`$; simply intersect with a normal slice $`N`$ to $`S`$ at a point $`s\in S`$, or, equivalently, apply the specialization functor $`\nu _S:D^b(V)\rightarrow D^b(T_SV)`$ and restrict to the fiber over $`s`$.
If $`V\cong \mathbb{C}^n`$ but the stratification of $`V`$ is not conical, one can still look at the specialization $`𝐏'=\nu _0(𝐏)`$, which is conical. Since $`\mathcal{M}_0(𝐏')\cong \mathcal{M}_0(𝐏)`$ (see ), Theorem 2 may apply to $`𝐏'`$ to give information about $`𝐏`$. We cannot use this to deduce that Theorem 2 holds without the conical assumption, however. There are stratifications for which $`S`$ meets $`0`$ microlocally in codimension one, but the smooth part of the specialization $`\nu _0(\overline{S})`$ doesn’t.
For instance, if $`\overline{S}`$ is the variety $`\{(x,y,z)\in \mathbb{C}^3\mid xy=z^3\}`$, the specialization $`𝐏'=\nu _S(\mathrm{𝐈𝐂}^{}(\overline{S}))`$ is supported on $`\{xy=0\}`$. A little calculation shows that the composition series of the perverse sheaf $`𝐏'`$ consists of simple intersection cohomology sheaves with constant coefficients supported on $`\{x=0\}`$, $`\{y=0\}`$, $`\{x=y=0\}`$, and $`\{(0,0,0)\}`$. If Theorem 2 applied to $`𝐏'`$, it would have to apply for one of these simple components, since the Morse local system functors $`\mathcal{M}_S`$ are exact on the category of perverse sheaves. But this is clearly not the case, since all but the last have $`\mathcal{M}_0=0`$.
This singularity appears in the flag variety for $`G=G_2`$, associated to the pair of Weyl group elements $`(tst,tstst)`$, where $`s`$,$`t`$ are the reflections for the long and short simple roots, respectively.
## 3. Kashiwara and Saito’s example
In Kashiwara and Saito gave an example of a pair of Schubert varieties $`Z\subset Y`$ in the variety of complete flags in $`\mathbb{C}^8`$ so that $`\mathrm{\Lambda }_Z`$ appears in the support of $`CC(\mathrm{𝐈𝐂}^{}(Y))`$. We recall their description of a normal slice to $`Y`$ at a generic point of $`Z`$. Let $`V\cong \mathbb{C}^{16}`$ be the space $`\mathrm{Mat}(2\times 2,\mathbb{C})^4`$ of $`4`$-tuples of $`2\times 2`$ complex matrices. Let $`X`$ be the variety
$$\{(A_0,A_1,A_2,A_3)\in V\mid det(A_i)=0\text{and}A_iA_{i+1}=0\text{for all}i\},$$
where the index $`i`$ is taken modulo $`4`$. It is an $`8`$-dimensional conical subvariety of $`V`$.
We show that $`X`$ satisfies the hypotheses of Corollary 3. Let $`G=(GL_2(\mathbb{C}))^4\times \mathbb{C}^{*}`$ act on $`V`$ by
$$(g_0,g_1,g_2,g_3,t)\cdot (A_0,A_1,A_2,A_3)=(tg_0A_0g_1^{-1},g_1A_1g_2^{-1},g_2A_2g_3^{-1},g_3A_3g_0^{-1}).$$
Then $`X`$ is $`G`$-invariant, and in fact is the closure of a $`G`$-orbit: $`X=\overline{G(A,A,A,A)}`$, where $`A`$ is any nonzero nilpotent matrix. The open orbit is the subset of points $`(A_0,A_1,A_2,A_3)`$ in $`X`$ for which all the $`A_i`$ are nonzero.
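The defining conditions of $`X`$ and its $`G`$-invariance are easy to check numerically. The following Python sketch is not part of the paper; the particular matrices $`g_i`$ and the scalar $`t=1.7`$ are arbitrary invertible choices. It verifies that $`(A,A,A,A)`$ with $`A`$ nilpotent lies in $`X`$ and that the $`G`$-action preserves membership:

```python
def mat_mul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

def inv(X):
    d = det(X)
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

def in_X(As, tol=1e-9):
    """det(A_i) = 0 and A_i A_{i+1} = 0 (indices mod 4), up to floating-point tolerance."""
    if any(abs(det(A)) > tol for A in As):
        return False
    return all(abs(mat_mul(As[i], As[(i + 1) % 4])[r][c]) <= tol
               for i in range(4) for r in range(2) for c in range(2))

def act(g, As):
    """The G-action (g0, g1, g2, g3, t) . (A0, A1, A2, A3) described above."""
    g0, g1, g2, g3, t = g
    B0 = mat_mul(mat_mul(g0, As[0]), inv(g1))
    return ([[t * B0[i][j] for j in range(2)] for i in range(2)],
            mat_mul(mat_mul(g1, As[1]), inv(g2)),
            mat_mul(mat_mul(g2, As[2]), inv(g3)),
            mat_mul(mat_mul(g3, As[3]), inv(g0)))

A = [[0.0, 1.0], [0.0, 0.0]]          # a nonzero nilpotent matrix
point = (A, A, A, A)                  # the point whose G-orbit closure is X
g = ([[1.0, 0.5], [0.0, 1.0]],        # arbitrary invertible g_i and t
     [[2.0, 0.0], [1.0, 1.0]],
     [[1.0, 0.0], [0.0, 3.0]],
     [[1.0, 1.0], [0.0, 2.0]],
     1.7)
```

Both `point` and `act(g, point)` satisfy the defining equations, while the identity tuple does not.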
###### Proposition 4.
The smooth part of $`X`$ meets $`\{0\}`$ microlocally in codimension one.
###### Proof.
This is equivalent to showing that the dual cone $`X^{\vee }\subset V^{*}`$ is a divisor. Using the inner product on $`V`$ given by matrix coordinates, $`V^{*}`$ is naturally identified again with $`\mathrm{Mat}(2\times 2,\mathbb{C})^4`$, with the action of $`G`$ given by
$$(g_0,g_1,g_2,g_3,t)\cdot (A_0,A_1,A_2,A_3)=(t^{-1}g_1A_0g_0^{-1},g_2A_1g_1^{-1},g_3A_2g_2^{-1},g_0A_3g_3^{-1}).$$
Certainly $`X^{\vee }`$ is $`G`$-stable, and it is easily checked that $`x=(I+A^t,I,I,I)`$ lies in $`X^{\vee }`$, where again $`A`$ is any nonzero nilpotent matrix. Its stabilizer is $`G_x=\{(g,g,g,g,1)\mid g=aI+bA^t,a\neq 0\}`$, a two-dimensional group, so $`dim_{\mathbb{C}}X^{\vee }=17-2=15`$. ∎
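The stabilizer computation can be spot-checked numerically: elements of the form $`(g,g,g,g,1)`$ with $`g=aI+bA^t`$ fix $`x=(I+A^t,I,I,I)`$ under the dual action, while a generic element does not. A Python sketch (not part of the paper; the values $`a=1.3`$, $`b=-0.7`$ are arbitrary):

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

def inv(X):
    d = det(X)
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

def act_dual(g, As):
    """The dual action (g0,g1,g2,g3,t) . (A0,A1,A2,A3) on V*, as above."""
    g0, g1, g2, g3, t = g
    B0 = mat_mul(mat_mul(g1, As[0]), inv(g0))
    return ([[B0[i][j] / t for j in range(2)] for i in range(2)],
            mat_mul(mat_mul(g2, As[1]), inv(g1)),
            mat_mul(mat_mul(g3, As[2]), inv(g2)),
            mat_mul(mat_mul(g0, As[3]), inv(g3)))

I2 = [[1.0, 0.0], [0.0, 1.0]]
At = [[0.0, 0.0], [1.0, 0.0]]                 # A^t for the nilpotent A = e_{12}
x  = ([[1.0, 0.0], [1.0, 1.0]], I2, I2, I2)   # x = (I + A^t, I, I, I)

a, b = 1.3, -0.7
g0 = [[a, 0.0], [b, a]]                       # g = a*I + b*A^t, a != 0
stab = (g0, g0, g0, g0, 1.0)

def close(As, Bs, tol=1e-9):
    return all(abs(As[k][i][j] - Bs[k][i][j]) <= tol
               for k in range(4) for i in range(2) for j in range(2))
```

Here `close(act_dual(stab, x), x)` holds because $`g`$ commutes with $`A^t`$, whereas an upper-triangular unipotent element moves $`x`$.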
Thus Corollary 3 applies to this example, showing that $`[\mathrm{\Lambda }_{\{0\}}]`$ appears in $`CC(\mathrm{𝐈𝐂}^{}(X))`$.
Also note that $`X`$ is a toric variety: although the maximal torus $`T\subset SL_8(\mathbb{C})`$ which acts on the flag variety is only $`7`$-dimensional, there is a larger torus in $`G`$ which acts on $`X`$ with finitely many orbits. A laborious calculation with an algorithm from or the equivalent formula in (Theorem 2.12 in Chapter 10) shows that $`dim\mathcal{M}_0(\mathrm{𝐈𝐂}^{}(X))=1`$.
## 4. The Lagrangian Grassmannian
In , Boe and Fu computed the characteristic cycles $`CC(\mathrm{𝐈𝐂}^{}(Y))`$ for Schubert varieties $`Y`$ in Hermitian symmetric spaces. In the compact cases, reducible characteristic varieties appeared only for the Lagrangian Grassmannian $`X`$ of Lagrangian subspaces of a complex symplectic space $`\mathbb{C}^{2n}`$. For concreteness, suppose that the symplectic form is given by $`\omega =\sum _{i=1}^n𝐞_i^{*}\wedge 𝐞_{2n+1-i}^{*}`$, where the $`𝐞_i^{*}`$ form the dual basis to the standard basis of $`\mathbb{C}^{2n}`$.
The Schubert decomposition of $`X`$ is given as follows. Let $`B\subset \mathrm{Sp}(\mathbb{C}^{2n},\omega )`$ be the Borel group of transformations preserving the standard flag. Given a word $`w\in \{\alpha ,\beta \}^n`$ of length $`n`$ in the letters $`\alpha `$ and $`\beta `$, we define a cell $`S_w`$ as follows. Let $`\overline{w}\in \{\alpha ,\beta \}^{2n}`$ be the word for which $`\overline{w}(2n+1-i)\neq \overline{w}(i)=w(i)`$ for $`1\leq i\leq n`$. Then $`E_w=span\{𝐞_i\mid \overline{w}(i)=\alpha \}`$ is a point in $`X`$, and we put $`S_w=B\cdot E_w`$ and $`X_w=\overline{S_w}`$. Also let $`N_w=B^{-}\cdot E_w`$, where $`B^{-}`$ is the opposite Borel to $`B`$; it is a normal slice to $`S_w`$ through $`E_w`$.
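The combinatorics of $`\overline{w}`$ guarantees that each $`E_w`$ is indeed Lagrangian: exactly one index from each symplectic pair $`\{i,2n+1-i\}`$ appears, so $`\omega `$ vanishes on $`E_w`$ and $`dimE_w=n`$. A small Python check (encoding $`\alpha `$ as `'a'` and $`\beta `$ as `'b'`; not part of the paper):

```python
from itertools import product

def bar(w):
    """Extend w in {a,b}^n to length 2n: position i keeps w(i), and the mirrored
    position 2n+1-i gets the other letter (here with 0-based list indices)."""
    n = len(w)
    out = [''] * (2 * n)
    for i in range(n):
        out[i] = w[i]
        out[2 * n - 1 - i] = 'b' if w[i] == 'a' else 'a'
    return out

def omega(i, j, n):
    """omega(e_i, e_j) for omega = sum_k e_k^* wedge e_{2n+1-k}^*, 1-based indices."""
    if j == 2 * n + 1 - i:
        return 1 if i <= n else -1
    return 0

def is_lagrangian(w):
    """E_w = span{e_i : bar(w)(i) = a} has dimension n and omega vanishes on it."""
    n = len(w)
    E = [i + 1 for i, c in enumerate(bar(w)) if c == 'a']
    return len(E) == n and all(omega(i, j, n) == 0 for i in E for j in E)
```

Running `is_lagrangian` over all words of a given length confirms that every $`E_w`$ is Lagrangian.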
Let $`𝐏_w=\mathrm{𝐈𝐂}^{}(X_w)`$. Given $`v,w\in \{\alpha ,\beta \}^n`$, let $`m_v^w`$ be the multiplicity with which $`[\mathrm{\Lambda }_v]`$ appears in $`CC(𝐏_w)`$. Boe and Fu’s calculation of these numbers can be summarized as follows.
###### Theorem 5 (Boe and Fu , Theorem 7.1D).
$`m_v^w=0`$ or $`1`$ for any $`v,w\in \{\alpha ,\beta \}^n`$. It is $`1`$ if and only if there is a chain
$$X_w=X_1\supset X_2\supset \mathrm{\cdots }\supset X_n=X_v$$
of Schubert varieties $`X_i=X_{w_i}`$ so that for all $`1\leq i<n`$ the codimension $`dim_{\mathbb{C}}X_i-dim_{\mathbb{C}}X_{i+1}`$ is even and $`S_{w_i}`$ meets $`S_{w_{i+1}}`$ microlocally in codimension one.
Suppose that there exists a chain $`\{w_i\}`$ satisfying the conditions of this theorem. We will use Theorem 2 to give a shorter proof that $`m_v^w\neq 0`$. Let $`E_i=E_{w_i}`$, $`S_i=S_{w_i}`$, $`\stackrel{~}{\mathrm{\Lambda }}_i=\stackrel{~}{\mathrm{\Lambda }}_{S_i}`$. Assume inductively that $`m_{w_i}^w>0`$ (this is trivially true for $`i=1`$), so $`\mathcal{M}_i=\mathcal{M}_{S_i}(𝐏_w)`$ is a nonzero local system on $`\stackrel{~}{\mathrm{\Lambda }}_i`$. We will show that this implies $`\mathcal{M}_{i+1}`$ is nonzero as well.
Put $`E=E_{i+1}`$, $`N=N_{w_{i+1}}`$, and $`S=N\cap S_i`$ and let
$$𝐐=𝐏_w|_N[dim_{\mathbb{C}}S_i]\cong \mathrm{𝐈𝐂}^{}(X_w\cap N).$$
The degree shift makes $`𝐐`$ a perverse sheaf.
We give $`N`$ the stratification induced from $`\{S_w\}`$. Let $`\gamma `$ be a loop in $`\stackrel{~}{\mathrm{\Lambda }}_S`$ as in Theorem 2. We will show that the local system $`\mathcal{M}_S(𝐐)`$ has trivial monodromy around $`\gamma `$, or equivalently, that $`\mathcal{M}_i`$ has trivial monodromy around $`\rho \circ \gamma `$, where $`\rho :\mathrm{\Lambda }_N\rightarrow \mathrm{\Lambda }`$ is the inverse to the homeomorphism given by restricting the natural projection $`T^{*}X|_N\rightarrow T^{*}N`$ to $`\mathrm{\Lambda }\cap T^{*}X|_N`$. We can then apply Theorem 2 to show that $`\mathcal{M}_{\{0\}}(𝐐)=\mathcal{M}_{S_{i+1}}(𝐏)\neq 0`$.
There is an action of the torus $`T=(\mathbb{C}^{*})^{n+1}`$ on $`X`$ which preserves the Schubert stratification, defined by acting on $`\mathbb{C}^{2n}`$ via
$$(z_0,z_1,\mathrm{\ldots },z_n)\cdot (x_1,\mathrm{\ldots },x_{2n})=(z_1^{-1}x_1,\mathrm{\ldots },z_n^{-1}x_n,z_0z_nx_{n+1},\mathrm{\ldots },z_0z_1x_{2n}).$$
The $`E_w`$ are all fixed points of this action, the normal slice $`N`$ is preserved, and the induced action on $`N`$ is linear. This action makes all Schubert variety singularities conical, as the following lemma shows.
###### Lemma 6.
For any word $`w\in \{\alpha ,\beta \}^n`$, there is a homomorphism $`\chi _w:\mathbb{C}^{*}\rightarrow T`$ for which the induced action of $`\mathbb{C}^{*}`$ on $`N_w`$ is the conical action.
###### Proof.
Let $`\chi _w(z)=(z,z^{a_1},\mathrm{\ldots },z^{a_n})`$, where $`a_i=1`$ if $`w(i)=\beta `$, and $`a_i=0`$ if $`w(i)=\alpha `$. Checking that this gives the required action is an exercise in local Grassmannian coordinates, similar to arguments in and . ∎
###### Proposition 7.
The loop $`\rho \circ \gamma `$ is homotopic (in $`\pi _1(\stackrel{~}{\mathrm{\Lambda }}_i)`$) to a loop generated by the action of a loop in $`(\mathbb{C}^{*})^{n+1}`$.
###### Proof.
Acting on $`\gamma `$ by the loop $`\theta \mapsto \chi _{w_{i+1}}(e^{i\theta })`$, we obtain the loop $`\chi _{w_{i+1}}^{-1}\gamma (\theta )=(x,e^{i\theta }\xi )`$; i.e. the point in $`N`$ stays fixed and the covector goes around by a conical action. We can then slide the loop $`\rho (\chi _{w_{i+1}}^{-1}\gamma )`$ in $`\stackrel{~}{\mathrm{\Lambda }}_i`$ to get a loop of the form $`(E_i,e^{i\theta }\xi ')`$ for some $`\xi '\in T_{E_i}^{*}X`$. Acting now by $`\chi _{w_i}(e^{i\theta })`$ produces a trivial loop. ∎
The fact that the monodromy of $`_i`$ around $`\rho \gamma `$ is trivial now follows from the following proposition.
###### Proposition 8.
Let $`X`$ be a complex variety with an action of a connected algebraic group $`G`$ and a $`G`$-invariant stratification. If $`𝐏=\mathrm{𝐈𝐂}^{}(\overline{S})`$ for some stratum $`S`$, then the Morse local systems $`\mathcal{M}_T(𝐏)`$ are constant along loops generated by acting by loops in $`G`$.
###### Proof.
Since $`𝐏`$ is a $`G`$-equivariant perverse sheaf, the local systems $`\mathcal{M}_T(𝐏)`$ must also be $`G`$-equivariant. ∎
no-problem/9907/physics9907012.html
# Identification and collapse—the seeds of quantum superluminal communication
## Abstract
We deeply analyze the relation between the identification of classical definite states and the collapse of a superposition of such definite states during identification, and show that although identification rejects superposition, it need not result in collapse, which provides one possibility for achieving quantum superluminal communication.
The special role of measurement was first stressed by Bohr in his complementarity principle as a way to interpret the non-classicality of the quantum world consistently. Von Neumann then rigorously formulated the measurement process in his measurement theory involving the projection postulate, but an inherent fuzziness in the definition of measurement or projection remained. In order to end the infinite spreading of linear superposition, the identification of the observer was implicitly resorted to by von Neumann and further advocated by Wigner as the step that breaks the linear superposition and generates the definite result. This may be the first statement about the relation between identification and collapse; put simply: if identified, then collapse.
But when facing the problem of quantum cosmology, this relation needs to be greatly revised. For the state of the whole universe, no outside measurement device or observer exists, so measurement and identification are stripped of their special role, and the collapse process, if it exists, must belong to the wave function itself. The recent dynamical collapse theories revise the relation further: the normal linear evolution and the projection process of the wave function are unified in one revised stochastic Schrödinger equation, and the collapse process is just a natural result of such evolution. The new relation is thus: whether or not identified, collapse will happen.
Although collapse need not resort to the identification of the observer, and is essentially an objective process, people still implicitly stick to the orthodox view, which asserts that once the conscious observer can identify the classical definite states, the collapse of the observed superposition of such states must happen. Attempts have been made to demonstrate that, according to the dynamical collapse theory, our brain satisfies just this condition, so that identification appears to be essentially connected with collapse again. This connection is evidently not accounted for by the dynamical collapse theory itself, and in fact results only from the requirement of the orthodox interpretation of present quantum theory.
On the other hand, the wide acceptance of this orthodox view is also due to its confusion with the well-known conclusion that identification rejects superposition. That conclusion states that when the conscious observer can identify the measurement result of a superposition state, collapse must happen; here the identification time denotes the time to identify the definite result for the measured superposition state, not for a measured classical definite state. This conclusion is reasonable: on the one hand, if the observed superposition state is identified as a definite result or perception, namely the identification part of the conscious observer is in a definite state, then collapse must happen; on the other hand, if the identification part of the conscious observer is in a superposition state, then no definite perception exists, and no definite result is identified either. (It is difficult to accept that a superposition state persists once it has been identified as a definite result.)
In the following, we re-analyze the relation between identification and collapse on the basis of the dynamical collapse theory, and show that although identification rejects superposition, it need not result in collapse, and that their combination can result in quantum superluminal communication. First, if the collapse of the measured superposition state is completed before the conscious observer identifies the measurement result, which may result from the entanglement with the measuring device, then the identification is indeed irrelevant to the collapse process, since what the observer identifies is just a classical definite state. Secondly, if the collapse of the measured superposition state is not completed before the conscious observer identifies the measurement result, then the identification process of the conscious observer will surely influence the collapse process of the measured superposition state. In the extreme situation where the conscious observer is the only "measuring device", the collapse process will be mainly determined by the identification process; we analyze this situation in detail below.
On the one hand, the identification by a conscious observer of a classical definite state, one of the states in the above measured superposition, is characterized mainly by two physical properties: the entangled energy needed to identify the state, and the identification time after which the result is identified. In general there exists no essential relation between them, but it is reasonable that under natural selection, in which only classical definite states are input to the conscious observer, the entangled energy will become smaller and smaller and the identification time shorter and shorter.
On the other hand, according to the general dynamical collapse theories, if the entangled energy becomes small, the collapse time becomes long. It is then reasonable that natural evolution will produce a conscious observer for whom the collapse of an observed superposition state happens only after the relevant classical definite state is identified, and whose collapse time, or identification time, for a superposition state is much longer than his identification time for one of the definite classical states in that superposition. Such a conscious observer can be conscious of the time difference between these two identifications and can thus distinguish measured non-orthogonal single states, after which achieving quantum superluminal communication would be an easy matter.
On the whole, we have shown that although identification rejects superposition, it need not result in collapse, and this provides one possibility for achieving quantum superluminal communication.
no-problem/9907/astro-ph9907153.html
# Using the Bulge of M31 as a Template for the Integrated X-ray Emission from LMXBs
## 1. Introduction
Nearly all that is known about the X-ray spectra of low mass X-ray binaries (LMXBs) has come from observations of LMXBs in the Galactic bulge or those that lie in the Galactic plane. Unfortunately, this also means that large quantities of Galactic hydrogen gas ($`N_H\sim 10^{22}`$ cm<sup>-2</sup>) lie between us and the LMXBs we wish to observe. Nearly all of the X-ray flux below 1 keV from these LMXBs is absorbed by intervening material between us and the LMXB. As a consequence, very little is known about the X-ray properties of LMXBs at very soft X-ray energies. A recent survey of 49 Galactic LMXBs observed with the Einstein Observatory found that a majority of the spectra were adequately fit with a powerlaw plus high energy exponential cutoff spectral model, with $`\mathrm{\Gamma }`$ between –0.2 and 1.0 and a high energy cutoff in the 3–7 keV range (Christian & Swank 1997). Thermal bremsstrahlung models with $`kT=5`$–10 keV are also frequently employed to describe the emission from LMXBs. Such models contribute relatively little to the X-ray emission in the 0.1–1 keV range compared to the 1–10 keV range. Given the large hydrogen column densities towards most of these objects, though, any soft component would have been completely absorbed.
Is there reason to believe that the X-ray spectrum of LMXBs is interesting below 1 keV? There are two examples of Galactic LMXBs that lie in directions of low hydrogen column densities that were observed with ROSAT and/or ASCA. Both LMXBs show evidence for very soft X-ray emission. Choi et al. (1997) confirmed earlier reports of a 0.1 keV blackbody component in addition to a harder power-law component in the X-ray spectrum of Her X-1 with ASCA. The very soft blackbody component has been interpreted as the thermal re-emission by an opaque distribution of gas around the neutron star. The Galactic LMXB MS 1603+2600 also exhibits very soft emission (Hakala et al. 1998), although there is some question as to whether this system is an LMXB or a cataclysmic variable (Ergma & Vilhu 1993). There are several LMXBs in globular clusters that lie in directions of low column densities that do not show strong excess soft X-ray emission. However, these LMXBs reside in low-metallicity environments. A recent study of 12 LMXBs located in globular clusters of M31 by Irwin & Bregman (1999) found a correlation between the X-ray spectral properties of the LMXB with the metallicity of the host globular cluster. The one LMXB in a globular cluster with greater than solar metallicity had a much softer X-ray spectrum than those LMXBs located in metal-poor clusters.
The possible existence of a soft component of LMXBs is particularly important in the case of early-type galaxies that are very underluminous in X-rays for a given optical luminosity (low $`L_X/L_B`$). In these galaxies it is suspected that the X-ray emission is primarily stellar in nature. Whereas it is well-established that the X-ray emission in X-ray bright elliptical galaxies is predominantly from hot ($`\sim `$0.8 keV) gas, X-ray faint galaxies appear to be lacking this component. Instead, their X-ray emission is characterized by a two-temperature (5–10 keV + 0.3 keV) model (Fabbiano, Kim, & Trinchieri 1994; Pellegrini 1994; Kim et al. 1996). The 5–10 keV component is generally regarded as the integrated emission from LMXBs, and has been seen in nearly all early-type galaxies observed with ASCA (see, e.g., Matsumoto et al. 1997). The origin of the soft component remains a mystery. Although it has been suggested that the source of the emission might be warm interstellar gas, recent work has suggested that the source of the soft emission is the same collection of LMXBs responsible for the 5–10 keV component (Irwin & Sarazin 1998a,b).
Given the paucity of Galactic LMXB candidates that are not heavily absorbed, the bulge of M31 provides the nearest laboratory for studying a large number of LMXBs in high metallicity environments at low X-ray energies. At a Galactic hydrogen column density of $`6.7\times 10^{20}`$ cm<sup>-2</sup>, the 0.1–1 keV spectra of LMXBs in the bulge of M31 are not completely absorbed (25% transmission at 0.35 keV), as is the case towards the Galactic bulge. Here, we analyze the joint ASCA and ROSAT PSPC spectrum of the inner 5′ of the bulge of M31. By combining both instruments we are able to fit the spectrum of the bulge of M31 over a broad energy range (0.2–10 keV), using the advantages of both instruments to complement one another. This spectrum can be used as a template for what the spectrum of a collection of LMXBs should look like in more distant early-type galaxies.
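As a rough check of the quoted transmission, one can fold the Galactic column density through a photoelectric absorption cross-section. The sketch below uses the piecewise-polynomial parameterization of Morrison & McCammon (1983); the coefficient values for the 0.284–0.4 keV band are quoted from memory and should be verified against the published table before relying on them.

```python
import math

# Morrison & McCammon (1983) parameterization: sigma(E), in units of
# 1e-24 cm^2 per hydrogen atom, is (c0 + c1*E + c2*E^2) / E^3 with E in keV.
# Coefficients below are for the 0.284-0.400 keV band (above the C edge);
# quoted from memory -- check against the original table.
C0, C1, C2 = 78.1, 18.8, 4.3

def sigma_cm2(E_keV):
    """Approximate photoelectric cross-section per hydrogen atom (cm^2)."""
    return (C0 + C1 * E_keV + C2 * E_keV**2) / E_keV**3 * 1e-24

def transmission(N_H, E_keV):
    """exp(-N_H * sigma): fraction of photons surviving the absorbing column."""
    return math.exp(-N_H * sigma_cm2(E_keV))

# Galactic column toward the bulge of M31, as quoted in the text.
T = transmission(6.7e20, 0.35)
print(f"transmission at 0.35 keV: {T:.2f}")
```

With these coefficients the result lands near the 25% transmission quoted above, while a Galactic-bulge-like column of $`10^{22}`$ cm<sup>-2</sup> transmits essentially nothing at this energy.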
## 2. Previous X-ray Observations of the Bulge of M31
The first spectral study of the bulge of M31 was performed with the Einstein IPC (0.2–4.5 keV) and MPC (1.2–10 keV) instruments by Fabbiano, Trinchieri, & Van Speybroeck (1987). For the inner 5′ of the bulge, this study found that a thermal bremsstrahlung model with $`kT=6`$–13 keV and no intrinsic absorption above the Galactic value fit the data well. Makishima et al. (1989) used Ginga (2–20 keV) data to study all of M31 (disk + bulge), and found a best-fit temperature of $`7.2\pm 0.4`$ keV with significant excess absorption, although this model yielded a rather large reduced $`\chi ^2`$ value. A better fit was obtained with a powerlaw with high energy cutoff model with $`\mathrm{\Gamma }=1.9\pm 0.3`$ and a cutoff energy of $`6.8\pm 0.5`$ keV. However, the absorption was once again an order of magnitude higher than the Galactic value.
Supper et al. (1997) analyzed a long ROSAT PSPC observation of the bulge of M31 and found a best-fit bremsstrahlung temperature of $`\sim `$1 keV, although the authors state that the temperature could not be well-determined. We interpret this to mean that the reduced $`\chi ^2`$ value of this model was large, and our analysis of the same data (§ 4) confirms this. Irwin & Sarazin (1998b) analyzed a much shorter (2800 second) ROSAT PSPC observation and found a best-fit bremsstrahlung temperature of $`0.78\pm 0.07`$ keV from a fit that was marginally acceptable, although a significantly better fit was found with a two component model: a Raymond-Smith (1977) thermal model with $`kT=0.36_{-0.06}^{+0.09}`$ keV and a metallicity $`0.012_{-0.005}^{+0.012}`$ solar, and a harder bremsstrahlung component with $`kT>6.4`$ keV. Both ROSAT PSPC results indicated that below 2 keV, the spectrum of the bulge of M31 was not well-represented by a hard 5–10 keV bremsstrahlung model, in contrast to the Einstein and Ginga results.
## 3. ROSAT PSPC and ASCA Data Reduction
We have chosen a long ROSAT PSPC observation of the bulge of M31 from the HEASARC archive (RP600068N00). The exposure time was 30,005 seconds. The spectrum of the inner 5′ of the bulge was extracted, and the energy channels were rebinned to contain at least 25 counts. A background spectrum, extracted from an annulus of 30′–40′ and corrected for vignetting, was scaled to and subtracted from the source spectrum. Energy channels below 0.2 keV and above 2.4 keV were then excluded.
A long ASCA observation of M31 was also taken from the HEASARC archive (63007000). The data were screened using the standard screening criteria applied to all the archival data (Revision 2 processing). The spectrum of the inner 5′ was extracted from the GIS2, GIS3, SIS0, and SIS1 data, with a total exposure time of 177,113 seconds for the combined GIS data and 102,333 seconds for the combined SIS data. We chose to analyze the BRIGHT2 SIS data, since the data can be corrected for echo and dark frame error effects in this mode. The SIS data were taken in 4-CCD mode, but nearly all of the inner 5′ of the bulge fit within one chip. Therefore, only data from this one chip were used in the analysis, in order to avoid complications that might arise from averaging together the responses from different chips. Background was obtained from the deep blank sky data provided by the ASCA Guest Observer Facility. We used the same region filter to extract the background as we did the data, so that both background and data were affected by the detector response in the same manner. Energy channels below 0.8 keV and above 10 keV were excluded. Once again, the energy channels were rebinned to contain at least 25 counts. Because of differences in the calibrations among the five data sets, as well as possible temporal variations in the flux between the ROSAT and ASCA observations, we have chosen to let the normalizations of all spectral models be free parameters. In all fits the SIS normalizations were consistent with one another, but about 20% less than the GIS and PSPC normalization. This is a result of having excluded a fraction of the data that fell off the primary chip. Table 1 gives details of the observations.
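The channel grouping used above (rebinning until each bin holds at least 25 counts, so that Gaussian $`\chi ^2`$ statistics apply) can be sketched as a simple greedy pass over the channels. The function name and the handling of a trailing partial bin are choices of this sketch, not taken from the reduction software actually used.

```python
def rebin_min_counts(counts, min_counts=25):
    """Group consecutive channels until each bin has >= min_counts counts.

    A trailing partial bin is merged into the previous one, so every output
    bin satisfies the threshold (a choice of this sketch).
    """
    binned = []
    acc = 0
    for c in counts:
        acc += c
        if acc >= min_counts:
            binned.append(acc)  # close this bin once the threshold is reached
            acc = 0
    if acc > 0:                 # leftover channels below threshold
        if binned:
            binned[-1] += acc
        else:
            binned.append(acc)
    return binned

channels = [3, 10, 14, 2, 30, 1, 1, 40, 5]
print(rebin_min_counts(channels))  # -> [27, 32, 47]
```

The total number of counts is preserved; only the binning changes.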
## 4. Results of Spectral Fitting
### 4.1. One Component Models
As a first step, we have attempted to fit a variety of single component spectral models to the data, the first being a thermal bremsstrahlung (TB) model. A very poor fit to the data (reduced $`\chi ^2=3.19`$) was found, with a best-fit temperature of 6.0 keV. A similarly poor fit was found when the ROSAT and ASCA data were analyzed separately. Poor fits were also obtained for blackbody, disk-blackbody, powerlaw, powerlaw with high energy cutoff (CPL), MEKAL (MKL), a self-Comptonization spectrum after Lamb & Sanford (1979; CLS), and a self-Comptonization spectrum after Sunyaev & Titarchuk (1980) models, with a reduced $`\chi ^2`$ always exceeding two for $`\sim `$1065 degrees of freedom. These models were chosen since they have often been found in the literature to describe the emission from LMXBs.
To illustrate that a soft component is needed in addition to the hard component, we have excluded the ROSAT data and fit the ASCA data only in the 2.0–10 keV range with a single component bremsstrahlung model with the absorption fixed at the Galactic value. Now the fit is good (reduced $`\chi ^2=1.09`$) with $`kT=7.4\pm 0.3`$ keV, consistent with the best-fit bremsstrahlung model obtained by Ginga over a similar energy range. We have extrapolated this model down to an energy of 0.2 keV, and plotted the ROSAT PSPC data and ASCA data below 2.0 keV over it in Figure 1. Although the 7.4 keV bremsstrahlung model provides a reasonable fit to the ROSAT data between 1.5–2.4 keV, below 1.5 keV there is a considerable excess of soft X-ray emission. This same feature is present in the ASCA data in the 0.8–1.5 keV range. A similar exercise using a cut-off powerlaw (CPL) model yielded similar results, with best-fit values of $`1.5\pm 0.1`$ and $`12.4_{-3.3}^{+6.9}`$ keV for the powerlaw exponent and cut-off energy, respectively.
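The logic of this exercise — fit only the hard band, extrapolate downward, and look for low-energy residuals — can be reproduced with a toy spectrum. The sketch below is purely illustrative: it uses simple exponentials for the photon spectra and made-up normalizations, not the actual M31 data or folded instrument responses.

```python
import math

KT_HARD, KT_SOFT = 7.4, 0.35   # keV, temperatures close to those quoted above
A_HARD, A_SOFT = 1.0, 1.0      # toy normalizations (not fitted values)

def toy_data(E):
    """'Observed' spectrum: a hard bremsstrahlung-like shape ~exp(-E/kT)
    plus a soft thermal excess confined to low energies."""
    return A_HARD * math.exp(-E / KT_HARD) + A_SOFT * math.exp(-E / KT_SOFT)

# Fit the hard-only model to the 2-10 keV band. With the shape fixed, the
# amplitude follows from linear least squares: A = sum(d*m) / sum(m*m).
E_hi = [2.0 + 0.1 * i for i in range(81)]
m = [math.exp(-E / KT_HARD) for E in E_hi]
d = [toy_data(E) for E in E_hi]
A_fit = sum(di * mi for di, mi in zip(d, m)) / sum(mi * mi for mi in m)

# Extrapolate below 2 keV: the "data" exceed the hard-only model.
excess_05 = toy_data(0.5) / (A_fit * math.exp(-0.5 / KT_HARD))
print(f"fitted amplitude: {A_fit:.3f}; data/model at 0.5 keV: {excess_05:.2f}")
```

Above 2 keV the soft term is negligible, so the hard-band fit recovers the hard amplitude almost exactly, while the extrapolation falls visibly short of the data at 0.5 keV — the same qualitative signature as Figure 1.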
### 4.2. Two Component Models
Next, various combinations of the spectral models described above were fit to the data. Whereas the parameters for the models were linked between the five data sets, the normalizations of each model were allowed to be independent of one another. The absorption was left as a free parameter. Of all the possible combinations, only three gave a fit with a reduced $`\chi ^2`$ less than 1.3 – the MKL + TB, MKL + CLS, and MKL + CPL models. The best-fit parameters for these models are shown in Table 4.1. The errors given are 90% confidence levels for one interesting parameter. For all fits there were approximately 1060 degrees of freedom. The three models gave identical reduced $`\chi ^2`$ values. Despite the low reduced $`\chi ^2`$ values, the null hypothesis probability for the fits was quite small (0.1%) because of the large number of degrees of freedom in the fit. However, the residuals had an approximately Gaussian distribution about zero, so although the models are not formally acceptable, they should lead to accurate fluxes for the two components. The best-fit MKL + TB model is shown in Figure 2 along with the residuals. In all three models, the best-fit absorption was $`\sim 35\%`$ below the Galactic H i value of Stark et al. (1992); therefore we have also fixed the absorption at the Galactic value for these two models, and determined the best-fit parameters. This caused an increase in the reduced $`\chi ^2`$ with a very small null hypothesis probability, but still provided a better fit than other models with the absorption left as a free parameter. Other than the metallicity of the MEKAL component, the best-fit parameters did not change significantly by fixing the absorption at the Galactic value.
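The point that a modest reduced $`\chi ^2`$ can still be formally rejected when the degrees of freedom are large is easy to check numerically. The sketch below uses the Wilson–Hilferty cube-root normal approximation to the $`\chi ^2`$ survival function (adequate at this level of precision) with illustrative numbers, not the exact fit statistics of the paper.

```python
import math

def chi2_null_prob(chi2, dof):
    """Approximate null-hypothesis probability P(X >= chi2) for X ~ chi2(dof),
    via the Wilson-Hilferty cube-root normal approximation."""
    z = ((chi2 / dof) ** (1.0 / 3.0) - (1.0 - 2.0 / (9.0 * dof))) \
        / math.sqrt(2.0 / (9.0 * dof))
    return 0.5 * math.erfc(z / math.sqrt(2.0))

red_chi2 = 1.13  # illustrative value, close to the reduced chi^2 of the fits
p_large = chi2_null_prob(red_chi2 * 1060, 1060)  # ~1060 dof, as in the text
p_small = chi2_null_prob(red_chi2 * 100, 100)    # same reduced chi^2, few dof
print(f"dof=1060: p = {p_large:.4f};  dof=100: p = {p_small:.3f}")
```

With $`\sim `$1060 degrees of freedom the same reduced $`\chi ^2`$ that would be perfectly acceptable for 100 degrees of freedom is rejected at well below the 1% level, which is exactly the situation described above.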
The temperature of the MEKAL component was well-determined (better than 10% accuracy) with a value between 0.36–0.38 keV, and a metallicity around 20% solar when the absorption was left free, and around 4% solar when the absorption was fixed at the Galactic value. For the hard component, the TB model gave a temperature $`7.8\pm 0.3`$ keV. The temperatures are consistent with the results of the analysis of a shorter PSPC observation (Irwin & Sarazin 1998b), but now the ASCA data have tightly constrained the temperature of the hard component, whereas only a lower limit was found before. For the CLS model, a slightly higher temperature was found along with a low optical depth. For the CPL model, values of $`\mathrm{\Gamma }=1.21.3`$ and a cutoff energy of 6–7 keV are similar to those found for individual Galactic LMXBs with Einstein data by Christian & Swank (1997). Table 4.2 gives the unabsorbed fluxes for the soft and hard components for each model over various energy ranges.
Our results contrast with those of Fabbiano et al. (1987), who found no evidence for a soft component despite the fact that the Einstein IPC was sensitive to photon energies down to 0.2 keV. Although the IPC had poor energy resolution at low energies, a soft component should have been detected. The soft excess emission becomes apparent around an energy of 1.5 keV (Figure 1), so energy resolution effects should not have hindered its detection. We find no obvious explanation for this discrepancy. The soft component is seen in multiple ROSAT PSPC pointings and in all four instruments of ASCA. In addition, we analyzed a shorter ASCA observation of M31 from the HEASARC archive (60037030) and found similar evidence for a soft component.
## 5. Discussion
### 5.1. The Origin of the Soft Component
From spectral fitting, the need for a soft component in the bulge of M31 is clearly evident. But what is the source of the emission? Two possibilities exist. First, the soft emission may emanate from a warm interstellar medium (ISM). The second alternative is that the source of the soft emission is the same as that of the hard component, namely LMXBs. The case for each is presented below.
A high resolution ROSAT HRI study of the bulge of M31 (Primini, Forman & Jones 1993) found that 45 point sources within the inner 5′ of the bulge accounted for 58% of the bulge emission, with the remaining emission being diffuse. Of the remaining emission, the authors estimate that about 14% is the result of the large-angle scattering component of the point response function of the HRI from the resolved point sources, and another 15%–26% is attributable to the integrated emission from point sources below the detection threshold, given their derived luminosity function for the bulge sources. This leaves between 25%–30% of the total emission unexplained by discrete sources or instrumental effects. Other sources such as K and M dwarf stars, cataclysmic variables, and RS CVn stars were insufficient to explain the remaining diffuse emission. Primini et al. (1993) were forced to conclude that the diffuse emission is either a new class of X-ray sources or a hot component of the interstellar medium.
At first glance the interstellar medium explanation seems quite attractive. From the spectral fits presented here, the soft component represents 35%–40% of the total X-ray emission in the 0.1–2.0 keV range (Table 4.2), at least for the case where the absorption was allowed to vary. This is roughly the same percentage of the total emission as the unexplained diffuse emission found by Primini et al. (1993). Furthermore, the soft emission is best-described by a MEKAL model, which is often used to describe the X-ray emission from a metal-enriched, optically thin thermal plasma. The stellar velocity dispersion of the bulge of M31 is 151 km s<sup>-1</sup> (Whitmore 1980). Using the velocity dispersion–X-ray temperature relation derived from a sample of 30 elliptical galaxies by Davis & White (1996) of $`kT\propto \sigma ^{1.45}`$, gas in a potential well of this magnitude should have a temperature of about 0.3 keV, in good agreement with the spectral fits. The best-fit metallicity value is consistent with metallicity measurements obtained from the hot gas in early-type galaxies (see, e.g., Matsumoto et al. 1997).
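Taking the quoted scaling at face value, one can anchor it at the M31 point (σ = 151 km s<sup>-1</sup> ↔ kT ≈ 0.3 keV) and extrapolate to other dispersions; note that this anchoring is a choice made here for illustration, not Davis & White's own fitted normalization.

```python
def kT_from_sigma(sigma_kms, sigma0=151.0, kT0=0.3):
    """kT ~ sigma^1.45 (Davis & White 1996), normalized to the M31 bulge
    point quoted in the text (151 km/s <-> ~0.3 keV; anchoring is ours)."""
    return kT0 * (sigma_kms / sigma0) ** 1.45

# A giant elliptical with sigma ~ 300 km/s lands near the ~0.8 keV gas
# temperatures seen in X-ray bright early-type galaxies.
print(f"kT(300 km/s) = {kT_from_sigma(300.0):.2f} keV")
```

The same relation that gives $`\sim `$0.3 keV for the M31 bulge thus gives $`\sim `$0.8 keV for a typical massive elliptical, consistent with the hot-gas temperatures quoted earlier for X-ray bright galaxies.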
However, equally compelling pieces of evidence support the claim that the source of the soft emission is LMXBs. First of all, a similar spatial study of the bulge of M31 with Einstein HRI data by Trinchieri & Fabbiano (1991) came to a different conclusion than Primini et al. (1993) concerning the nature of the unresolved emission. Trinchieri & Fabbiano (1991) found that 75% of the bulge emission was resolved into 46 point sources (as opposed to 58% found by Primini et al. 1993 with ROSAT), with the remaining diffuse emission easily attributable to point sources below the detection threshold. Trinchieri & Fabbiano (1991) also found a somewhat steeper luminosity distribution function than Primini et al. (1993). Einstein covered a harder energy bandpass than did ROSAT, so if the fainter sources had harder spectra than the brighter sources, this could lead to a difference in the measured slope between the two studies. Whether this is the cause of the discrepancy between the two studies is unclear. Deep observations with Chandra will be necessary to determine exactly what percentage of the total bulge emission cannot be attributed to point sources.
Second, the determination of the contribution of the soft component relative to the total emission is dependent on the magnitude of the absorption component assumed. When the absorption was fixed at the Galactic value, the amount of (unabsorbed) soft flux increased considerably when compared to the case where the best-fit column density was less than the Galactic value (Table 4.2). When the column density was fixed at the Galactic value, the soft component accounted for 55%–60% of the total emission in the ROSAT (0.1–2.0 keV) band. This is twice the amount of unexplained diffuse emission found by Primini et al. (1993). Thus, at least half of the soft emission must emanate from the discrete sources themselves rather than from any gas present in the bulge, if the model in which the absorption is fixed at the Galactic value is used.
But perhaps the strongest evidence supporting a soft component emanating from the LMXBs rather than an interstellar medium lies in the spectra of the individual LMXBs that have been resolved in the bulge. Supper et al. (1997) analyzed the spectra of 7 of the 22 bulge point sources resolved by the ROSAT PSPC and fit them with simple bremsstrahlung models. Since the statistics for any one LMXB were poor, this simple model provided an adequate fit to the spectra. The best-fit temperatures were in the range 0.45–1.5 keV, well below the canonical temperature of 5–10 keV previously assumed for LMXBs. This is consistent with the value of $`kT=0.78\pm 0.07`$ keV derived for the bulge of M31 as a whole with the same model by Irwin & Sarazin (1998b), although the fit in that case was only marginal due to better statistics. Nearly all of the remaining 15 point sources had X-ray colors (ratio of X-ray counts in three separate energy bands covering the bandpass of the PSPC) that were similar to the 7 for which temperatures were derived, indicating that they had similar spectra. Since the bright sources, faint sources, and the total emission from the bulge all seem to have similar spectra, this strongly suggests that the soft component seen in the integrated emission from the bulge is emanating from the LMXBs themselves, with little emanating from a hot interstellar medium component.
One final piece of evidence involves comparing the bulge of M31 to the bulge of the nearby Sa galaxy NGC 1291. Bregman, Hogg, & Roberts (1995) fit the ROSAT PSPC spectrum of NGC 1291 with a hard + soft component model similar to the one used to fit the bulge of M31. In fact, the X-ray colors of the bulge of NGC 1291 are nearly identical to those of the bulge of M31, despite the fact that the 0.5–2.0 keV X-ray–to–optical luminosity ratio of NGC 1291 is a factor of 1.7 higher than that of the bulge of M31 (Irwin & Sarazin 1998b). It seems unlikely that the difference in the $`L_X/L_B`$ values can be due to there being a higher percentage of the ISM component in NGC 1291 than in M31; this would lead to a difference in the X-ray colors between the two galaxies. Irwin & Sarazin (1998b) found that the C32 color (defined as the ratio of counts in the 0.91–2.02 keV band to the counts in the 0.52–0.90 keV band) for M31 and NGC 1291 was $`1.16\pm 0.05`$ and $`1.15\pm 0.14`$, respectively, after correcting for absorption (this color is only modestly dependent on absorption). Taking the MKL+TB spectral model for M31 presented in this paper, we added an additional MEKAL component to represent an ISM component that might be present in the spectrum of NGC 1291. This model had a temperature of 0.3 keV and a metallicity of 30% solar. This component was added in an amount such that the 0.5–2.0 keV luminosity increased by a factor of 1.7, to represent the difference in the $`L_X/L_B`$ values between M31 and NGC 1291. Doing this caused the C32 value to decrease by more than 30%. Yet the C32 value for NGC 1291 was identical to that of M31. If the difference in $`L_X/L_B`$ is due in part to an ISM, the ISM component needs to be exactly matched by an increase in the LMXB component to keep C32 in NGC 1291 the same as in M31. 
A more likely explanation is that the X-ray emission mechanism is identical in the two galaxies (solely LMXB emission), with NGC 1291 having a higher percentage of LMXBs per unit optical luminosity than M31.
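The C32 argument above can be illustrated with a toy counts model. Everything in the sketch is schematic: the hard component is represented only by its band ratio, the putative ISM component by a 0.3 keV exponential photon spectrum, and the band edges are rounded; these are not the actual folded spectral models used in the paper.

```python
import math

def band_counts(kT, lo, hi):
    """Counts of an exponential exp(-E/kT) photon spectrum in [lo, hi] keV
    (analytic integral of the exponential)."""
    return kT * (math.exp(-lo / kT) - math.exp(-hi / kT))

# Hard (LMXB-like) component, represented only by its C32 band ratio.
hard_hi, hard_lo = 1.16, 1.00      # counts in 0.91-2.02 keV / 0.52-0.90 keV
total = hard_hi + hard_lo

# Add a 0.3 keV "ISM" component carrying 70% of the original 0.5-2 keV flux,
# mimicking the factor-1.7 difference in L_X/L_B between NGC 1291 and M31.
kT_ism = 0.3
norm = 0.7 * total / band_counts(kT_ism, 0.5, 2.0)
add_hi = norm * band_counts(kT_ism, 0.91, 2.02)
add_lo = norm * band_counts(kT_ism, 0.52, 0.90)

c32_old = hard_hi / hard_lo
c32_new = (hard_hi + add_hi) / (hard_lo + add_lo)
drop = 1.0 - c32_new / c32_old
print(f"C32: {c32_old:.2f} -> {c32_new:.2f} ({100 * drop:.0f}% decrease)")
```

Because a 0.3 keV component deposits most of its counts in the soft band, even this crude version reproduces the qualitative result quoted above: a C32 decrease in excess of 30%, incompatible with the identical colors observed for the two bulges.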
### 5.2. Implications For Early-type Galaxies
At some level, stellar X-ray emission must contribute to the total X-ray emission in early-type galaxies, although that level is yet to be determined. In X-ray bright (high $`L_X/L_B`$) galaxies, there is little doubt that a hot ($`\sim `$0.8 keV) interstellar medium is responsible for most of the X-ray emission in these galaxies. But even in these galaxies, a measurable hard (5–10 keV) component has been detected with ASCA (Matsumoto et al. 1997), that seems to scale roughly with optical luminosity and has been attributed to the integrated emission from LMXBs. The X-ray faint early-type galaxies remain a puzzle. Low X-ray count rates in these galaxies make them difficult to study, but the emerging picture is that their X-ray spectra are much different than their X-ray bright counterparts.
The first piece of evidence that the spectra of X-ray faint early-type galaxies differed from those of X-ray bright galaxies came from observations performed by Einstein. Kim, Fabbiano, & Trinchieri (1992) found that X-ray faint galaxies exhibited significant excess very soft X-ray emission. Subsequent observations using ROSAT found the X-ray emission of several X-ray faint galaxies to be described by a two-component (very soft + hard) model, with the hard component attributed to LMXBs and the very soft component of unknown origin (Fabbiano et al. 1994; Pellegrini 1994; Fabbiano & Schweizer 1995). Irwin & Sarazin (1998b) showed this to be the case in all X-ray faint early-type galaxies. The temperature of the hard component was unconstrained, however, due to the limited bandpass of ROSAT.
Kim et al. (1996) performed a joint ROSAT + ASCA analysis of the X-ray faint galaxy NGC 4382. The agreement between their derived spectral parameters for NGC 4382 and the ones presented here for the bulge of M31 is remarkable. Kim et al. (1996) found a good fit (reduced $`\chi ^2`$=1.03 for 190 degrees of freedom) with a Raymond-Smith + TB model with variable absorption. Although we used a MEKAL model instead of a Raymond-Smith model, the difference between the two models was found to be minimal, affecting the metallicity the most. For the Raymond-Smith component Kim et al. (1996) found a temperature of $`kT=0.270.41`$ keV (90% confidence) and a metallicity unconstrained but greater than 10% at the 90% confidence level. They found the temperature of the TB component to be $`kT=4.312.8`$ keV. A best-fit column density that was $`\sim 2\times 10^{20}`$ cm<sup>-2</sup> below the Galactic value was also found for NGC 4382 as was the case for the bulge of M31. In addition, the contributions of each component to the total emission are in good agreement for both galaxies. Kim et al. (1996) found an unabsorbed hard–to–soft flux ratio of 1.5 (1.1–1.9), 2.5 (1.7–2.3), and 3.6 (2.5–4.7) in the 0.1–2 keV, 0.2–4 keV, and 0.25–10 keV bands, respectively (the error ranges were calculated using the 1$`\sigma `$ confidence levels on the fluxes given by Kim et al. 1996). From Table 4.2, our flux ratios in those energy bands are 1.7 (1.3–2.1), 2.9 (2.4–3.4), and 4.8 (4.0–5.6), respectively (90% confidence levels). This agreement in the spectral properties of NGC 4382 and M31 suggests a common emission mechanism.
The fact that the X-ray spectral properties of M31 and NGC 4382 are virtually identical, coupled with the fact that no more than 25% of the X-ray emission from the bulge of M31 can result from a warm ISM component points to the interesting (yet not entirely unexpected) conclusion that LMXBs constitute the majority of the X-ray emission in X-ray faint early-type galaxies. In these galaxies, it is quite possible that the ISM has been removed from the galaxy either by Type Ia supernovae-driven winds, or by environmental effects such as ram pressure stripping from the intracluster medium through which the galaxy is moving.
The question of whether the soft component seen in the bulge of M31 is stellar or gaseous will soon be unambiguously answered by Chandra. With its excellent spatial resolution, Chandra will easily determine if the soft emission is resolved or diffuse. In addition, Chandra will be able to resolve point sources in nearby early-type galaxies, at least those at a distance of Virgo or closer. The results presented above predict that in both cases the soft component will be found to emanate primarily from LMXBs.
We thank the anonymous referee for useful comments and suggestions. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. This work has been supported by NASA grant NAG5-3247.
no-problem/9907/hep-th9907137.html
# The non-chiral fusion rules in rational conformal field theories
## Abstract
We introduce a general method in order to construct the non-chiral fusion rules which determine the operator content of the operator product algebra for rational conformal field theories. We are particularly interested in the models of the complementary $`D`$-like solutions of the modular invariant partition functions with cyclic center $`Z_N`$. We find that the non-chiral fusion rules have a $`Z_N`$-grading structure.
One of the most important requirements in the construction of two-dimensional conformal field theories (CFT), the bootstrap requirement, is the existence of a closed associative operator product algebra (OPA) among all fields. The information about the OPA structure is collected in the structure constants, whose computation is fundamental since it permits in principle the determination of all correlation functions. Unfortunately, in practice, the structure of the OPA seems to be very complicated, and so, indeed, is the computation of the structure constants.
The structure of the OPA is expressed by the so-called fusion rules, which describe how the chiral-sector ‘vertices’, representing the holomorphic $`\left(z\right)`$ and antiholomorphic $`\left(\overline{z}\right)`$ dependencies of the primary fields, combine in the OPA. Although these fusion rules contain a large amount of information, this description seems to be incomplete. Indeed, for the maximally extended chiral algebras, i.e., models with no pure holomorphic primary fields, the chiral fusion rules describe the OPA structure completely, since the two structures are isomorphic. But for the non-maximally extended algebras, the relationship between the two structures is still not clear, and in these special cases the chiral fusion rules describe only one ‘chiral’ half of the full OPA structure.
The purpose of this letter is to propose a construction that provides a complete description of the OPA structure using the notion of the non-chiral fusion rules, first introduced for the $`D`$ series of the minimal $`\left(c<1\right)`$ models. The non-chiral fusion rules, in contrast to the chiral ones, determine the operator content of the operator product algebra, i.e., they determine which fields are present in the product (fusion) of two primary fields.
The first step is of course the chiral fusion rules: a non-chiral fusion coupling vanishes if and only if the corresponding chiral fusion coupling in one of the two sectors vanishes. This is called the naturality statement . The second step consists of properly combining the fusion rules in the two sectors by imposing the closure of the OPA, so that only combinations permitted by the operator content are considered. We note that the operator content, in relation to the modular solutions of the partition function, has been completely determined for a particular class of rational conformal field theories (RCFT) known as “simple current solutions” . We will henceforth specialize to this particular type of model. The third step is to show how the “simple-current symmetry”, a symmetry of the fusion rules, manifests itself in the construction of the non-chiral fusion rules. In this letter, we restrict ourselves to the case where the current symmetry forms a cyclic group $`Z_N`$.
Let us now start our construction. The existence of a closed associative operator algebra among all fields is expressed in the following :
$$\mathrm{\Phi }_𝐈(z,\overline{z})\mathrm{\Phi }_𝐉(w,\overline{w})=\underset{𝐊}{\sum}C_{\mathrm{𝐈𝐉𝐊}}\left(z-w\right)^{h_k-h_i-h_j}\left(\overline{z}-\overline{w}\right)^{\overline{h}_k-\overline{h}_i-\overline{h}_j}\left[\mathrm{\Phi }_𝐊(w,\overline{w})+\mathrm{}\right].$$
(1)
$`𝐈\equiv (i,\overline{i})`$ labels the different fields, with $`i`$ $`\left(\overline{i}\right)`$ representing the contribution (vertex) of the holomorphic (antiholomorphic) sector to the primary field. $`h_i`$ and $`\overline{h}_i`$ are the conformal dimensions of $`\mathrm{\Phi }_𝐈`$ and the ellipsis stands for terms involving descendant fields. $`C_{\mathrm{𝐈𝐉𝐊}}`$ are the structure constants of the operator algebra. We define the non-chiral fusion rules as a formal structure which determines the operator content of the operator algebra. Thus, the fusion of the two primaries $`\mathrm{\Phi }_𝐈`$ and $`\mathrm{\Phi }_𝐉`$ produces the field $`\mathrm{\Phi }_𝐊`$ if and only if the structure constant $`C_{\mathrm{𝐈𝐉𝐊}}`$ in (1) is non-zero. If we adopt $`(i,\overline{i})`$ and $`\star `$ as notations for a primary field $`\mathrm{\Phi }_𝐈`$ and the OPA operation respectively, the non-chiral fusion rules are formally given by:
$$(i,\overline{i})\star (j,\overline{j})=\underset{𝐊}{\sum}𝒩_{\mathrm{𝐈𝐉}}^𝐊(k,\overline{k});$$

$$𝒩_{\mathrm{𝐈𝐉}}^𝐊\ne 0\iff C_{\mathrm{𝐈𝐉𝐊}}\ne 0.$$
(2)
Eq. (2) is analogous to the way the chiral fusion rules (which also describe an associative structure) are defined in each of the two sectors. Indeed, one usually associates with each primary field $`\mathrm{\Phi }_𝐈`$ a formal object $`\left(i\right)`$ representing its chiral part and introduces a formal multiplication $`\times `$ by:
$$\left(i\right)\times \left(j\right)=\underset{k}{\sum}𝒩_{ij}^k\left(k\right),$$
(3)
where $`𝒩_{ij}^k`$ (the chiral fusion rule coefficients) counts the number of distinct ways in which $`\left(k\right)`$ occurs in the chiral fusion of the two fields $`\left(i\right)`$ and $`\left(j\right)`$ . If we denote by an arrow $`(\to )`$ the presence of $`\left(k\right)`$ in the fusion $`\left(i\right)\times \left(j\right)`$, the naturality statement amounts to:
$$\{\begin{array}{c}\left(i\right)\times \left(j\right)\to \left(k\right)\\ \left(\overline{i}\right)\times \left(\overline{j}\right)\to \left(\overline{k}\right)\end{array}\iff C_{\mathrm{𝐈𝐉𝐊}}\ne 0,$$
(4)
which means that for the non-chiral fusion rules, one must have the condition:
$$𝒩_{ij}^k\ne 0,𝒩_{\overline{i}\overline{j}}^{\overline{k}}\ne 0\iff 𝒩_{\mathrm{𝐈𝐉}}^𝐊\ne 0.$$
(5)
As mentioned earlier, the second step is to consider the closure of the OPA: we require the compatibility of the non-chiral fusion rules with the operator content determined by the modular constraints on partition functions. Denoting the operator content of a given conformal model by $`𝒪`$, this requirement is expressed by:
$$𝒩_{\mathrm{𝐈𝐉}}^𝐊\ne 0\text{ only if }𝐈,𝐉,𝐊\in 𝒪.$$
(6)
The third step relies, as we said, on symmetry considerations. Indeed, a symmetry of the fusion rules provided by the presence of simple currents is quite useful in the construction of a large class of modular invariant partition functions . The construction of simple currents as solutions to modular invariant partition functions reproduces the diagonal $`\left(A\right)`$-like and the non-diagonal $`\left(D\right)`$-like series of the $`SU\left(2\right)`$ Kac-Moody algebra. In the $`\left(D\right)`$-series case, it was emphasized in that the OPA is structured according to a discrete cyclic symmetry $`Z_2`$. Here we show how the analogue of this symmetry appears in more general cases as a consequence of the simple-current structure of the $`D`$-like series.
But first, a brief reminder of the structure of a simple-current symmetry. A simple current $`\left(j\right)`$ is a primary field (chiral part) whose chiral fusion with any field $`\left(i\right)`$ gives a single non-vanishing fusion coefficient $`𝒩_{ji}^k`$, so that $`\left(j\right)\times \left(i\right)=\left(k\right)`$ . Due to the associativity of the fusion product, the fusion of two simple currents is again a simple current. Simple currents thus form an abelian group under the fusion product, called the center of the theory. Since the number of primary fields is finite in a rational theory, the number of simple currents is finite. This implies that every simple current is of finite order, i.e., there must be an integer $`N`$ such that $`j^N=1.`$ The smallest such integer $`N`$ is called the order of the simple current. Furthermore, by rationality, there must also be a smallest positive integer $`N_i`$ such that $`\left(j\right)^{N_i}\times \left(i\right)=\left(i\right)`$. By associativity, $`N_i`$ must be a divisor of $`N`$. If a simple current takes a field into itself, we call the latter a fixed point of that current. We see then that any simple current organizes the primary fields into orbits $`\left\{\left(j^\alpha i\right);\alpha =0\mathrm{}N_i-1\right\}`$.
The presence of simple currents in a conformal field theory allows not only the organization of the fields into orbits but also the expression of a symmetry through the conservation of a monodromy charge $`Q\left(i\right)`$. The latter is defined modulo an integer and expresses the monodromy property of the OPA (1) of a primary field $`\left(i\right)`$ with a simple current $`\left(j\right)`$: $`Q\left(i\right)\equiv h_j+h_i-h_{\left(ji\right)}\;mod\;1.`$ This definition implies that the monodromy charge is additive under the operator product, i.e., $`Q\left(ik\right)=Q\left(i\right)+Q\left(k\right)`$, and it follows that all terms in the OPA of two fields must have the same charge. As a consequence, the monodromy charge must be conserved $`\left(\sum _nQ\left(i_n\right)=Q\left(1\right)=0\;mod\;1\right)`$ in order to have a non-vanishing $`n`$-point correlation function $`i_1i_2\mathrm{}i_n`$. So, if we denote the charge associated with a simple current $`j`$ by $`Q`$, then obviously the charge associated with $`j^d`$ is equal to $`dQ`$, as usual modulo $`1`$. This implies that for a simple current of order $`N`$ $`\left(j^N=1\right)`$ the charge $`NQ\left(j\right)`$ must be an integer, and therefore $`Q\left(j\right)\equiv \frac{r}{N}\;mod\;1`$, with $`r`$ defined modulo $`N`$.
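As a concrete illustration of these definitions (our own example, not taken from the letter), consider the $`SU(2)_k`$ WZW models, where the primary with the highest integer label $`l=k`$ is a simple current of order $`N=2`$ acting as $`j\times (l)=(k-l)`$. A short script can evaluate the defining relation for $`Q`$ from the conformal weights and check its additivity under fusion; the function names below are ours.

```python
from fractions import Fraction

def h(l, k):
    # conformal weight of the SU(2)_k primary with integer label l = 2*spin
    return Fraction(l * (l + 2), 4 * (k + 2))

def fuse(l1, l2, k):
    # SU(2)_k chiral fusion: l runs from |l1-l2| to min(l1+l2, 2k-l1-l2) in steps of 2
    return list(range(abs(l1 - l2), min(l1 + l2, 2 * k - l1 - l2) + 1, 2))

def Q(l, k):
    # monodromy charge w.r.t. the simple current j = (k): Q(i) = h_j + h_i - h_{ji} mod 1
    return (h(k, k) + h(l, k) - h(k - l, k)) % 1

k = 16
# known closed form for SU(2)_k: Q(l) = l/2 mod 1
assert all(Q(l, k) == Fraction(l, 2) % 1 for l in range(k + 1))
# additivity: every field in a fusion product carries the charge Q(l1) + Q(l2) mod 1
assert all(Q(l, k) == (Q(l1, k) + Q(l2, k)) % 1
           for l1 in range(k + 1) for l2 in range(k + 1) for l in fuse(l1, l2, k))
print("monodromy charge checks passed for SU(2) level", k)
```

Exact rational arithmetic (`Fraction`) avoids any floating-point ambiguity in the modulo-1 comparisons; here $`Q(j)=k/2\;mod\;1`$.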
The conservation of the monodromy charge means that, in the presence of simple currents, the center forms an abelian discrete symmetry group of the theory. This important fact is very useful for finding solutions to a large class of modular invariant partition functions using an orbifold-like method with respect to the center. The resulting modular invariant partition functions are known as simple-current invariants. A modular invariant partition function $`\left(Z=\sum M_{i,k}\chi _i\overline{\chi }_k\right)`$ is called a simple-current invariant if all fields paired non-diagonally are related by simple currents: $`M_{i,k}\ne 0`$ only if $`k=ji`$ for some simple current $`j`$. In the case where the center forms a cyclic group $`Z_N`$, i.e., there is only one orbit of simple currents $`\{1,j,j^2,\mathrm{},j^{N-1}\}`$, the non-zero couplings $`M_{i,k}`$ are given by :
$$M_{i,j^ni}=\frac{N}{N_i}\delta ^1\left[Q\left(i\right)+\frac{1}{2}nQ\left(j\right)\right],$$
(7)
where $`\delta ^1`$ is equal to one if its argument is an integer, and zero otherwise. The solutions (7) can be structured as automorphism invariants, integer-spin invariants, or a combination of the two. In the first structure, the two chiral sectors $`(i,j)`$ are combined via an automorphism of the fusion rules, $`\pi \left(i\right)=j`$. The second structure can always be regarded as a diagonal invariant of a chiral algebra larger than the one originally considered . This is what happens when some current $`j^n`$ has integer spin, so that the coupling $`M_{0,j^n}`$ is different from zero and hence produces a holomorphic field corresponding to the extra currents (Noether currents) that extend the algebra. For a cyclic center $`Z_N`$ with $`N`$ prime, one has a pure automorphism invariant for $`Q\left(j\right)\ne 0`$ or an integer invariant for $`Q\left(j\right)=0`$. For non-prime values of $`N`$, the general modular partition function can be a combination of the automorphism- and integer-invariant solutions. In the sequel, we exploit the operator-content structure provided by the forms of the modular invariant partition functions for $`N`$ prime in the construction of the non-chiral fusion rules. The same conclusions apply straightforwardly to non-prime values of $`N`$.
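To make Eq. (7) concrete, one can evaluate the couplings in a simple integer-invariant example of our own choosing (not from the letter): the $`Z_2`$ center of $`SU(2)_4`$, where $`Q(l)=l/2\;mod\;1`$ and $`Q(j)=0`$. The helper names below are ours; the nonzero couplings reproduce the pairing of the well-known $`D_4`$ modular invariant, including the doubled fixed point $`l=2`$.

```python
from fractions import Fraction

def Q(l, k):
    # monodromy charge w.r.t. the simple current j: l -> k - l (SU(2)_k: Q(l) = l/2 mod 1)
    return Fraction(l, 2) % 1

def couplings(k):
    # Eq. (7): M_{i, j^n i} = (N / N_i) * delta^1[ Q(i) + (n/2) Q(j) ], here N = 2
    N = 2
    M = {}
    for l in range(k + 1):
        N_l = 1 if 2 * l == k else 2          # orbit length of l under j
        for n in range(N_l):
            arg = Q(l, k) + Fraction(n, 2) * Q(k, k)
            if arg.denominator == 1:          # delta^1: nonzero only for integer argument
                M[(l, l if n == 0 else k - l)] = N // N_l
    return M

M = couplings(4)
# D4 invariant: Z = |chi_0 + chi_4|^2 + 2 |chi_2|^2
assert M == {(0, 0): 1, (0, 4): 1, (4, 0): 1, (4, 4): 1, (2, 2): 2}
print("nonzero couplings for SU(2)_4:", M)
```

The odd labels ($`Q=1/2`$) drop out entirely, while the fixed point $`l=2`$ appears with multiplicity $`N/N_i=2`$, exactly the degeneracy discussed below.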
We now turn to the construction of the non-chiral fusion rules. We start with the automorphism-invariant cases. The fact that the two chiral sectors of a primary field are related by an automorphism of the fusion rules makes the OPA structure isomorphic to the structure of the fusion rules . To see this, we first consider the operator content obtained from (7) for $`Q\left(j\right)\ne 0`$. This content can be organized into the sets:
$$A_k=\left\{(j^ki,j^{N-k}i)/Q\left(i\right)=0\right\}.$$
One can see that the two sectors are related by an automorphism of the fusion rules $`\pi \left(i\right)=i^c`$, where $`i^c`$ is the conjugate field of $`i`$ under the fusion operation. To construct the non-chiral fusion rules, let us consider two fields $`(j^ki,j^{N-k}i)`$ and $`(j^pi^{},j^{N-p}i^{})`$ of $`A_k`$ and $`A_p`$ respectively. Due to charge conservation in the chiral fusion rules, we have:
$`\left(j^ki\right)\times \left(j^pi^{}\right)`$ $`=`$ $`j^{k+p}\left(i\times i^{}\right)\ni j^{k+p}k,Q\left(k\right)=0;`$

$`\left(j^{N-k}i\right)\times \left(j^{N-p}i^{}\right)`$ $`=`$ $`j^{N-\left(k+p\right)}\left(i\times i^{}\right)\ni j^{N-\left(k+p\right)}k,Q\left(k\right)=0.`$
The compatibility with the operator content then implies:
$$(j^ki,j^{N-k}i)\star (j^pi^{},j^{N-p}i^{})=\underset{k}{\sum}(j^{p+k}\left(i\times i^{}\right)_k,j^{N-\left(p+k\right)}\left(i\times i^{}\right)_k),$$
(8)
where $`\left(i\times i^{}\right)_k`$ denotes the fields $`k`$ appearing in the chiral fusion of $`i`$ and $`i^{}`$. In a more compact form, the non-chiral fusion rules read:
$$A_k\star A_p=A_l,l=p+k\;mod\;N.$$
(9)
The non-chiral fusion rules thus have the structure of a $`Z_N`$-grading, where $`N`$ is the order of the center $`Z_N`$. The structure (8) is isomorphic to the chiral fusion rules, and thus its associativity derives directly from the associativity of the fusion rules.
It is important to make two observations related to the $`Z_N`$ structure (9). First, one can see that the set of scalar fields $`A_0`$ forms a closed sub-algebra of the OPA. Secondly, the $`Z_N`$-grading structure of the non-chiral fusion rules (9) points to the conservation of another charge (besides the monodromy charge) in the non-chiral fusion rules. Indeed, if we assign to each field in a set $`A_p`$ the charge $`p/N`$, which we call the “distance charge”, then this charge is conserved.
The above two observations are very important in the construction of the non-chiral fusion rules in the integer-invariant cases, where the problem is less evident. Indeed, while the automorphism structure of the operator content discussed so far provides an isomorphism between the OPA and the fusion rules, this is not the situation in the integer-invariant cases. Formally, the difficulties arise principally from the fact that if $`Q\left(j\right)=0`$, then the charge assigned to $`\left(j^pi\right)`$ with $`Q\left(i\right)=0`$ is also zero.
We tackle the issue in these cases starting from the modular partition function (7), which yields the following structure of the operator content:
$$A_k=\left\{(i,j^ki)/Q\left(i\right)=0\right\},k=0\mathrm{}N-1.$$
(10)
Let us now proceed naively with our algorithm, as we did in the automorphism-invariant cases, and see what happens. From the structure of the operator content, one can easily see that it is not possible to obtain a structure as nice as (9). Indeed, for each $`\left(i\right)`$ with $`Q\left(i\right)=0`$, we can find $`p`$ and $`\left(k\right)`$ with $`Q\left(k\right)=0`$ such that $`j^pk=i.`$ As a consequence, the products of the ‘scalar’ fields $`A_0\star A_0`$ may contain fields from different sets (10) which are not ‘scalars’. A more serious problem comes from the presence of orbits with length $`N_i\ne N`$ or, in other words, the presence of fixed-point fields $`\left(j\right)\times \left(f\right)=\left(f\right)`$. From (7), this means ($`N_i=1`$) that one has $`N`$ copies of the scalar field $`(f,f)`$ in the operator content. In the structure of the operator content (10), this multiplicity translates into the fact that a fixed point of $`\left(j\right)`$ is also a fixed point of $`\left(j^k\right)`$, so that the fields $`(f,j^kf)\in A_k`$ are all equal to $`(f,f)`$. In the construction of the non-chiral fusion rules, the presence of multiple copies of a field creates a problem, since we are not able to differentiate the behavior of the different copies in the OPA.
Following our earlier work , the existence of multiple copies ($`N`$ copies) of the same primary field $`\left\{\mathrm{\Phi }_p,p=0\mathrm{}N-1\right\}`$ implies, by the field-state correspondence, that the corresponding lowest-weight state (ground-state vector) is degenerate. This observation suggests the existence of a discrete symmetry $`Z_N`$ of the ground states such that:
$$Z_N\left(\mathrm{\Phi }_p\right)=e^{\frac{i2\pi p}{N}}\mathrm{\Phi }_p.$$
(11)
Thus the symmetry $`Z_N`$ enables one to separate the contributions of the different copies of a degenerate field: we only need to impose the consistency of the non-chiral fusion rules with the action of this symmetry. But we must also determine the action of $`Z_N`$ on the other fields of the theory. To do so, we first use a heuristic argument, namely that fields of the same structure (fields belonging to the same set $`A_p`$) must have the same behavior under $`Z_N`$. Observing that each copy $`\mathrm{\Phi }_p`$ of a degenerate field belongs to a set $`A_p`$, we can write:
$$Z_N\left(\mathrm{\Phi }\right)=e^{\frac{i2\pi p}{N}}\mathrm{\Phi },\mathrm{\Phi }A_p.$$
(12)
The action of the discrete symmetry $`Z_N`$ provides each field of $`A_p`$ with a charge equal to the distance charge $`p/N`$. The consistency of the non-chiral fusion rules with the $`Z_N`$ action (11, 12) implies the conservation of this charge under the fusion operation, so that finally we obtain a $`Z_N`$-grading (as in the automorphism cases) of these rules:
$$(i,j^pi)\star (i^{},j^{p^{}}i^{})=\underset{k}{\sum}(\left(i\times i^{}\right)_k,j^{p+p^{}}\left(i\times i^{}\right)_k).$$
(13)
In these rules, the different copies $`\mathrm{\Phi }_p`$ of a degenerate field are considered not as $`(f,f)`$ but as $`(f,j^pf)`$. One first notes that the set $`A_0`$ of the ‘scalar’ fields forms a sub-algebra of the OPA. By ‘scalar’ here we mean not only the spinless fields but also the fields that are singlets under the action of the symmetry $`Z_N`$. More importantly, the associativity of (13) is again a consequence of the isomorphism with the structure of the chiral fusion rules.
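The $`Z_N`$-grading (13) can be verified explicitly on a small example (our own, with hypothetical helper names): the integer invariant of $`SU(2)_4`$ with center $`Z_2`$. The content (10) consists of $`A_0=\{(0,0),(2,2),(4,4)\}`$ and $`A_1=\{(0,4),(2,2),(4,0)\}`$, the field $`(2,2)`$ appearing once in each set as the two copies of the fixed point.

```python
k, N = 4, 2

def fuse(l1, l2):
    # SU(2)_4 chiral fusion rules
    return range(abs(l1 - l2), min(l1 + l2, 2 * k - l1 - l2) + 1, 2)

def j(l, p):
    # action of j^p on a chiral label (j: l -> k - l, order 2)
    return l if p % N == 0 else k - l

# operator content, Eq. (10): A_p = {(i, j^p i) : Q(i) = 0}, and Q(i) = 0 <=> i even
A = {p: {(i, j(i, p)) for i in range(0, k + 1, 2)} for p in range(N)}

def nonchiral_fuse(F1, p1, F2, p2):
    # Eq. (13): (i, j^p i) * (i', j^p' i') = sum_m ((i x i')_m, j^{p+p'} (i x i')_m)
    grade = (p1 + p2) % N
    return {(m, j(m, grade)) for m in fuse(F1[0], F2[0])}, grade

for p1 in range(N):
    for p2 in range(N):
        for F1 in A[p1]:
            for F2 in A[p2]:
                result, grade = nonchiral_fuse(F1, p1, F2, p2)
                assert result <= A[grade]   # Z_2-grading: A_p * A_q lies in A_{p+q mod 2}
print("Z_2-grading of the non-chiral fusion rules verified for SU(2)_4")
```

Products landing on $`(2,2)`$ with grade 1 are to be read as producing the second copy of the degenerate field, in accordance with the $`Z_N`$ assignment (12).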
The conservation of the $`Z_N`$ charge $`p/N`$ in the non-chiral fusion rules suggests that a simple-current-field interpretation would put our construction on a less heuristic footing. In the integer-invariant cases, the simple current $`\left(j\right)`$ is present as a chiral part of some physical fields, namely:
$$𝐉_q^p=(j^q,j^{p+q})\in A_p.$$
(14)
These fields $`𝐉_q^p`$ are indexed by two integers $`q`$ and $`p`$. The first index $`q`$ is present in the two sectors as a power of $`j`$, while the second index, the distance $`p,`$ appears as a power of $`j`$ in only one sector and so represents the non-diagonal anisotropy in the coupling between the two sectors. These fields (14) form a sub-algebra of the OPA:
$$𝐉_{q_1}^{p_1}\star 𝐉_{q_2}^{p_2}=𝐉_{q_3}^{p_3};q_3=q_1+q_2\;mod\;N,p_3=p_1+p_2\;mod\;N,$$
(15)
and their action on a field $`(i,j^pi)`$ $`A_p`$ is determined by our algorithm:
$$𝐉_{q_1}^{p_1}\star (i,j^pi)=(k,j^{p+p_1}k)\in A_{\left[p+p_1\;mod\;N\right]},k=j^{q_1}i.$$
(16)
Thus, one sees that the physical fields $`𝐉_q^p`$ in (14) are the simple-current fields of the non-chiral fusion rules. From their structure, it is easily seen that these simple fields divide into two categories. The first one is that of the scalar fields, $`p=0`$. They form a sub-algebra of the current-field algebra (15) and are generated under the fusion operation from $`(j,j)`$. The second category is characterized by a non-vanishing distance $`p`$ and hence consists of fields of non-vanishing spin. As a representative element of this set, one has the holomorphic (antiholomorphic) field $`(1,j)`$ ($`(j,1)`$). From (15), it is clear that the non-vanishing-spin simple fields are all generated from $`(1,j)`$ and $`(j,j)`$.
As with the chiral fusion rules, the presence of simple fields implies the conservation of a charge in the non-chiral fusion rules. Indeed, the action of the simple scalar field $`(j,j)`$ (16) on a given field affects the two sectors on an equal footing, so that the general structure of this field does not change. Hence, the simple field $`(j,j)`$ plays the same role as the simple current $`\left(j\right)`$, and so it expresses the conservation of the monodromy charge in the non-chiral fusion rules. This fact actually comes from imposing the naturality statement in the construction of the non-chiral fusion rules. More important is the holomorphic simple field $`(1,j)`$, since its action (16) affects only the right sector and hence establishes a correspondence between fields in different sets $`A_p`$:
$$(1,j)\star A_p=A_{p+1}.$$
(17)
The holomorphic simple field $`(1,j)`$ plays the role of the generator of the distance $`p`$. In terms of orbifold partition functions , Eq. (17) means that fields in the “twisted sectors” $`\left\{A_{p\ne 0}\right\}`$ are generated gradually by the non-chiral fusion of the “twist field” $`(1,j)`$ with fields of the “untwisted sector” $`A_0`$. From this, we can deduce an important fact about the behavior of the degenerate fields. Indeed, denoting each component of a degenerate field by $`\left\{\mathrm{\Phi }_p\right\}_{p=0}^{p=N-1}`$ as an element of a set $`\left\{A_p\right\}_{p=0}^{p=N-1}`$, we deduce from (17) that:
$$(1,j)\star \mathrm{\Phi }_p=\mathrm{\Phi }_{p+1}.$$
(18)
Eq. (18) is nothing but a reinterpretation of the action of the $`Z_N`$ cyclic symmetry (11).
It is now possible to obtain the general structure of the non-chiral fusion rules by establishing their $`Z_N`$-grading. We start from the general form of the non-chiral fusion rules for the set of ‘scalar’ fields $`A_0`$:
$$A_0\star A_0=\underset{i=0}{\overset{N-1}{\sum}}B_0^iA_i,$$
(19)
where the coefficient $`B_0^i`$ is non-vanishing if at least one field from the set $`A_i`$ appears in the non-chiral fusion of two given scalar fields. Since the scalars are self-conjugate, we must have at least $`B_0^0\ne 0`$. The general structure of the non-chiral fusion rules can be deduced by using (17) as a $`Z_N`$-gradation of the scalar rules (19):
$$A_p\star A_q=\underset{i=0}{\overset{N-1}{\sum}}B_0^iA_{i+k},k=p+q\;mod\;N.$$
(20)
It is important at this point to recall from Eq. (18) that each component $`\mathrm{\Phi }_p`$ of a degenerate field behaves like a field of $`A_p`$ in (20). To complete the determination of the non-chiral fusion rules, it remains to establish the ‘scalar’ rules (19). It is important in this regard to recall the nature of the set of ‘scalar’ fields $`A_0`$: it represents the “untwisted sector” in the orbifold picture of the partition function. This sector, having integer charge, forms a sub-algebra of the original diagonal theory from which the orbifold procedure has been performed, see . Therefore, it is natural to require that the scalar rules (19) have the following ‘associative’ structure:
$$A_0\star A_0=A_0,$$
and so, finally, the non-chiral fusion rules (20) will have the desired $`Z_N`$-grading structure (13).
In conclusion, we have developed in this work a method to construct non-chiral fusion rules as a formal structure describing the operator content of the OPA of a CFT. We are particularly interested in the non-diagonal rational conformal models obtained from a simple-current construction of modular invariant partition functions with cyclic center $`Z_N`$. The starting point of this construction is the naturality statement, from which we have imposed the consistency of the non-chiral fusion rules with the usual fusion rules in the two chiral sectors. The results of these fusion operations are then combined by imposing the closure of the OPA, and thus consistency with the operator content, which was taken to be induced by simple currents. In a third step, we have used the simple-current structure of the models to establish the conservation of a charge termed the distance charge. This charge is related to the anisotropy (the distance) in the coupling between the two sectors via the simple current $`j`$. While this symmetry is trivially realized in the automorphism-invariant cases, as expected from , the situation is less trivial in the integer-invariant cases and a more careful analysis is required. For these last cases, the analysis shows the prominent role played by the holomorphic fields $`(1,j^p)`$ as simple-current fields of the distance-charge symmetry. This important fact is reminiscent of the Martinec conjecture, which states that the original ‘chiral algebra’ symmetry is not sufficient for a rational theory, and the symmetry of the full ‘set of holomorphic fields’ is needed to accomplish its description. As a result, one checks that each field $`(i,j^pi)`$ ($`Q\left(i\right)=0`$) of the integer-invariant part of the operator content has a distance charge equal to:
$$\frac{p}{N}+\alpha \frac{N_i}{N}\left(1-\delta _{N_iN}\right)\;mod\;1,\alpha =0\mathrm{}\frac{N}{N_i}-1,\text{ }p=0\mathrm{}N_i-1.$$
(21)
It is important to point out that the second contribution to the distance charge (21), present for orbits of length $`N_i<N`$, allows one to differentiate the behaviors of the $`N/N_i`$ copies of a degenerate primary field $`(i,j^pi)`$ in the OPA. As a final result of our construction, structuring the fields according to their distance charge into sets $`A_p`$, the non-chiral fusion rules have a $`Z_N`$-grading structure.
As a final comment, we recall that our construction of the non-chiral fusion rules is an important step towards the ultimate determination of the structure constants of the OPA, following our earlier work on the minimal models. It is therefore of great importance to be able to find the structure of the non-chiral fusion rules in the more general case of a non-cyclic center $`Z_{p_1}\times Z_{p_2}\times \mathrm{}\times Z_{p_n}`$. This is currently under investigation.
Acknowledgments
We would like to thank J. Fuchs for his interest in this work and his constructive correspondence. One of us (A.R.) would like to thank A. Abada for all his help and encouragements.
# Critical states of transient chaos
## I Introduction
Transient chaos has attracted increasing interest in the last decade due to its connection to diffusion and chaotic advection . Transiently chaotic behavior often develops in a time period preceding the convergence of trajectories to an attractor, or their escape from the considered region of space, as is the case in chaotic scattering. The length of this time period depends on the starting point of the trajectory and is unbounded: there are trajectories (though of Lebesgue measure zero) that never escape. The behavior of very long trajectories is governed by the properties of the maximal invariant set, the chaotic repeller, and the natural measure on it . This measure is related to the conditionally invariant measure ; namely, the former is the restriction of the latter to the repeller, normalized to unity there (see for reviews ).
Transient chaos is much richer in possibilities than permanent chaos. Regarding, e.g., the frequently studied class of chaotic systems, the 1D maps, there are rigorous theorems stating that for everywhere expanding maps exhibiting permanent chaos there exists a unique absolutely continuous invariant measure. In the case of transient chaos, however, this is no longer valid for the conditionally invariant measure , which in many respects takes over the role of an invariant measure. The main purpose of the present paper is to investigate this question further. It will be shown that one has to distinguish between normal (non-critical) and critical conditionally invariant measures. While the former is typically unique, there are continuously many critical conditionally invariant measures. The latter deserve their name since their corresponding natural measure is degenerate: it is non-zero only on a non-fractal subset of the repeller, namely a fixed point in the case of the 1D maps we study in the present paper.
The map generated by the function $`f(x)`$ is assumed to map two subintervals $`I_0`$ and $`I_1`$ of $`[0,1]`$ onto the whole of $`[0,1]`$ (see Fig. 1). It is monotonically increasing in $`I_0=[0,\widehat{x}_0]`$ and decreasing in $`I_1=[\widehat{x}_1,1]`$, with $`f(0)=f(1)=0`$ and $`f(\widehat{x}_0)=f(\widehat{x}_1)=1`$. The value of $`f`$ is undefined in $`(\widehat{x}_0,\widehat{x}_1)`$, and a trajectory falling into this interval is considered to escape in the next iteration. We assume $`f`$ is smooth and hyperbolic ($`1<|f^{}(x)|<\mathrm{}`$) in $`I_0`$ and $`I_1`$, or we allow singular behavior with infinite slope at $`\widehat{x}_0`$, $`\widehat{x}_1`$, and $`x=1`$.
Instead of treating the Frobenius-Perron operator for the density $`P^{(k)}(x)`$ we deal with the measure $`\mu ^{(k)}(x)=\mu ^{(k)}([0,x])`$, where $`\mu ^{(k)}([x_1,x_2])=_{x_1}^{x_2}P^{(k)}(x)𝑑x`$. Note that $`\mu ^{(k)}(x)`$ is a monotonically increasing function. The upper index refers to the discrete time. The equation of time evolution for the measure can be written as
$$\mu ^{(k+1)}(x)=T\mu ^{(k)}(x)\equiv T_0\mu ^{(k)}(x)+T_1\mu ^{(k)}(x),$$
(1)
where the contributions of the two branches are
$`T_0\mu ^{(k)}(x)`$ $`=`$ $`\mu ^{(k)}(f_0^{-1}(x)),`$ (2)

$`T_1\mu ^{(k)}(x)`$ $`=`$ $`\mu ^{(k)}(1)-\mu ^{(k)}(f_1^{-1}(x)).`$ (3)
$`f_0^{-1}(x)`$ ($`f_1^{-1}(x)`$) denotes the lower (upper) branch of the inverse of $`f(x)`$. Since a portion of the trajectories escapes in every step, normalization is necessary to ensure that the iteration converges to a certain measure , which is then an eigenfunction of $`T`$, namely
$$T\mu (x)=e^{-\kappa }\mu (x).$$
(4)
The measure $`\mu `$ is called the conditionally invariant measure (the notation $`\mu `$ without an upper index always refers to it in the present paper), and $`\kappa `$ is the escape rate. The aforementioned definition of the natural measure $`\nu `$ yields its connection to $`\mu `$. Namely, the natural measure of a non-fractal set $`A`$ ($`\mu (A)>0`$) is given by
$$\nu (A)=\underset{n\mathrm{}}{lim}\frac{\mu (A\cap f^{-n}[0,1])}{\mu (f^{-n}[0,1])}.$$
(5)
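Eq. (5) can be illustrated numerically (our own example, not part of the paper) with the open tent map $`f(x)=3x`$ on $`[0,1/3]`$ and $`f(x)=3(1-x)`$ on $`[2/3,1]`$, for which the conditionally invariant measure with linear behavior is exactly the Lebesgue measure. Initial conditions that survive many iterations distribute according to the natural measure on the middle-third Cantor repeller; by the symmetry of the map, $`\nu ([0,1/3])=1/2`$.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 2_000_000)   # start from the Lebesgue measure
x0 = x.copy()                          # remember the initial conditions

for _ in range(12):                    # keep only trajectories surviving 12 steps
    alive = (x < 1/3) | (x > 2/3)      # the middle third escapes
    x, x0 = x[alive], x0[alive]
    x = np.where(x < 1/3, 3 * x, 3 * (1 - x))

# Eq. (5) with A = [0, 1/3]: fraction of surviving initial conditions lying in A
nu_A = np.mean(x0 < 1/3)
print("survivors:", x0.size, " nu([0,1/3]) =", nu_A)
```

The survivor count decays as $`e^{-\kappa _1n}`$ with $`\kappa _1=\mathrm{log}(3/2)`$ for this map, while the estimate of $`\nu ([0,1/3])`$ approaches $`1/2`$ as $`n`$ grows.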
The paper is organized as follows. The infinity of coexisting conditionally invariant measures and the general condition for criticality are presented in Section 2. To gain a deeper understanding of the conditionally invariant measures, we study the spectrum of the Frobenius-Perron operator in Section 3. In Section 4 we show how maps with a critical state are connected to each other by singular conjugation, whereby the singular measures are brought into connection with nonsingular ones. In Section 5 it is shown that the natural measures corresponding to critical states are fully concentrated on the fixed point at $`x=0`$, which is the main property of criticality. Section 6 is devoted to demonstrating some of the results on examples, with further discussion.
## II Conditionally invariant measures
First we assume that $`f`$ is nonsingular in $`I_0`$ and $`I_1`$; in the second part of this section we shall study the case when the map may be singular, with infinite slope at $`x=\widehat{x}_0,x=\widehat{x}_1`$ and/or $`x=1`$. We shall see that even in the first case there are continuously many conditionally invariant measures $`\mu _\sigma `$ that have different power-law behavior $`\mu _\sigma \sim x^\sigma `$ with $`\sigma >0`$ at $`x=0`$. In order to show this, we study which measure is obtained asymptotically when we start with an initial measure $`\mu ^{(0)}`$ that is smooth but scales as $`\mu ^{(0)}(x)\sim ax^\sigma `$ for $`x\ll 1`$. (The simplest possibility is to choose $`\mu ^{(0)}=x^\sigma `$, which will be used in the numerical calculations. Note that $`\sigma =1`$ corresponds to the Lebesgue measure.) The escape rate obtained in the asymptotics shall be denoted by $`\kappa _\sigma `$. First we study the action of the terms $`T_i`$ of $`T`$ on $`\mu ^{(0)}`$ separately (see Eqs. (1)-(3)). It can easily be seen that the leading term of $`T_0\mu ^{(0)}`$ is
$$T_0ax^\sigma \simeq ae^{-\lambda _0\sigma }x^\sigma \text{if}x\ll 1,$$
(6)
where $`\lambda _0=\mathrm{log}(f^{}(0))`$ is the local Liapunov exponent at the fixed point. On the other hand, since $`T_1`$ does not take values of $`\mu ^{(0)}`$ from the vicinity of $`x=0`$, $`T_1\mu ^{(0)}`$ is smooth, with linear behavior
$$T_1\mu ^{(0)}\simeq bx\text{if}x\ll 1.$$
(7)
Therefore $`T_0\mu ^{(0)}`$ dominates in $`T_0\mu ^{(0)}+T_1\mu ^{(0)}`$ if $`\sigma <1`$, while $`T_0\mu ^{(0)}`$ and $`T_1\mu ^{(0)}`$ both scale linearly near $`x=0`$ if $`\sigma =1`$. So the scaling of $`\mu ^{(0)}`$ at $`x=0`$ is retained if $`\sigma \le 1`$. That is why we expect that the conditionally invariant measure and the corresponding escape rate may depend on $`\sigma `$.
Turning to the asymptotics for large time we rewrite the $`n`$-th iterate as
$$T^n\mu ^{(0)}=T_0^n\mu ^{(0)}+\underset{k=1}{\overset{n}{\sum}}T^{k-1}T_1T_0^{n-k}\mu ^{(0)}.$$
(8)
It can be seen from Eq. (6) that the first term on the r. h. s. of (8) gives a contribution $`ae^{-\lambda _0\sigma n}x^\sigma `$ for small $`x`$. The similar factor in the sum can also be estimated as $`T_0^{n-k}\mu ^{(0)}\simeq ae^{-\lambda _0\sigma (n-k)}x^\sigma `$. Acting on this function with $`T_1`$, a function is created that is proportional to $`x`$ in the vicinity of $`x=0`$, similarly to (7). Therefore $`T_1T_0^{n-k}\mu ^{(0)}`$ yields an asymptotics $`e^{-\kappa _1k}`$ for large $`k`$ under the action of $`T^{k-1}`$. Since for large $`n`$ at least one of $`n-k`$ and $`k`$ is large, we obtain
$$T^n\mu ^{(0)}\simeq ae^{-\lambda _0\sigma n}x^\sigma +\underset{k=1}{\overset{n}{\sum}}𝒪\left(e^{-\lambda _0\sigma (n-k)}e^{-\kappa _1k}x\right).$$
(9)
Consequently, in case $`\lambda _0\sigma <\kappa _1`$ ($`\lambda _0\sigma >\kappa _1`$) the first (last) term dominates for large $`n`$ and small $`x`$, and $`T^n\mu ^{(0)}`$ decays asymptotically as $`e^{-\lambda _0\sigma n}`$ ($`e^{-\kappa _1n}`$). That means there is a critical value $`\sigma _c=\kappa _1/\lambda _0`$ such that for every $`\sigma <\sigma _c`$, in the limit $`n\mathrm{}`$ with normalization in each step, we obtain a conditionally invariant measure $`\mu _\sigma `$ with leading term proportional to $`x^\sigma `$ at $`x=0`$, while in case $`\sigma >\sigma _c`$ we obtain $`\mu _1`$. It is easy to see, applying $`T`$ to $`\mu _1`$, that $`\kappa _1<\lambda _0`$, i. e. $`\sigma _c<1`$. The corresponding escape rates are
$`\kappa _\sigma `$ $`=`$ $`\lambda _0\sigma \text{if}\sigma <\sigma _c,`$ (10)
$`\kappa _\sigma `$ $`=`$ $`\kappa _1\text{if}\sigma >\sigma _c.`$ (11)
So the escape rate in case $`\sigma <\sigma _c`$ is determined by the slope taken at the fixed point $`x=0`$. We consider the system to be critical with respect to $`\mu _\sigma `$ if $`\sigma <\sigma _c`$, since the density of the corresponding natural measure is a Dirac delta function located at the origin, as will be shown in Section 4. A deeper understanding of Eqs. (10), (11) will be gained in the next section by studying the spectrum of $`T`$.
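The dichotomy (10), (11) can be checked numerically. The sketch below (our illustration, not part of the paper) uses the simplest member of this family, the tent map with slope $`2R`$, whose inverse branches are $`f_0^{-1}(x)=x/(2R)`$ and $`f_1^{-1}(x)=1-x/(2R)`$, so that $`\lambda _0=\mathrm{log}2R`$ and $`\kappa _1=\mathrm{log}R`$. The escape rate of an initial measure $`\mu ^{(0)}(x)=x^\sigma `$ is read off from the decay of its mass on the $`n`$-th preimage set of $`[0,1]`$:

```python
import numpy as np

def escape_rate(sigma, R=1.5, n=20):
    """Estimate kappa_sigma for the tent map with slope 2R (lambda_0 = log 2R,
    kappa_1 = log R) from the decay of mu0(I^(n)), mu0(x) = x**sigma, where
    I^(n) is the union of the 2^n n-th preimage intervals of [0,1]."""
    f0inv = lambda x: x / (2.0 * R)          # increasing inverse branch
    f1inv = lambda x: 1.0 - x / (2.0 * R)    # decreasing inverse branch
    a, b = np.array([0.0]), np.array([1.0])  # interval endpoints a_i < b_i
    m_prev, kappa = 1.0, 0.0
    for _ in range(n):
        a, b = (np.concatenate([f0inv(a), f1inv(b)]),
                np.concatenate([f0inv(b), f1inv(a)]))
        m = np.sum(b**sigma - a**sigma)      # mu0 measure of I^(n)
        kappa, m_prev = -np.log(m / m_prev), m
    return kappa

R = 1.5
lam0, kappa1 = np.log(2 * R), np.log(R)
sigma_c = kappa1 / lam0                      # ~0.37 for R = 1.5
print(escape_rate(0.2), 0.2 * lam0)          # sigma < sigma_c: kappa = sigma*lambda_0
print(escape_rate(2.0), kappa1)              # sigma > sigma_c: kappa = kappa_1
```

For $`R=1.5`$ one has $`\sigma _c=\mathrm{log}1.5/\mathrm{log}3\approx 0.37`$, and the two calls land on the two branches of Eqs. (10), (11).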
In the second part of this section we allow singularities of $`f`$ at the maximum points $`\widehat{x}_i=f_i^{-1}(1)`$ with $`i=0,1`$ and/or at $`x=1`$. Namely, the inverse branches behave as
$`f_i^{-1}(x)`$ $`\approx `$ $`\widehat{x}_i+B_i(1-x)^\psi \text{ if }1-x\ll 1,`$ (12)

$`f_1^{-1}(x)`$ $`\approx `$ $`1-Cx^\omega \text{ if }x\ll 1,`$ (13)
where $`\psi \ge 1`$ and $`\omega \ge 1`$. If $`\psi >1`$ ($`\omega >1`$) the map has infinite slope at the maximum points $`\widehat{x}_0,\widehat{x}_1`$ (at the point $`x=1`$). Studying the effect of one iteration on a monotonic function $`\chi (x)`$ that is smooth in $`(0,1)`$ and obeys the scaling $`\chi (x)\approx ax^\sigma `$ at $`x=0`$, we see that Eq. (6) remains valid. $`T_0\chi (x)`$ and $`T_1\chi (x)`$ have leading terms with exponent $`\psi `$ at $`x=1`$,
$$T_i\chi (x)\approx T_i\chi (1)-b_i(1-x)^\psi ,\quad i=0,1,\text{ if }1-x\ll 1.$$
(14)
If $`\chi (x)`$ scales as $`\chi (x)\approx \chi (1)-b(1-x)^\psi `$ near $`x=1`$, then acting with $`T_1`$ on it the result scales as
$$T_1\chi (x)\approx cx^\beta \quad \text{with }\beta =\psi \omega \text{ if }x\ll 1.$$
(15)
Consequently, starting with a $`\chi (x)`$ that satisfies
$`\chi (x)`$ $`\approx `$ $`ax^\sigma \text{ if }x\ll 1,`$ (16)

$`\chi (x)`$ $`\approx `$ $`\chi (1)-b(1-x)^\psi \text{ if }1-x\ll 1`$ (17)
$`T\chi `$ retains these properties if $`\sigma \le \beta `$.
We start with a $`\mu ^{(0)}`$ in the class given by Eqs. (16), (17) and investigate Eq. (8) similarly to the case of maps nonsingular in $`I_0,I_1`$, which corresponds to the case $`\beta =1`$. By Eq. (6) we obtain again that $`T_0^n\mu ^{(0)}\approx ae^{-\lambda _0\sigma n}x^\sigma `$ and $`T_0^{n-k}\mu ^{(0)}\approx ae^{-\lambda _0\sigma (n-k)}x^\sigma `$ near $`x=0`$. According to Eqs. (14) and (15), $`T_1T_0^{n-k}\mu ^{(0)}`$ belongs to the class of functions defined by Eqs. (16), (17) with $`\sigma =\beta `$. Its iterates by $`T^{k-1}`$ decay proportionally to $`e^{-\kappa _\beta k}`$ for large $`k`$. Since for large $`n`$ either $`k`$ or $`n-k`$ is large, we finally estimate $`T^n\mu ^{(0)}`$ using Eq. (8) as
$$T^n\mu ^{(0)}\approx ae^{-\lambda _0\sigma n}x^\sigma +\sum _{k=1}^{n}𝒪\left(e^{-\lambda _0\sigma (n-k)}e^{-\kappa _\beta k}x^\beta \right).$$
(18)
This means that the border value of $`\sigma `$ is now $`\sigma _c=\kappa _\beta /\lambda _0`$. In case $`\sigma <\sigma _c`$ we obtain a conditionally invariant measure $`\mu _\sigma `$ belonging to the class of functions given by Eqs. (16), (17), while in case $`\sigma >\sigma _c`$ we obtain the conditionally invariant measure $`\mu _\beta `$, which belongs to the class given by (16), (17) with $`\sigma =\beta `$. Applying $`T`$ to $`\mu _\beta `$ one can easily see that $`\kappa _\beta <\lambda _0\beta `$, i. e. $`\sigma _c<\beta `$. The corresponding escape rates are
$`\kappa _\sigma `$ $`=`$ $`\lambda _0\sigma \text{if}\sigma <\sigma _c,`$ (19)
$`\kappa _\sigma `$ $`=`$ $`\kappa _\beta \text{if}\sigma >\sigma _c.`$ (20)
Again, the escape rate in case $`\sigma <\sigma _c`$ is determined solely by the slope of the map at $`x=0`$, and the measure belonging to such a $`\sigma `$ represents a critical state. Let us emphasize that while there is a continuum infinity of critical conditionally invariant measures, the noncritical one is unique.
It seems to be impossible to determine the full basin of attraction of the conditionally invariant measures. In the class of functions that are monotonic and smooth in $`(0,1)`$, those belong to the basin of attraction of $`\mu _\sigma `$ with $`\sigma <\sigma _c`$ that
* scale as $`ax^\sigma `$ at $`x=0`$ and not slower than $`\mu ^{(0)}(1)-b(1-x)^{\sigma /\omega }`$ at $`x=1`$,

* or scale faster than $`ax^\sigma `$ at $`x=0`$ and scale as $`\mu ^{(0)}(1)-b(1-x)^{\sigma /\omega }`$ at $`x=1`$.
The basin of attraction of $`\mu _\beta `$ consists of the functions that scale faster than $`ax^{\sigma _c}`$ at $`x=0`$ and faster than $`\mu ^{(0)}(1)-b(1-x)^{\sigma _c/\omega }`$ at $`x=1`$.
Note that the possible singular behavior of the noncritical conditionally invariant measure is determined completely by the map. The behavior of the critical conditionally invariant measures near $`x=1`$ is also determined by the map. For this reason we classify these critical measures by their behavior near $`x=0`$. Their leading term at $`x=0`$ is analytic when $`\sigma `$ is an integer. The number of such measures is $`[\sigma _c]`$, where \[ \] denotes the integer part. If $`\psi =\omega =1`$ then $`[\sigma _c]=0`$.
## III Eigenvalue spectrum
The conditionally invariant measures obtained in the previous section are particular eigenfunctions of the operator $`T`$ (see Eqs. (1,3)), namely, they are monotonic (positive definite) functions. To get further insight into the appearance of the upper value $`\sigma _c`$ of the parameter $`\sigma `$ specifying the critical measures, we study more general eigenfunctions of the operator $`T`$. We allow $`f`$ to have singularities at $`x=\widehat{x}_0`$, $`x=\widehat{x}_1`$ and/or at $`x=1`$, as in the second part of the previous section. However, for the sake of simplicity we also assume here that the inverse branches of the map are analytic, so $`\psi `$ and $`\omega `$ in Eqs. (12), (13) (and thereby $`\beta `$ in (15)) are integers. We also assume, as is typical, that there is a discrete spectrum of the Frobenius-Perron operator in the space of analytic functions. This has been proved for certain one-parameter families of maps .
We shall see that for any value of $`\sigma `$ an expansion in terms of the basis functions $`T_0x^{\sigma +n}`$ and $`x^{\beta +n}`$ with $`n=0,1,\mathrm{}`$ is convenient for finding eigenfunctions. Therefore we start from the form
$$\varphi =\sum _{n=0}^{N(\sigma )-1}c_nT_0x^{\sigma +n}+\sum _{n=0}^{\infty }d_nx^{\beta +n},$$
(21)
where $`N(\sigma )=\beta -\sigma `$ if $`\sigma `$ is integer, and $`N(\sigma )=\infty `$ otherwise. The limitation by $`N(\sigma )`$ is necessary if $`\sigma `$ is integer, since $`T_0x^{\sigma +n}`$ can be expanded on the basis functions $`x^{\beta +l}`$ if $`\sigma +n`$ is an integer greater than or equal to $`\beta `$. Note that
$$T_0x^{\sigma +n}=\sum _{m=0}^{\infty }g_{mn}x^{\sigma +m}.$$
(22)
It also follows that
$`g_{mn}`$ $`=`$ $`e^{-\lambda _0(\sigma +m)}\text{ if }m=n,`$ (23)

$`g_{mn}`$ $`=`$ $`0\text{ if }m<n,`$ (24)
and the basis functions in the first sum of Eq. (21) are transformed by $`T_0`$ as
$$T_0T_0x^{\sigma +n}=\sum _{m=0}^{\infty }g_{mn}T_0x^{\sigma +m}.$$
(25)
The transformation by $`T_1`$ can be obtained similarly to Eq. (15),
$$T_1T_0x^{\sigma +n}=\sum _{m=0}^{\infty }H_{mn}x^{\beta +m}.$$
(26)
Clearly, the basis functions in the second sum of Eq. (21) are transformed by $`T`$ in the way
$$Tx^{\beta +n}=\sum _{m=0}^{\infty }Q_{mn}x^{\beta +m}.$$
(27)
As seen from Eqs. (25), (26) and (27) the iteration of the vectors $`𝐜,𝐝`$ formed from the expansion coefficients in Eq. (21) under the action of $`T`$ can be described as
$$T\left(\begin{array}{c}𝐜\\ 𝐝\end{array}\right)=\left(\begin{array}{cc}𝐆& \mathrm{𝟎}\\ 𝐇& 𝐐\end{array}\right)\left(\begin{array}{c}𝐜\\ 𝐝\end{array}\right),$$
(28)
where $`𝐆`$ is a matrix constructed from the coefficients $`g_{mn}`$ but with truncation to size $`N(\sigma )\times N(\sigma )`$ if $`\sigma `$ is integer.
In the case $`𝐜=0`$ only $`𝐐`$ is in effect, so the eigenvalue problem yields the eigenvalues $`\mathrm{\Lambda }_{\beta ,n}`$ of $`𝐐`$ and corresponding eigenfunctions $`\varphi _{\beta ,n}`$, whose expansion starts with $`x^\beta `$. On the other hand, since $`𝐆`$ is a triangular matrix, its leading eigenvalue is $`e^{-\lambda _0\sigma }`$ (see Eqs. (23), (24)), which belongs to an eigenvector denoted by $`𝐜_\sigma `$. Finally, an eigenfunction $`\varphi _\sigma `$ of $`T`$ with an $`x^\sigma `$ scaling at $`x=0`$ can be obtained in the form of (21) with eigenvalue $`\mathrm{\Lambda }_\sigma `$ and coefficients
$$𝐜=𝐜_\sigma ,\quad 𝐝=(𝐠_{00}-𝐐)^{-1}\mathrm{𝐇𝐜}_\sigma ,$$
(29)
where $`\mathrm{\Lambda }_\sigma =𝐠_{00}=e^{-\lambda _0\sigma }`$, except in the special cases when $`\sigma `$ is an integer greater than or equal to $`\beta `$ or $`\mathrm{\Lambda }_\sigma `$ coincides with an eigenvalue $`\mathrm{\Lambda }_{\beta ,n}`$ of $`𝐐`$. Thereby, besides the assumed discrete spectrum of analytic eigenfunctions, we have obtained an almost continuous spectrum of $`T`$. Most of these eigenfunctions are nonanalytic due to the noninteger value of $`\sigma `$.
The next important question is how the conditionally invariant measures can be selected from these eigenfunctions. A measure should be nonnegative for any set, so the condition is that $`\varphi (x)`$ should be monotonic. This is ensured when it can be generated starting from a monotonic $`\varphi ^{(0)}(x)`$ as the limit of the iteration $`T^n\varphi ^{(0)}(x)`$, normalized in each step. We can study this condition using the above results. After one iteration of a general monotonic $`\varphi ^{(0)}`$ with $`𝒪(x^\sigma )`$ scaling at $`x=0`$, the iterate $`T\varphi ^{(0)}`$ is a linear combination of the basis functions $`T_0x^{\sigma +n}`$ and $`x^{\beta +n}`$ with $`n\ge 0`$. Therefore the limit of infinite iterations yields the eigenfunction that has the largest eigenvalue among the eigenfunctions that can be expanded on this basis. That is the eigenfunction $`\varphi _\sigma `$ if $`\mathrm{\Lambda }_\sigma =e^{-\lambda _0\sigma }>\mathrm{\Lambda }_{\beta ,0}`$, i. e. $`\sigma <-\mathrm{log}(\mathrm{\Lambda }_{\beta ,0})/\lambda _0`$. On the other hand, when starting with an initial $`\varphi ^{(0)}`$ with $`𝒪(x^\beta )`$ scaling at $`x=0`$, we do not get terms with any smaller exponent, and thereby we obtain the eigenfunction $`\varphi _{\beta ,0}`$. Therefore the correspondence between these eigenfunctions and the conditionally invariant measures, and between the eigenvalues and escape rates, can be described as
$`\varphi _\sigma =\mu _\sigma `$ $`,`$ $`\mathrm{\Lambda }_\sigma =e^{-\kappa _\sigma }\text{ if }\sigma <\sigma _c=\kappa _\beta /\lambda _0,`$ (30)

$`\varphi _{\beta ,0}=\mu _\beta `$ $`,`$ $`\mathrm{\Lambda }_{\beta ,0}=e^{-\kappa _\beta }.`$ (31)
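For the tent map with slope $`2R`$ (linear, hence analytic, inverse branches with $`\psi =\omega =\beta =1`$) the matrix $`𝐐`$ can be written down in closed form, since $`Tx^{1+n}=(x/(2R))^{1+n}+1-(1-x/(2R))^{1+n}`$. The sketch below (ours, not from the paper) builds a finite truncation and confirms that $`𝐐`$ is upper triangular with leading eigenvalue $`\mathrm{\Lambda }_{\beta ,0}=1/R=e^{-\kappa _1}`$, in accordance with Eq. (31):

```python
import numpy as np
from math import comb

def Q_matrix(R, M=12):
    """Truncated matrix of T on the basis x^(1+n), n = 0..M-1, for the
    tent map with slope 2R (beta = 1). With s = 1/(2R):
    T x^(1+n) = s^(1+n) x^(1+n) + sum_j C(n+1,j) (-1)^(j+1) s^j x^j."""
    s = 1.0 / (2.0 * R)
    Q = np.zeros((M, M))
    for n in range(M):
        for m in range(n + 1):
            Q[m, n] = (-1) ** m * comb(n + 1, m + 1) * s ** (m + 1)
        Q[n, n] += s ** (n + 1)          # diagonal contribution of T_0
    return Q

R = 1.5
Q = Q_matrix(R)
eigs = np.sort(np.abs(np.linalg.eigvals(Q)))[::-1]
Lambda_beta0 = eigs[0]                    # leading eigenvalue of Q
kappa1 = -np.log(Lambda_beta0)            # escape rate of the noncritical measure
print(kappa1, np.log(R))                  # agree: kappa_1 = log R

# For sigma < sigma_c the eigenvalue e^(-lambda_0 sigma) of G exceeds Lambda_beta0:
lam0 = np.log(2 * R)
sigma_c = kappa1 / lam0
print(np.exp(-lam0 * 0.5 * sigma_c) > Lambda_beta0)   # True
print(np.exp(-lam0 * 2.0 * sigma_c) > Lambda_beta0)   # False
```

Because the truncation is upper triangular, its eigenvalues sit on the diagonal, so no truncation error enters the leading eigenvalue here.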
The spectrum of the Frobenius-Perron operator allowing singular eigenfunctions has been studied for piecewise linear maps by MacKernan and Nicolis . Except for the tent map, they considered eigenfunctions singular at internal points of the interval. They pointed out the existence of the continuous parts of the spectrum, but did not raise the question of the possible monotonicity of the eigenfunctions for a region of eigenvalues, which has been our main concern here.
Finally we note that singular eigenfunctions of the generalized Frobenius-Perron operator have been of importance in the thermodynamic formalism for describing phase-transition-like phenomena, where the ”temperature” has played the role of the control parameter .
## IV Conjugation
It can be seen that all critical systems can be brought to the same form by the application of a smooth conjugation. For this purpose a conjugation function $`u`$ has to be introduced, which is smooth everywhere except at $`x=0`$ and $`x=1`$, where it may be singular. These singularities can be characterized by the exponents $`\eta `$ and $`\alpha `$:
$`u(x)`$ $`\approx `$ $`x^\eta \text{ if }x\ll 1,`$ (32)

$`u(x)`$ $`\approx `$ $`1-(1-x)^\alpha \text{ if }1-x\ll 1.`$ (33)
By definition, the conjugation transforms the map and the measure in the following way:
$`\stackrel{~}{f}_i^1(x)`$ $`=`$ $`u\left(f_i^1\left(u^1(x)\right)\right),`$ (34)
$`\stackrel{~}{\mu }(x)`$ $`=`$ $`\mu \left(u^1(x)\right).`$ (35)
To see how the conjugation changes the exponents important from the point of view of criticality, transformation (35) has to be applied to the conditionally invariant measure, and transformation (34) to the branches of the inverse map. The conditionally invariant measure $`\mu `$ is in the class of functions given by Eqs. (16,17) and the branches of the inverse map are described in Eqs. (12,13). The conjugation results in the following transformation rules for the characterizing exponents:
$`\stackrel{~}{\psi }`$ $`=`$ $`{\displaystyle \frac{\psi }{\alpha }},`$ (36)
$`\stackrel{~}{\omega }`$ $`=`$ $`\alpha {\displaystyle \frac{\omega }{\eta }},`$ (37)
$`\stackrel{~}{\lambda }_0`$ $`=`$ $`\lambda _0\eta ,`$ (38)
$`\stackrel{~}{\sigma }`$ $`=`$ $`{\displaystyle \frac{\sigma }{\eta }}.`$ (39)
From these transformation rules it is clearly seen that by the application of an appropriately chosen $`u`$ any two of the three quantities $`\psi `$, $`\omega `$ and $`\sigma `$ can be set to unity. This means that all critical systems can be brought to the same form, which shows that criticality is the same regardless of whether it is caused by the singularity of the measure or by the singularity of the map at $`x=\widehat{x}_0,x=\widehat{x}_1`$ or $`x=1`$.
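The transformation rules (36)-(39) are easy to verify numerically for a power-law conjugation. The sketch below (our illustration; the tent-map branch $`f_0^{-1}(x)=x/(2R)`$ and $`u(x)=x^\eta `$ are just convenient choices) checks Eqs. (38) and (39), and in particular that the product $`\lambda _0\sigma `$, i.e. the critical escape rate, is left invariant:

```python
import numpy as np

# Conjugate the tent map's left inverse branch f0inv(x) = x/(2R) by u(x) = x**eta
R, eta = 1.5, 0.4
u = lambda x: x**eta
uinv = lambda x: x ** (1.0 / eta)
f0inv = lambda x: x / (2.0 * R)
f0inv_conj = lambda x: u(f0inv(uinv(x)))    # Eq. (34)

lam0 = np.log(2 * R)
# slope exponent of the conjugated inverse branch at x = 0 (branch is linear here):
x = 1e-6
lam0_tilde = -np.log(f0inv_conj(x) / x)
print(lam0_tilde, eta * lam0)               # Eq. (38): lam0_tilde = lam0 * eta

# a measure mu(x) = x**sigma transforms into x**(sigma/eta), Eqs. (35), (39):
sigma = 0.3
mu_tilde = lambda x: uinv(x) ** sigma
x1, x2 = 1e-4, 1e-5
sigma_tilde = np.log(mu_tilde(x1) / mu_tilde(x2)) / np.log(x1 / x2)
print(sigma_tilde, sigma / eta)             # exponents agree

# the critical escape rate kappa = lambda_0 * sigma is conjugation invariant:
print(lam0_tilde * sigma_tilde, lam0 * sigma)
```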
It is worth noting that any conditionally invariant measure can be chosen as the conjugating function. The conjugation in this case results in an equivalent map which has the Lebesgue measure as a conditionally invariant one. Such maps will be called Lebesgue maps in the following. The equivalent Lebesgue map will be denoted by $`\stackrel{~}{f}^{(\sigma )}`$ if the conditionally invariant measure chosen for the conjugation is $`\mu _\sigma `$, i.e. the measure decaying at $`x=0`$ with the exponent $`\sigma `$. Naturally, the Lebesgue measure is not singular at $`x=0`$, so $`\stackrel{~}{\sigma }=1`$. Moreover, since the conditionally invariant measure approaches its value at $`x=1`$ proportionally to $`(1-x)^\psi `$, the conjugation sets the exponent $`\stackrel{~}{\psi }`$ to unity too, so after the conjugation both $`\stackrel{~}{\sigma }`$ and $`\stackrel{~}{\psi }`$ are equal to unity. However, if criticality is present, i.e. $`\sigma <\sigma _c`$, then $`\stackrel{~}{\omega }`$ will be greater than one, i.e. the equivalent Lebesgue map is singular at $`x=1`$. Obviously, the map has as many equivalent Lebesgue maps as conditionally invariant measures.
## V Properties of the natural measure
During the investigations of the piecewise parabolic map it was found that the natural measure of the fixed point at $`x=0`$ is positive when the Lebesgue measure was iterated as the initial measure . Numerical results suggested, and analytical considerations later supported, that this measure is not only positive but is equal to unity . We shall prove here that this is quite a general property: for any map with a critical conditionally invariant measure, the natural measure of the fixed point at $`x=0`$ is equal to unity.
Since $`f_0^{-1}`$ has a finite slope $`e^{-\lambda _0}`$ at $`x=0`$,
$$C_n(x)=\frac{f_0^{-n}(x)}{e^{-\lambda _0n}x}\underset{n\to \infty }{\longrightarrow }C_\infty (x),$$
(40)
where $`0<C_\infty (x)<\infty `$. Furthermore, the critical conditionally invariant measure $`\mu `$ is asymptotically proportional to $`x^\sigma `$, that is
$$m(x)=\frac{\mu (x)}{x^\sigma }\underset{x\to 0}{\longrightarrow }M,$$
(41)
where $`\sigma <\sigma _c`$ and $`M`$ is finite and positive. We introduce the following notation for the set of the preimages of the unit interval $`I=I_0^{(0)}=[0,1]`$. The first two preimage intervals are $`I_0^{(1)}=f_0^{-1}(I)`$ and $`I_1^{(1)}=f_1^{-1}(I)`$. Similarly, the $`(n+1)`$-th preimages can be generated from the $`n`$-th ones as $`I_i^{(n+1)}=f_0^{-1}(I_i^{(n)})`$ and $`I_{2^n+i}^{(n+1)}=f_1^{-1}(I_i^{(n)})`$. The set of all the $`n`$-th preimages of $`I`$ is denoted by $`I^{(n)}=\bigcup _{i=0}^{2^n-1}I_i^{(n)}`$. To determine the natural invariant measure of a single point, the fixed point located at $`x=0`$, a series of intervals containing this point must be found whose limit is the fixed point itself. The natural measure of the fixed point is equal to the limit of the series of the natural measures of these intervals. The series of the leftmost intervals of the $`k`$-th interval sets is an appropriate and convenient choice, so the natural measure of $`x=0`$ is
$$\nu (\{0\})=\underset{k\to \infty }{lim}\nu (I_0^{(k)})=\underset{k\to \infty }{lim}\underset{n\to \infty }{lim}\frac{\mu \left(I_0^{(k)}\cap I^{(n)}\right)}{\mu \left(I^{(n)}\right)}.$$
(42)
Since $`\mu \left(I_0^{(k)}\cap I^{(n)}\right)>\mu \left(I_0^{(n)}\right)`$ for $`k<n`$, we estimate the natural measure from below by keeping only the leftmost interval. From the criticality and Equations (41) and (40) it follows that
$`\nu (\{0\})`$ $`\ge `$ $`\underset{n\to \infty }{lim}{\displaystyle \frac{\mu (f_0^{-n}(1))}{\mu \left(I^{(n)}\right)}}`$ (43)

$`=`$ $`\underset{n\to \infty }{lim}{\displaystyle \frac{m\left(C_n(1)e^{-\lambda _0n}\right)C_n(1)^\sigma e^{-\lambda _0n\sigma }}{e^{-\kappa n}}}`$ (44)

$`=`$ $`MC_\infty (1)^\sigma >0,`$ (45)
so the positivity of the natural measure of the fixed point is proven.
It can also be shown that this measure is equal to unity. For this purpose the features of the conjugation to an equivalent Lebesgue map have to be used. Let us choose as the conjugating function the noncritical conditionally invariant measure $`\mu _\beta `$, where $`\beta =\psi \omega `$. The conjugation results in the map $`\stackrel{~}{f}^{(\beta )}`$, which is characterized by the exponents $`\stackrel{~}{\psi }=1`$ and $`\stackrel{~}{\omega }=\beta `$ . The index $`(\beta )`$ will be omitted in the following. Any conditionally invariant measure $`\mu _\sigma `$ transforms into $`\stackrel{~}{\mu }_{\stackrel{~}{\sigma }}`$, where $`\stackrel{~}{\sigma }=\sigma /\beta `$. Neither $`\mu _\beta `$ nor its conjugated pair, the Lebesgue measure $`\stackrel{~}{\mu }_1`$, is critical, so $`\stackrel{~}{\sigma }<\stackrel{~}{\sigma }_c<1`$ must hold for any critical measure $`\stackrel{~}{\mu }_{\stackrel{~}{\sigma }}`$. Since $`\stackrel{~}{\mu }_1`$ is the Lebesgue measure, the total length $`\stackrel{~}{\mathrm{\ell }}\left(\stackrel{~}{I}^{(n)}\right)`$ of the $`n`$-th preimage set of the unit interval $`I`$ is equal to its measure with respect to $`\stackrel{~}{\mu }_1`$. This fact makes the exact determination of $`\stackrel{~}{\mathrm{\ell }}\left(\stackrel{~}{I}^{(n)}\right)`$ possible. Since $`\mu (f^{-n}(A))=T^n\mu (A)=e^{-\kappa n}\mu (A)`$ for any set $`A\subseteq I`$ and any conditionally invariant measure $`\mu `$,
$$\stackrel{~}{\mathrm{\ell }}\left(\stackrel{~}{I}^{(n)}\right)=\stackrel{~}{\mu }_1\left(\stackrel{~}{f}^{-n}(\stackrel{~}{I})\right)=e^{-\stackrel{~}{\kappa }_1n}=e^{-\kappa _\beta n}$$
(46)
holds. Now we can calculate the natural measure concentrated at the fixed point $`x=0`$ for the critical conditionally invariant measures $`\stackrel{~}{\mu }_{\stackrel{~}{\sigma }}`$, $`\sigma <\sigma _c`$. For this purpose we can use Eq. (42). Since, provided that $`k<n`$, $`\stackrel{~}{I}_0^{(k)}=[0,\stackrel{~}{f}_0^{-k}(1)]`$ and $`\stackrel{~}{I}^{(n)}=\left([0,\stackrel{~}{f}_0^{-k}(1)]\cap \stackrel{~}{I}^{(n)}\right)\cup \left([\stackrel{~}{f}_0^{-k}(1),1]\cap \stackrel{~}{I}^{(n)}\right)`$, we can write that
$`\stackrel{~}{\nu }_{\stackrel{~}{\sigma }}(\{0\})`$ $`=`$ $`\underset{k\to \infty }{lim}\underset{n\to \infty }{lim}{\displaystyle \frac{\stackrel{~}{\mu }_{\stackrel{~}{\sigma }}(\stackrel{~}{I}_0^{(k)}\cap \stackrel{~}{I}^{(n)})}{\stackrel{~}{\mu }_{\stackrel{~}{\sigma }}(\stackrel{~}{I}^{(n)})}}`$ (47)

$`=`$ $`\underset{k\to \infty }{lim}{\displaystyle \frac{1}{1+\underset{n\to \infty }{lim}{\displaystyle \frac{\stackrel{~}{\mu }_{\stackrel{~}{\sigma }}([\stackrel{~}{f}_0^{-k}(1),1]\cap \stackrel{~}{I}^{(n)})}{\stackrel{~}{\mu }_{\stackrel{~}{\sigma }}([0,\stackrel{~}{f}_0^{-k}(1)]\cap \stackrel{~}{I}^{(n)})}}}}.`$ (48)
The measure $`\stackrel{~}{\mu }_{\stackrel{~}{\sigma }}([0,\stackrel{~}{f}_0^{-k}(1)]\cap \stackrel{~}{I}^{(n)})`$ can be treated similarly to $`\mu (f_0^{-n}(1))`$ in Equation (43):
$`\stackrel{~}{\mu }_{\stackrel{~}{\sigma }}([0,\stackrel{~}{f}_0^{-k}(1)]\cap \stackrel{~}{I}^{(n)})\ge \stackrel{~}{\mu }_{\stackrel{~}{\sigma }}(\stackrel{~}{I}_0^{(n)})`$ (49)

$`=m\left(\stackrel{~}{C}_n(1)e^{-\stackrel{~}{\lambda }_0n}\right)\stackrel{~}{C}_n(1)^{\stackrel{~}{\sigma }}e^{-\stackrel{~}{\lambda }_0\stackrel{~}{\sigma }n}.`$ (50)
The expression $`\stackrel{~}{\mu }_{\stackrel{~}{\sigma }}([\stackrel{~}{f}_0^{-k}(1),1]\cap \stackrel{~}{I}^{(n)})`$ is the measure of an interval set located in $`[\stackrel{~}{f}_0^{-k}(1),1]`$ with total length not greater than $`e^{-\stackrel{~}{\kappa }_1n}`$, which is the length of the $`n`$-th preimage interval set. This measure is not greater than the maximum of the measure of such interval sets. Since for any fixed value of $`k`$ there exists a finite upper bound $`\stackrel{~}{\mu }_{\stackrel{~}{\sigma },\mathrm{max}}^{}(k)`$ of the derivative of the conditionally invariant measure in $`[\stackrel{~}{f}_0^{-k}(1),1]`$,
$$\stackrel{~}{\mu }_{\stackrel{~}{\sigma }}\left([\stackrel{~}{f}_0^{-k}(1),1]\cap \stackrel{~}{I}^{(n)}\right)\le \stackrel{~}{\mu }_{\stackrel{~}{\sigma },\mathrm{max}}^{}(k)e^{-\stackrel{~}{\kappa }_1n}.$$
(51)
Using inequalities (49), (51) and the fact that $`\stackrel{~}{\lambda }_0\stackrel{~}{\sigma }<\stackrel{~}{\kappa }_1`$ due to the criticality, the limit of the fraction in the denominator of the right hand side of Eq. (47) is equal to zero, so
$$\stackrel{~}{\nu }_{\stackrel{~}{\sigma }}(\{0\})=1$$
(52)
whenever $`\stackrel{~}{\mu }_{\stackrel{~}{\sigma }}`$ is a critical conditionally invariant measure. Now we prove that not only the conjugated natural measure but also the original one is concentrated at the fixed point. We have already seen that for any fixed $`k`$
$$\underset{n\to \infty }{lim}\frac{\stackrel{~}{\mu }_{\stackrel{~}{\sigma }}(\stackrel{~}{I}_0^{(k)}\cap \stackrel{~}{I}^{(n)})}{\stackrel{~}{\mu }_{\stackrel{~}{\sigma }}(\stackrel{~}{I}^{(n)})}=1.$$
(53)
Since the conjugation does not change the measure of any single interval, i.e. $`\mu _\sigma (I_i^{(n)})=\stackrel{~}{\mu }_{\stackrel{~}{\sigma }}(\stackrel{~}{I}_i^{(n)})`$, the same equation applies to the non-conjugated map, which means
$$\nu _\sigma (\{0\})=1$$
(54)
for any critical measures.
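The concentration of the natural measure at the fixed point can also be seen in a direct simulation (our sketch, not part of the paper): sampling initial points from a measure $`x^\sigma `$ with $`\sigma <\sigma _c`$ for the tent map with slope $`2R`$, the initial conditions of the trajectories that survive many iterations accumulate in an arbitrarily small neighborhood of $`x=0`$:

```python
import numpy as np

rng = np.random.default_rng(1)
R, sigma, n = 1.5, 0.2, 20            # sigma < sigma_c = log R / log 2R ~ 0.37
N = 2_000_000

x0 = rng.random(N) ** (1.0 / sigma)   # samples of the measure mu0(x) = x**sigma
x = x0.copy()
alive = np.ones(N, dtype=bool)
for _ in range(n):
    x = np.where(x <= 0.5, 2 * R * x, 2 * R * (1 - x))  # tent map, slope 2R
    alive &= (x <= 1.0)               # points mapped above 1 escape

# fraction of surviving *initial* points inside the small interval [0, (2R)^-3]
delta = (2 * R) ** -3.0
frac = np.mean(x0[alive] < delta)
print(np.sum(alive), frac)            # frac approaches 1 as n grows
```

With these parameters the surviving mass decays roughly as $`e^{-\sigma \lambda _0n}`$, while the part of it outside $`[0,\delta ]`$ is bounded by the faster-decaying length $`e^{-\kappa _1n}`$ of the preimage set, which is the mechanism behind Eqs. (47)-(52).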
By using this critical measure as the conjugating function one can get the equivalent Lebesgue map $`\stackrel{~}{f}^{(\sigma )}`$, where the Lebesgue measure represents the critical state. The density of its corresponding natural measure is $`\delta (x+0)`$. In this Lebesgue map the density of the measure of the coarse-grained repeller $`I^{(n)}`$ is given by the $`n`$-th iterate of $`P^{(0)}(x)=1`$ by the adjoint of the Frobenius-Perron equation:
$$L^+g=\{\begin{array}{ccc}g(\stackrel{~}{f}^{(\sigma )}(x))\hfill & \text{ if }\hfill & \stackrel{~}{f}^{(\sigma )}(x)\in [0,1],\hfill \\ 0\hfill & \text{ if }\hfill & \stackrel{~}{f}^{(\sigma )}(x)\notin [0,1].\hfill \end{array}$$
(55)
This equation has $`\delta (x+0)`$ as an eigenfunction with eigenvalue $`e^{\kappa _\sigma }=\frac{d}{dx}\stackrel{~}{f}^{(\sigma )}(x)|_{x=0}`$. Our result (52) amounts to proving that $`L^{+n}P^{(0)}(x)`$ converges to $`\delta (x+0)`$ when $`n\to \infty `$. This convergence property has previously been assumed, supported by numerical calculations, and some of its consequences have also been exploited .
From Equation (54) it follows that $`\lambda _0=\lambda `$, where $`\lambda `$ is the average Liapunov exponent, and that the Kolmogorov-Sinai entropy $`K`$ is zero for the natural measure in the critical case. The equations
$$\kappa =\sigma \lambda ,K=0$$
(56)
valid for the critical states are the counterparts of the generalized Pesin relation
$$\kappa =\lambda -K$$
(57)
valid for noncritical states, obtained by Kantz and Grassberger .
Finally, we note that the map $`f`$ can be considered to be the reduced map of a translationally invariant map of the real axis . Then the diffusion coefficient can be written as an average over the natural measure of the reduced map . In the critical state this obviously results in a zero diffusion coefficient. In the noncritical state there are important connections between the diffusion and formula (57) .
## VI Examples
In this section we demonstrate the properties we have found, along with further discussion. As an example, consider the map whose inverse branches are
$`f_0^{-1}(x)`$ $`=`$ $`{\displaystyle \frac{1+d}{2R}}x-{\displaystyle \frac{d}{4R^2}}x^2,`$ (58)

$`f_1^{-1}(x)`$ $`=`$ $`1-{\displaystyle \frac{1-d}{2R}}x-{\displaystyle \frac{d}{4R^2}}x^2,`$ (59)
where $`R>1`$ and $`-1<d\le 1`$ must hold. The case $`d=0`$ corresponds to the tent map, and the eigenvalue $`\mathrm{\Lambda }_\sigma =(2R)^{-\sigma }`$ has already been given by . Eq. (30) for $`d=0`$ shows in which region one can connect this eigenvalue with the escape rate. The map is conjugated to the symmetric piecewise parabolic map . For the sake of simplicity we limit our discussion to non-negative values of $`d`$. Substituting the inverse branches into the Frobenius-Perron equation, one can immediately see that the Lebesgue measure is a conditionally invariant measure with the escape rate $`\kappa _1=\mathrm{log}R`$, independently of the value of $`d`$. Similarly, the exponent $`\psi `$ is equal to unity for any $`0\le d\le 1`$. However, $`\omega `$ and consequently $`\beta =\psi \omega `$ have two possible values depending on $`d`$. This makes it sensible to analyse this map in two parts, according to the value of $`\beta `$. Let us start with $`0\le d<1`$, when $`\beta =\omega =1`$. The value of $`\kappa _\beta =\kappa _1=\mathrm{log}R`$ is exactly known, therefore
$$\sigma _c=\frac{\kappa _\beta }{\lambda _0}=\frac{\mathrm{log}R}{\mathrm{log}R+\mathrm{log}\frac{2}{1+d}}.$$
(60)
Numerical results for $`\kappa _\sigma `$ fit Eqs. (10), (11), as seen in Fig. 2. Fig. 3 shows some of the numerically obtained conditionally invariant densities.
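The conditional invariance of the Lebesgue measure for every $`d`$ can be verified directly. The check below (ours; it assumes the usual action of $`T`$ on cumulative measures, $`(T\mu )(x)=\mu (f_0^{-1}(x))+\mu (1)-\mu (f_1^{-1}(x))`$, with an increasing $`f_0^{-1}`$ and a decreasing $`f_1^{-1}`$) confirms $`T\mu _1=e^{-\mathrm{log}R}\mu _1`$ on a grid:

```python
import numpy as np

# For the map of Eqs. (58), (59), conditional invariance of the Lebesgue
# measure mu1(x) = x means f0inv(x) + 1 - f1inv(x) = x/R for all x,
# i.e. kappa_1 = log R independently of d.
R = 1.5
x = np.linspace(0.0, 1.0, 1001)
for d in (0.0, 0.5, 1.0):
    f0inv = (1 + d) / (2 * R) * x - d / (4 * R**2) * x**2        # Eq. (58)
    f1inv = 1 - (1 - d) / (2 * R) * x - d / (4 * R**2) * x**2    # Eq. (59)
    Tmu = f0inv + 1.0 - f1inv
    assert np.allclose(Tmu, x / R)   # T mu1 = (1/R) mu1 exactly, for every d
print("kappa_1 = log R =", np.log(R))
# Eq. (60) then gives the critical border, e.g. for d = 0.5:
d = 0.5
print("sigma_c =", np.log(R) / (np.log(R) + np.log(2 / (1 + d))))  # ~0.585
```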
In the case $`d=1`$ obviously $`\beta =\omega =2`$. Then $`\kappa _\beta `$ is not known exactly, but it can be determined numerically. A numerical calculation for $`R=1.5`$ gave $`\kappa _\beta \approx 0.60`$ and $`\sigma _c=\kappa _\beta /\mathrm{log}R\approx 1.48`$. In accordance with the results of Section 2, conditionally invariant measures were found for $`\sigma <\sigma _c`$, and the values of $`\kappa _\sigma `$ fit Eqs. (19), (20) (see Figs. 2 and 3). However, a critical slowing down of the convergence is seen near $`\sigma _c`$.
Another map was constructed for which $`\psi =1`$ and $`\beta =\omega =4`$. Its inverse branches are
$`f_0^{-1}(x)`$ $`=`$ $`{\displaystyle \frac{x}{R}}-{\displaystyle \frac{x^4}{Q}},`$ (61)

$`f_1^{-1}(x)`$ $`=`$ $`1-{\displaystyle \frac{x^4}{Q}},`$ (62)
where $`R>1`$ and $`Q\ge 4R`$. In the numerical calculations $`R=1.25`$ and $`Q=40`$ were used. The Lebesgue measure is again one of the conditionally invariant measures, with escape rate $`\kappa _1=\mathrm{log}R`$. From the numerical value $`\kappa _\beta \approx 0.730`$ it follows that $`\sigma _c=\kappa _\beta /\mathrm{log}R\approx 3.27`$. Numerical values of $`\kappa _\sigma `$ are compared to the theoretical values in Fig. 4. The presence of a conditionally invariant measure that is smooth at least in the interior of $`[0,1]`$ was checked numerically at several values of $`\sigma <\sigma _c`$ and at $`\sigma =\beta `$. Among them, the ones with integer $`\sigma `$ have an analytic leading term at $`x=0`$. The densities of the latter, together with a singular one, are seen in Fig. 5.
###### Acknowledgements.
This work has been supported in part by the Hungarian National Scientific Research Foundation under Grant Nos. OTKA T017493 and OTKA F17166.
# Supernovae Rates: A Cosmic History
## 1 Introduction
The interest in the cosmic history of supernovae stems from several sources. First, core-collapse supernovae (types II and Ib/c) directly follow the star formation history, and some of them may be related to gamma-ray bursts. Secondly, Type Ia supernovae (SNe Ia) are being used as the primary standard candle sources for the determination of the cosmological parameters $`\mathrm{\Omega }`$ and $`\mathrm{\Lambda }`$ (e.g. Perlmutter et al. 1999; Riess et al. 1998). Thirdly, a comparison of SNe Ia rates (for the different models of their progenitors) with observations may shed light on both the star formation history and on the nature of the progenitors (e.g. Yungelson & Livio 1998; Madau 1998a). Finally, the counts of distant SNe could be used to constrain cosmological parameters (e.g. Ruiz-Lapuente & Canal 1998). As a consequence of the above, studies of cosmological SNe are among the primary targets for the Next Generation Space Telescope (NGST), which presumably will be able to detect, with proper filters, virtually all the SNe up to a redshift $`z\sim 8`$ (see e.g. http://ngst.gsfc.nasa.gov/Images/sn.GIF).
In the present study we combine data on the precursors of SNe Ia in our Galaxy with data on the cosmic star formation rate in an attempt to analyse the frequency of events as a function of redshift.
In view of the uncertainties that still exist concerning the cosmic star formation history, we use two types of inputs to characterize the star formation rate (SFR). In the first, we use profiles inferred from deep observations (e.g. Madau, Panagia & Della Valle 1998). In the second, we use a step-wise SFR which includes a burst of star formation and a subsequent stage of a lower SFR. In the latter case the star formation history is parameterized by the duration of the star burst phase and by the fraction of the total mass of the stellar population that is formed in the burst.
The different scenarios for SNe Ia are briefly discussed in §2. The basic assumptions and model computations are presented in §3 and §4, and a discussion and conclusions follow.
## 2 The Progenitors of SNe Ia
The observed SNe Ia do represent somewhat of a mixture of events, with a majority of “normal” ones and a small minority of “peculiar” ones (see e.g. Branch 1998, and references therein). A more moderate diversity is present even among the “normals.” There exist certain relations between the absolute magnitudes and light curve decline rates and the morphological types of the host galaxies (e.g. Branch, Romanishin, & Baron 1996; Hamuy et al. 1996). This may suggest a possible diversity among the progenitors of SNe Ia (see e.g. review by Livio 1999).
On the theoretical side, SNe Ia are very probably thermonuclear disruptions of accreting white dwarfs. Two classes of explosive events are generally considered in the literature. The first involves central ignition of carbon when the accreting white dwarf reaches the Chandrasekhar mass $`M_{Ch}\simeq 1.4\mathrm{M}_{\odot }.`$ In the second, the ignition of the accreted helium layer on top of the white dwarf induces a compression of the core which leads to the ignition of carbon at sub-Chandrasekhar masses (these are known as edge-lit detonations or ELDs). Parameterized models for the events in the former class are able to reproduce most of the typical features of SNe Ia, while ELD models encounter a few serious problems when confronted with observations (see e.g. Höflich & Khokhlov 1996; Nugent et al. 1997; Branch 1998; Livio 1999 for a discussion and references). On the other hand, binary evolution theory clearly predicts situations in which helium may accumulate on top of white dwarfs (see e.g. Branch et al. 1995; Yungelson & Tutukov 1997). It is presently not entirely clear whether ELDs indeed do not occur in nature, or whether they are responsible for a subset of the events (e.g. the subluminous ones). However, both the existing diversity in the observed properties of SNe Ia and the uncertainties still involved in theoretical models suggest that it is worthwhile to explore all the possible options.
The occurrence rate of SNe Ia inferred for our Galaxy is $`10^{-3}`$ yr<sup>-1</sup> (Cappellaro et al. 1997). There are three evolutionary channels for which, according to population synthesis calculations, the realization frequency of potentially explosive configurations in the disk of the Milky Way is at least at the level of $`10^{-4}`$ yr<sup>-1</sup>. These are:
A. Mergers of double degenerates resulting in the formation of an object with $`M\gtrsim M_{Ch}`$ and central C ignition. This channel involves the accretion of carbon-oxygen.
B. Accretion of helium from a nondegenerate helium-rich companion at a rate of $`\dot{M}\simeq 10^{-8}\ \mathrm{M}_{\odot }\,\mathrm{yr}^{-1}`$, resulting in the accumulation of a He layer of (0.10–0.15) $`\mathrm{M}_{\odot }`$ and an ELD.
C. Accretion of hydrogen from a (semidetached) main-sequence or evolved companion. The burning of H may result either in the accumulation of $`M_{Ch}`$ and central C ignition, or in the accumulation of a critical layer of He for an ELD.
The positive aspects and drawbacks of these channels were discussed in detail elsewhere, also by other authors (e.g. Tutukov, Iben, & Yungelson 1992; Branch et al. 1995; Iben 1997; Ruiz-Lapuente et al. 1997; Yungelson & Livio 1998; Hachisu, Kato & Nomoto 1999; Livio 1999). Here we present, for “pedagogical” purposes, a simplified flow chart which illustrates some of the evolutionary scenarios that may result in SNe Ia (Fig. 1). Other channels may certainly contribute to the total SNe Ia rate, but they are either less productive or involve large uncertainties (see also §5).
In a typical scenario, one starts with a main-sequence binary in which the mass of the primary component is in the range (4–10) $`\mathrm{M}_{\odot }`$. The initial system has to be wide enough to allow the primary to become an Asymptotic Giant Branch (AGB) star with a degenerate CO core. After the AGB star overfills its Roche lobe, a common envelope forms. If the components do not merge inside the common envelope, the core of the former primary becomes a CO white dwarf. The subsequent evolution depends on the separation of the components and on the mass of the secondary. If the latter is higher than 4 $`\mathrm{M}_{\odot }`$ and the secondary fills its Roche lobe in the AGB stage, then, following a second common envelope phase, a pair of CO white dwarfs forms. The two white dwarfs may merge due to systemic angular momentum losses via gravitational wave radiation. As a result, a Chandrasekhar mass may be accumulated, leading potentially to a SN Ia (scenario A).
If the mass of the secondary is above 2.5 $`\mathrm{M}_{\odot }`$ and it fills its Roche lobe before core He ignition, it becomes a compact He star. If inside the common envelope the components get sufficiently close, the He star may fill its Roche lobe in the core He burning stage and transfer matter at a rate of $`10^{-8}`$ $`\mathrm{M}_{\odot }\,\mathrm{yr}^{-1}`$. The accumulation of He on top of the white dwarf may result in an ELD (scenario B).
Finally, if the mass of the companion to the white dwarf is below (2–3) $`\mathrm{M}_{\odot }`$, the companion may fill its Roche lobe on the main sequence or in the subgiant phase. Such a star could stably transfer matter at a rate which allows for the accumulation of $`M_{Ch}`$, or of a critical-mass He layer (scenario C).
Below we refer to all the potentially explosive situations listed above as “SNe Ia,” in spite of the fact that it is not entirely clear whether most of these configurations actually result in a SN (see e.g. Livio 1999). We should note that while the above quoted masses are only approximate, the uncertainties are not such that they can change the expected rates significantly.
A special remark has to be made concerning the exclusion of symbiotic stars. Yungelson et al. (1995) have shown that the accumulation of $`M_{Ch}`$ in these systems occurs at a low rate: $`10^{-5}`$ yr<sup>-1</sup> (see however the discussion in §5). The accumulation of 0.15 $`\mathrm{M}_{\odot }`$ of He via H burning occurs at a rate of $`10^{-4}`$ yr<sup>-1</sup>, but the accretion rate is typically high, and hence one would normally not expect an ELD to ensue; rather, weak helium flashes may occur. A cautionary note has also to be made concerning ELDs, which under certain sets of parameters have an occurrence rate of $`10^{-3}`$ yr<sup>-1</sup> in semidetached systems (scenario C, Yungelson & Livio 1998). We assumed that ELDs occur even if the accretion rate of hydrogen was initially high but then dropped to below $`3\times 10^{-8}\ \mathrm{M}_{\odot }\,\mathrm{yr}^{-1}`$. By this, we neglected the possible influence of hydrogen flashes on the helium layer. The response of the helium layer and the underlying white dwarf to the varying accretion rate (from several times $`10^{-7}`$ to $`3\times 10^{-8}`$ $`\mathrm{M}_{\odot }\,\mathrm{yr}^{-1}`$) has, to the best of our knowledge, never been treated in detail. One may expect a competition between cooling (due to the expansion of the hydrogen layer) and inward heat propagation (due to nuclear burning).
Cassisi, Iben, & Tornambè (1998), for example, claim that heating by hydrogen flashes keeps the temperature of the He layer high and may even prevent the explosive ignition of He. Rather, they conclude, quiescent burning may be expected (for accretion rates of $`10^{-8}`$–$`10^{-6}`$ $`\mathrm{M}_{\odot }`$ yr<sup>-1</sup>), during which the white dwarf expands to giant dimensions and its envelope may be removed by interaction with the companion. If an explosion nevertheless happens, it may produce a powerful nova-type event (a “super nova”). As a result of all of these uncertainties (and others), the issue of ELDs via the channel of hydrogen accretion is not definitively settled (see Livio 1999).
One of the cornerstones of channel C is the assumption of negligible mass loss in the form of a wind during helium flashes (e.g. Kato, Saio, & Hachisu 1989), which allows for the accumulation of $`M_{Ch}`$ despite the flashes. The expansion of the helium layers found by Cassisi et al. and the accompanying mass loss may (in some cases at least) prevent the accumulation of $`M_{Ch}`$.
Thus, the realization frequency of both scenarios for SNe Ia (explosion at $`M_{Ch}`$ or at a sub $`M_{Ch}`$ mass) via channel C is a matter of considerable uncertainty. Nevertheless, we include channel C in our consideration (although see discussion in §5).
The basic difference between the possible progenitor scenarios of SNe Ia is in the “evolutionary clock”—the time interval between the formation of the binary system and the SN explosion. Figure 2 shows the dependence of the supernova rate on time after an instantaneous star formation burst, for the four mechanisms listed above, as computed in the present study. The curves shown were computed for a common envelope efficiency parameter $`\alpha _{ce}=1`$; the dependence on this parameter, within reasonable limits of $`\alpha _{ce}`$ between 0.5 and 2, is not too strong. For semidetached systems, we considered the case of mass exchange stabilized by the presence of a thick stellar wind (Hachisu, Kato, & Nomoto 1996; henceforth HKN), as modified by Yungelson and Livio (1998). Further suggested modifications to the standard evolution will be discussed in §5. The differences in the timespan between the formation of a binary and the SN Ia event, and in the rate of decay of the SNe rates in the different channels, manifest themselves in the redshift dependence of the SNe Ia rates.
Our calculations are based on the assumption that the IMF, and the mechanisms of SNe Ia are the same throughout the Hubble time. This assumption may not be valid, for example because of metallicity effects. Stars with lower $`Z`$ develop larger helium and carbon-oxygen cores for the same main-sequence mass (e.g. Umeda et al. 1998), and hence, form more massive white dwarfs. At the same time, the upper mass limit of stars which form CO white dwarfs decreases towards a lower metallicity. However, assuming a power-law IMF both of these effects result in an increased number of potential pre-SNe Ia white dwarfs. On the other hand, a low metallicity can inhibit strong optically thick stellar winds which are essential for the HKN model of SNe Ia (Kobayashi et al. 1998). Assuming for the moment that several channels may contribute (see however Livio 1999), the net effect may be an enhanced rate of SNe Ia from the channels of double-degenerates and ELDs from systems with nondegenerate He donors, and a reduction in the rate from the channel of hydrogen-donor systems.
## 3 SNe and the Star Formation rate
### 3.1 Supernovae Rates
The rest-frame frequency of SNe of a certain type at any time $`t`$, $`n(t)`$, may be derived by convolving the star formation rate $`\mathrm{\Psi }(\tau )`$ with the function $`f(t)`$ giving the rate of SNe after an instantaneous burst of star formation:
$$n(t)=\int _0^tf(t-\tau )\mathrm{\Psi }(\tau )\,d\tau .$$
(1)
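As an illustration, eq. (1) can be evaluated numerically on a uniform time grid. This is only a sketch: the delay function `f` and the SFR `psi` below are toy stand-ins (a constant SFR and a roughly power-law delay distribution switching on after 1 Gyr), not the population-synthesis results of §2.

```python
import numpy as np

def sn_rate(f, psi, t_grid):
    """n(t) = integral_0^t f(t - tau) * psi(tau) d tau on a uniform grid (eq. 1)."""
    dt = t_grid[1] - t_grid[0]
    n = np.empty_like(t_grid)
    for i, t in enumerate(t_grid):
        tau = t_grid[: i + 1]
        n[i] = np.sum(f(t - tau) * psi(tau)) * dt
    return n

# Toy inputs (assumptions for illustration only):
t = np.linspace(0.0, 13.0, 1301)                   # time since the burst, Gyr
psi = lambda tau: np.full_like(tau, 0.1)           # constant SFR
f = lambda d: np.where(d > 1.0, 1.0 / np.maximum(d, 1.0), 0.0)  # ~1/t after a 1 Gyr delay
rates = sn_rate(f, psi, t)                         # SN rate as a function of time
```

Because the delay distribution vanishes during the first Gyr, the rate is zero early on and then grows as progressively older populations begin to contribute.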
Two approaches for the evaluation of $`f(t)`$ are encountered in the literature. The first is not to consider any specific mechanisms of SNe (which are still a matter of some debate), but rather to parameterize $`f`$ by the fraction of exploding stars in the binary star population (the “explosion efficiency”) and the delay between formation and explosion, or the “evolutionary clock” (e.g. Madau et al. 1998; Dahlén & Fransson 1998; Sadat et al. 1998).
For core-collapse supernovae (SN II and SN Ib/c) it is natural to assume that the shape of $`f`$ follows the SFR and that the delay between the formation of the star and the SN event is negligible, since the lifetime of stars more massive than 10 $`\mathrm{M}_{\odot }`$ is $`\lesssim 20`$ Myr.
For SNe Ia, Madau et al. considered a parameterized $`f(t)`$ with timescales of 0.3, 1, and 3 Gyr between the formation of the WD and the explosion. These authors reproduce the ratio $`\mathrm{SN}_{\mathrm{II}}/\mathrm{SN}_{\mathrm{Ia}}\simeq 3.5`$ in the local Universe if the explosion efficiency is 5% to 10%.
A similar parameterization was adopted by Dahlén & Fransson (1998), who estimated the numbers of core-collapse and Type Ia SNe which may be detected by NGST in different filters, for different limiting stellar magnitudes.
Sadat et al. (1998) considered a power law $`f\propto t^{-s}`$, and explored a range of $`s`$ from 1.4 to 1.8. Another parameter of Sadat et al. is the rise time of the SNe Ia rate from 0 to a maximum, which was fixed at 0.75 Gyr. The ranges of $`s`$ and of the rise time were derived from models of the chemical evolution of Fe in elliptical galaxies and in clusters of galaxies. Concerning the explosion efficiency, Sadat et al. do not actually use this parameter, since they additionally normalize their rates in order to reproduce the local rate of SNe Ia with the adopted SFR.
A different approach to the determination of $`f`$ relies on population synthesis calculations. Using this method, Jørgensen et al. (1997) derived the rates of core-collapse supernovae (SNe II and SNe Ib), of mergers of binary WDs with a total mass exceeding $`M_{Ch}`$, and of collapses of Chandrasekhar mass white dwarfs in semidetached systems (in the standard model, without the thick wind of HKN).
Ruiz-Lapuente & Canal (1998; see also Ruiz-Lapuente, Canal, & Burkert 1995; Canal, Ruiz-Lapuente, & Burkert 1996) considered as SNe Ia progenitors merging double degenerates and cataclysmic binaries. For the latter channel, $`n(t)`$ was estimated in two cases. First, the “standard” case, which allows only thermally stable mass exchange for $`M_{\mathrm{donor}}/M_{\mathrm{accretor}}\lesssim 0.78`$. Second, the case of the “wind” solution of HKN, which allows for mass exchange at rates of up to $`10^{-4}\ \mathrm{M}_{\odot }\,\mathrm{yr}^{-1}`$ for systems with $`q\lesssim 1.15`$. Ruiz-Lapuente & Canal find a distinct difference between the behavior of the predicted SN Ia rates vs. limiting red stellar magnitude for the different families of progenitors. Namely, the $`dN/dm_R`$ vs. $`m_R`$ relation for descendants of cataclysmic variables is much steeper than that for merging double degenerates. However, the computations of Dahlén & Fransson (1998) do not show any significant difference in the behavior of the SNe Ia counts for different delays in the 0.3–3 Gyr range (the main difference between double degenerates and cataclysmic-variable-like systems is in the delay time).
### 3.2 The star formation rate
The star formation rate which is used as an ingredient in calculations of the evolution of the cosmic SNe rate is usually derived from studies which model the observed evolution of the galaxy luminosity density with cosmic time. For example, Madau, Pozzetti, & Dickinson (1998) and Madau, Della Valle, & Panagia (1998, hereafter MDVP98) have shown that the observational data can be fitted if one assumes, as an ingredient of the model, a time-dependent star formation rate. However, there exist uncertainties in this model, due to the uncertain amount of dust extinction at early epochs. For example, MDVP98 have shown that the same observational data may be fitted if one assumes a constant $`E_{\mathrm{B}-\mathrm{V}}=0.1`$ or a $`z`$-dependent dust extinction which rises rapidly with redshift, $`E_{\mathrm{B}-\mathrm{V}}=0.011(1+z)^{2.2}`$. The latter authors provide convenient fitting formulae for the star formation rates for these two cases.
Model 1 (“little dust extinction”) has
$$\mathrm{\Psi }(t)=0.049t_9^5e^{-t_9/0.64}+0.2(1-e^{-t_9/0.64})\ \mathrm{M}_{\odot }\,\mathrm{yr}^{-1}\,\mathrm{Mpc}^{-3},$$
(2)
where $`t_9`$ is the time in Gyr, $`t_9=13(1+z)^{-3/2}`$.
Model 2 (“$`z`$-dependent dust opacity”) has
$$\mathrm{\Psi }(t)=0.336e^{-t_9/1.6}+0.0074(1-e^{-t_9/0.64})+0.0197t_9^5e^{-t_9/0.64}\ \mathrm{M}_{\odot }\,\mathrm{yr}^{-1}\,\mathrm{Mpc}^{-3}.$$
(3)
Note that eqs. (2) and (3) give slightly different current SFRs, and the integrated values also differ by about 10%. Both SFR models predict a similar, rather steep rise, by a factor $`10`$, at $`z\lesssim 1.5`$. The difference between the two rates is in the behavior at $`z\gtrsim 1.5`$. While in the “little dust extinction” case the rate drops almost linearly by a factor of about 10 towards $`z_{\ast }=5`$, in the “$`z`$-dependent dust opacity” case it continuously grows to $`z_{\ast }`$, by a factor of $`2.5`$. Formally, the star formation process switches on discontinuously at $`z_{\ast }`$.
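For concreteness, the two fitting formulae can be put into code. This is a sketch, not an independently verified implementation of the MDVP98 fits: the decaying signs of the exponentials and the $`t_9(z)`$ relation for the $`\mathrm{\Omega }_0=1`$, $`H_0=50`$ cosmology of §4 are assumed, while the coefficients are taken verbatim from eqs. (2) and (3).

```python
import numpy as np

def t9_of_z(z):
    """Cosmic age in Gyr for Omega_0 = 1, H_0 = 50 km/s/Mpc (t_9 = 13 at z = 0)."""
    return 13.0 * (1.0 + z) ** (-1.5)

def sfr_model1(z):
    """Eq. (2): 'little dust extinction' model, in M_sun / yr / Mpc^3."""
    t9 = t9_of_z(z)
    return 0.049 * t9**5 * np.exp(-t9 / 0.64) + 0.2 * (1.0 - np.exp(-t9 / 0.64))

def sfr_model2(z):
    """Eq. (3): 'z-dependent dust opacity' model, in M_sun / yr / Mpc^3."""
    t9 = t9_of_z(z)
    return (0.336 * np.exp(-t9 / 1.6)
            + 0.0074 * (1.0 - np.exp(-t9 / 0.64))
            + 0.0197 * t9**5 * np.exp(-t9 / 0.64))
```

With these forms, Model 2 keeps growing towards high redshift while Model 1 turns over near $`z\approx 1.5`$, matching the qualitative behavior described above.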
We should note that Model 2 may be a more realistic representation of the global star formation history, since there is growing evidence of a significant effect of dust absorption at high $`z`$ (e.g. Pettini et al. 1998; Calzetti & Heckman 1998; Hughes et al. 1998; Steidel et al. 1998; Blain et al. 1998). Also, selection effects due to the low surface brightness of galaxies (e.g. Ferguson 1998) or the shift of typical spectral features to the red (e.g. Hu, Cowie, & McMahon 1998) may result in an underestimate of the SFR at high redshifts.
Equation (2) gives a star formation history which is consistent with expectations from hierarchical clustering cosmologies, while eq. (3) gives a model prediction for the SFR typical of a monolithic collapse scenario (e.g. Madau 1998b).
Ruiz-Lapuente & Canal (1998) used in their computations the star formation rate given by Madau (1997), without corrections for dust extinction. The effect of extinction was considered by Dahlén & Fransson (1998) and by Sadat et al. (1998). In the latter case, the SFR at $`z\stackrel{>}{}1`$ was taken to be several times higher than in the “low-dust” case. Jørgensen et al. (1997) considered two modes of star formation: a “burst” lasting for 500 Myr, and a continuous SFR for a Hubble time, and computed models for a range of relative contributions of both star formation modes.
## 4 Model computations
### 4.1 Models using “observed” star formation rates
The population synthesis code used for the computations of the SNe rates was previously applied by the authors to a number of problems related to the population of galactic binary stars and, in particular, to SNe. Within the range of observational uncertainties, the code reproduces correctly the rates of SNe inferred for our galaxy (Tutukov, Yungelson, & Iben 1992; Yungelson & Livio 1998 and references therein).
Throughout this paper we assume a cosmology with $`\mathrm{\Omega }_0=1`$, $`H_0=50\ \mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$. These values of the cosmological parameters are assumed only for convenience; our qualitative results and conclusions do not depend on this choice. Star formation is assumed to start at $`z_{\ast }=5`$. The Hubble time in this model was taken to be 13 Gyr.
For the different SNe Ia scenarios listed in §2 and for different star formation histories, we first calculated the rest-frame rates of events $`n_0`$. We then computed differential functions for the number of events observed at redshift $`z`$ and cumulative functions $`n(<z)`$. We use eq. \[3.3.25\] from Zel’dovich & Novikov (1983) for the number of events observed from a layer between redshifts $`z`$ and $`z+dz`$ in an expanding, curved Universe, taking into account time dilation:
$$\frac{dn}{dz}=n_0\frac{4\pi c^3}{H_0^3}\frac{1}{1+z}\xi _z(z,\mathrm{\Omega }_0)z^2.$$
(4)
where, for the particular case of $`\mathrm{\Omega }_0=1`$,
$$\xi _z(z,1)=\frac{4[(1+z)^{1/2}-1]^2}{z^2(1+z)^{5/2}}.$$
(5)
Notice that $`\partial \xi _z/\partial z<0`$. Time $`t`$ is related to $`z`$ as
$$t=\frac{2H_0^{-1}}{3(1+z)^{3/2}}.$$
(6)
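As a quick check on the geometry, the weight entering eq. (4) — the comoving-volume factor $`z^2\xi _z`$ together with the $`1/(1+z)`$ time-dilation term — can be evaluated numerically. Its turnover, which pushes the maxima of the observed counts to lower $`z`$ than the maxima of the rest-frame rates, occurs near $`z\approx 0.96`$:

```python
import numpy as np

def xi_z(z):
    """Eq. (5): xi_z(z, Omega_0 = 1)."""
    return 4.0 * (np.sqrt(1.0 + z) - 1.0) ** 2 / (z**2 * (1.0 + z) ** 2.5)

z = np.linspace(0.01, 5.0, 50000)
# Geometric factor of eq. (4), omitting the constant 4*pi*c^3*n_0/H_0^3 prefactor:
weight = z**2 * xi_z(z) / (1.0 + z)

z_turnover = z[np.argmax(weight)]
# Analytically the derivative vanishes at (1+z)^(1/2) = 7/5, i.e. z = 24/25 = 0.96.
```

This confirms that the steep rise of the counts at low $`z`$ and their suppression at high $`z`$ are purely geometric effects, independent of the SFR or the progenitor model.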
We operate with the number of events per yr instead of expressing the SNe rates in the more conventional Supernovae Units (SNU), since both the computation of blue luminosities and their observational determination involve additional parameters (expressing the rates in SNU may result in loss of information on both the SNe rates and on the SFR).
Our simulations give the rates of SNe as a function of $`z`$. Clearly, the number of observable events depends on other factors such as the limiting stellar magnitude of the sample, etc. Nevertheless, our results provide the basis for theoretical expectations, which need subsequently to be convolved with observational selection effects. In principle, NGST observations can approach the theoretical limits. Figure 3 compares the values of $`dn/dz`$ for the different channels of SNe and the different assumptions about the SFR given by eqs. (2) and (3). Figure 4 shows the behavior with redshift of the cumulative numbers of SNe.
The behavior of $`dn/dz`$ can be understood as follows. In Model 1 (low dust), as one progresses from $`z=0`$ to $`z_{\ast }`$, the SFR reaches a maximum at $`z\approx 1.5`$. The maxima of the rest-frame SNe rates occur at slightly lower $`z`$, in order of decreasing delay time: ELDs in systems with subgiant companions, $`M_{Ch}`$ accumulations in the latter, mergers of double degenerates, ELDs in systems with nondegenerate He donors, core-collapse SNe (Fig. 2). The behavior of the $`dn/dz`$ counts also depends on the geometrical $`z`$-dependent factors given by eqs. (4) and (5). In particular, the derivative of the product $`z^2\xi _z/(1+z)`$ changes sign from positive to negative at $`z\approx 0.96`$. This factor shifts the maximum in the counts to a lower $`z`$. The steep rise of $`dn/dz`$ at low $`z`$ is entirely due to the expanding horizon.
Similarly, in Model 2 ($`z`$-dependent dust opacity), the behavior of $`dn/dz`$ at low $`z`$ is dominated by the expansion of the comoving volume, and the rates suggested by the two models are almost indistinguishable. However, already at $`z\approx 0.5`$, the increase in the rates in Model 2 becomes somewhat less steep, reflecting the more moderate growth of the SFR. The rate of core-collapse SNe starts to decrease at $`z\approx 1.2`$ despite the continuous growth of the SFR. This is a consequence of the negative $`\partial \xi _z/\partial z`$. The rates of SNe Ia start to decline at a higher $`z`$, as a consequence of the longer delay times. The difference in the time delays between SNe II and the different hypothetical SNe Ia manifests itself in an increase in the SN Ia/SN II ratio at low $`z`$ and its subsequent decline (Fig. 7). This feature was already noticed by Yungelson & Livio (1998) for SNe Ia from double degenerates, but in the present study we find that (i) this effect is less pronounced due to the different approximation to the SFR, and (ii) the redshift of the maximum of the ratio is different for the different SNe Ia scenarios.
The rate of decline of $`dn/dz`$ at $`z\gtrsim 1`$ is clearly different in Models 1 and 2 and may provide important information about the star formation behavior.
The most pronounced feature of $`dn/dz`$ for both types of dust models is the disappearance of SNe Ia at $`z\approx 3`$ for the channels of progenitors with relatively long delays. Thus, in principle, a determination of SNe Ia rates at $`z\gtrsim 3`$ with NGST can unambiguously distinguish between different progenitor models. Long delay times are typical for both modes (Chandrasekhar or sub-Chandrasekhar explosions) of SNe Ia resulting from systems with subgiant donors.
The relative role of the different channels for SNe Ia changes with $`z`$. In both Models 1 and 2 (for the dust), mergers of double degenerates dominate over ELDs in systems with nondegenerate He donors up to $`z\lesssim 0.4`$. In Model 1, ELDs in systems with subgiants dominate over He-ELD at $`z\lesssim 0.8`$ and over DD-Ch at $`z\lesssim 1.3`$. In Model 2 these limits are at about $`z\approx 1`$ and $`z\approx 2.2`$. If all three channels really contribute to SNe Ia but have somewhat different characteristics, one would expect to find variations in the statistical properties of SNe Ia with redshift. We will return to this point in the discussion.
As expected, the cumulative numbers of SNe grow faster in the “low dust” case than in the model with dust. Only the cumulative counts of SNe II and of SNe Ia from the DD-Ch and He-ELD channels grow continuously to high redshifts, while those for SNe involving subgiants saturate at $`z\approx 3`$.
To summarize this section: observations of SNe beyond $`z\approx 1`$ can provide valuable information on the star formation rate (see also §5). The counts of SNe Ia at $`z\gtrsim 3`$ will indicate the timescale of the delay between the births of binaries and the SN events, and will thus provide information on the nature of the progenitors.
### 4.2 Parameterized star formation rates
The main uncertainty in the global SFR is due to the effects of dust obscuration in star-forming galaxies (see e.g. Calzetti & Heckman 1998; Pettini et al. 1998; for a discussion of the fraction of light absorbed by dust). Therefore, it is worthwhile to investigate the cosmic history of SNe for several parameterized SFR.
We consider four parameterized modes of galactic star formation (intended to bracket and cover a range of possibilities):
Model 3—constant star formation rate from $`z_{\ast }=5`$ to $`z=0`$;
Model 4—a star formation burst which begins at $`z_{\ast }`$ and has a constant SFR for 1 Gyr;
Model 5—a star formation burst which begins at $`z_{\ast }`$ and has a constant SFR for 4 Gyr;
Model 6—an initial star formation burst which lasts for 4 Gyr with a constant SFR and converts 50% of the total mass into stars, followed by another stage of a lower constant SFR which produces the remaining 50% of the stars (“step-wise SFR”).
For all the cases we normalize the SFR in such a way that the total amount of matter converted into stars is equal to the integral over time of Eq. (2). The overall normalization is of no real significance however, since we are interested in the qualitative behavior of SNe counts.
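The normalization can be sketched as follows. As an assumption for illustration, the reference integral is taken over eq. (2) (Model 1) with decaying exponentials, and each burst is treated as exactly constant over its duration:

```python
import numpy as np

def sfr_model1_of_t9(t9):
    """Eq. (2) as a function of age t_9 in Gyr (M_sun / yr / Mpc^3)."""
    return 0.049 * t9**5 * np.exp(-t9 / 0.64) + 0.2 * (1.0 - np.exp(-t9 / 0.64))

T_HUBBLE = 13.0                              # Gyr, for Omega_0 = 1, H_0 = 50
t = np.linspace(0.0, T_HUBBLE, 13001)
dt = t[1] - t[0]
total = np.sum(sfr_model1_of_t9(t)) * dt     # total mass turned into stars (per Mpc^3)

# Constant SFR levels that convert the same total mass into stars:
level_model3 = total / T_HUBBLE              # constant SFR from z_* = 5 to z = 0
level_model4 = total / 1.0                   # 1 Gyr burst
level_model5 = total / 4.0                   # 4 Gyr burst
```

Model 6 (“step-wise”) splits the same total into two halves: a 4 Gyr burst at level `0.5 * total / 4` followed by a constant stage at level `0.5 * total / (T_HUBBLE - 4)`.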
The computations provide us with information on the behavior of SNe rates with redshift for different star formation histories. The results provide insights into the SNe histories in galaxies of different morphological types, which show a wide variety of star formation patterns both along the Hubble sequence and within particular classes (e.g. Sandage 1986; Hodge 1989; Kennicutt, Tamblyn, & Congdon 1994; Kennicutt 1998). Even among the Local Group dwarf galaxies one encounters very different star formation histories, including early bursts, almost constant SFRs, and step-wise ones (e.g. Mateo 1998).
Figures 5 and 6 present the number counts of SNe Ia per unit $`\mathrm{\Delta }z`$ and the cumulative rates of events $`n(<z)`$ for the above models. The results can be summarized as follows.
1. Models 4 and 5, with initial starbursts of different durations $`\mathrm{\Delta }\tau `$, clearly predict an abrupt decline in the SNe II rate when moving from $`z_{\ast }`$ to lower redshifts, reflecting the cessation of star formation. The redshift of this sharp decline in the rate indicates (for given cosmological parameters) the value of $`\mathrm{\Delta }\tau `$; it also depends, of course, on $`z_{\ast }`$. The behavior of SNe Ia from ELDs in systems with He donors (He-ELD) and of Chandrasekhar mass SNe in systems with subgiant donors (SG-Ch) shows a similar decline, but shifted to a lower $`z`$ and less abrupt.
For stellar populations with strong initial star formation bursts this means that, if He-ELD and SG-Ch SNe Ia were the only mechanisms for SNe Ia, then as one advances to higher redshifts, first the rate of SNe Ia and then the rate of SNe II would rapidly rise. In the case of SG-Ch SNe Ia the rate would rapidly decline at $`z\approx 3.3`$. Such a behavior of the rates of SNe Ia in E-S0 galaxies would provide evidence supporting the SG-Ch mechanism for SNe Ia.
2. Strong initial peaks in the SFR followed by a variation of the SFR on a short timescale manifest themselves in changes in $`d^2n/dz^2`$ for SNe Ia. These changes are delayed (to lower redshifts) compared to the decrease in the SNe II rates.
3. SNe Ia from the mergers of double degenerates (DD-Ch) are the only type of event which may show up close to $`z_{\ast }`$ and which continues to $`z=0`$ irrespective of the star formation mode.
4. Although SNe Ia from ELDs in subgiant systems (SG-ELD) start to explode only at $`z\approx 3`$, in the constant-SFR and step-wise SFR models the distribution of their number counts vs. $`z`$ at $`z\lesssim 2`$ becomes very similar to that from double degenerates (DD-Ch), both in morphology and in amplitude. These SNe, however, follow the variations in the SFR slightly more slowly. These two types of SNe Ia are the only events which may be present at low $`z`$ even if the star formation process ceased long ago (see however §5).
5. SNe Ia from collapses of Chandrasekhar mass white dwarfs in subgiant systems (SG-Ch) may be observed only if the star formation still continues or ceased less than $`2`$ Gyr ago (see however §5).
The same is true for SNe from ELDs in systems with nondegenerate donors (He-ELD). A fast decline of the SNe Ia rate shortly after (at lower $`z`$) the decline of the SNe II rate would indicate that either He-ELD or SG-Ch occur.
6. In the case of a SFR which was almost constant during the past several Gyr there is no decline in the SNe Ia/SNe II ratio between $`z=0`$ and 1.
7. The difference between the behavior of $`dn/dz`$ for the constant-rate and the step-wise SFR models is not significant. This means that only a very significant increase in the SFR towards high $`z`$ (as in Model 2) may be reflected in the behavior of the differential SNe counts. On the other hand, a rapid decline in the SFR beyond a certain redshift (as in Model 1) can be detected easily.
8. The counts of $`dn/dz`$ in the redshift range $`z\lesssim 0.2`$–0.4 can hardly provide any information about the SFR, since they are dominated by the increase of the comoving volume.
## 5 Discussion and Conclusions
Since observations of SNe Ia are now being used as one of the main methods for the determination of cosmological parameters, the importance of identifying the progenitors of SNe Ia cannot be overemphasized. We have shown that different progenitor models result in different SNe Ia rates (or different ratios of frequencies of SNe Ia to those resulting from massive stars) as a function of redshift. One key difference, for example, is the fact that in all the models that involve relatively long delays between the formation of the system and the SN event (e.g. models with subgiant donors), the ratio $`R(\mathrm{SNe}\ \mathrm{Ia})/R(\mathrm{SNe}\ \mathrm{II},\mathrm{Ib},\mathrm{Ic})`$ decreases essentially to zero at $`z\gtrsim 3`$ (Fig. 7). Thus, future observations with NGST will in principle be able to determine the viability of such progenitor models on the basis of the frequencies of SNe Ia at high redshifts.
Probably the most important question that needs to be answered is the following: assuming that two (or more) different classes of progenitors may produce SNe Ia, is it possible that the rate of SNe Ia is entirely dominated by one class at low redshifts ($`z<0.5`$) and by another at higher redshifts ($`0.5\lesssim z\lesssim 1.2`$)? Clearly, if this were the case, then the suggestion of a cosmological constant would have to be re-examined (SNe Ia at the higher $`z`$ only need to be systematically dimmer by $`0.25`$ mag to mimic the existence of a cosmological constant). An examination of the qualitative behavior of the rates in Fig. 7 reveals that, in principle, the rate at low redshifts could be dominated by ELDs and the rate at higher redshifts by coalescing double degenerates. However, if ELDs produce SNe Ia at all, these are probably of the underluminous variety (like SN 1991bg; e.g. Nugent et al. 1997; Livio 1999; Ruiz-Lapuente et al. 1997). Therefore, a division of this type would produce exactly the opposite effect to the one required to explain away the need for a cosmological constant (the high-redshift ones would be brighter). A second possibility is that the rate of SNe Ia resulting from the accumulation of $`M_{Ch}`$ in systems with giant or subgiant components (SG-Ch) has been underestimated. This is in fact a very likely possibility. A number of potential ways have been suggested to increase the frequency of SNe Ia of this class (e.g. Hachisu, Kato & Nomoto 1999). These include: (i) mass stripping from the (sub-)giant companion by the strong wind from the white dwarf (this has the effect of increasing the range of mass ratios which result in stable mass transfer); (ii) efficient angular momentum removal by the stellar wind in wide systems, where the wind velocity and the orbital velocity are comparable (this increases the range of binary separations which result in interaction).
While large uncertainties plague both of these suggestions (see Livio 1999 for a discussion), it is definitely possible that some physical processes which have not yet been properly included in the population synthesis calculations will result in a significant increase in the rates from the channel with giant or subgiant companions. This means that, in principle, the curve describing the SG-Ch channel (subgiant donor) in Fig. 7 (with the $`z`$-dependent dust opacity) may have to be shifted upwards (essentially parallel to itself, because of the delays involved). The curve could be shifted just enough for double degenerates to dominate at redshifts $`z\lesssim 0.5`$, while SG-Ch dominate at $`z\gtrsim 0.5`$. The question now is: could such a dominance shift be responsible for the apparent need for a cosmological constant? The answer is that this is definitely possible in principle. In particular, it has recently been suggested that the fiducial risetime of nearby SNe Ia is $`2.5`$ days longer than that of high-redshift SNe Ia (Riess et al. 1999a,b). It is far from clear, though, whether such a change in the risetime (if real) could be attributed to different progenitor classes or to other evolutionary effects. One possibility could be that because SNe Ia resulting from double degenerates (if they indeed occur; Livio 1999) may have different surface compositions from those resulting from subgiant donors, this could affect the risetime. We would like to note, however, that we find the possibility of one progenitor class dominating at low redshifts and another at high redshifts rather unlikely (see also Livio 1999). The reason is very simple. As Fig. 7 shows, even if the SG curve were shifted upwards, the result would be that the local (low $`z`$) sample would have to contain a significant fraction of the SNe resulting from the SG channel.
Therefore, unless SNe Ia from the SG channel conspire to look identical to those from double-degenerates at low $`z`$, but different at high $`z`$, this would result in a much less homogeneous local sample than the observed one (which has 80–90% of all SNe Ia being nearly identical “branch normals”; e.g. Branch 1998 and references therein). Consequently, it appears that the observational indication of the existence of a cosmological constant cannot be the result of us being “fooled” by different progenitor classes (this does not exclude the possibility of other evolutionary effects).
Finally, our models indicate that a careful determination of the rates of SNe Ia as a function of redshift can place significant constraints on the cosmic star formation history, and on the significance of dust obscuration.
This work was supported by the Russian Foundation for Basic Research grant No. 96-02-16351. LRY acknowledges the hospitality of the Space Telescope Science Institute. ML acknowledges support from NASA Grant NAG5-6857. We acknowledge helpful discussions with N. Chugaj, M. Sazhin, and A. Tutukov, and useful comments by D. Branch.
## 1 Introduction
Whether or not nonradial modes play a role in RR Lyrae pulsation has been a matter of speculation for some time. Recently, Olech et al. (1999) presented the first circumstantial evidence for the presence of nonradial modes in three RRc variables in M55. The evidence was based on the power spectra, which revealed the presence of additional modes whose frequencies could not be attributed to radial modes. Similar power spectra were subsequently found in several other RRc and RRab stars (Olech et al. 1999b, Kovacs et al. 1999, Moskalik 1999).
Earlier, Kovacs (1993) and Van Hoolst and Waelkens (1995) proposed the 1:1 resonant excitation of nonradial modes in a radially pulsating star as an explanation of the Blazhko-type modulation. The manifestation of this effect in periodograms is the occurrence of equally spaced side peaks around the main frequency. Calculations by Van Hoolst et al. (1998) confirmed the plausibility of this idea. These authors studied the stability of radial pulsation with the use of the third-order amplitude equation formalism. They found that there is a high probability of a resonant excitation of a low $`\mathrm{}`$-degree mode. However, their calculations were done for only one stellar model. In this paper we apply the same formalism, with one additional simplification, to investigate the stability of radial pulsation in a large set of RR Lyrae star models. An outline of the formalism and the results are given in section 4.2.
Still earlier, Dziembowski (1977) demonstrated the instability of static models of RR Lyrae stars to certain nonradial modes. The driving effect is the same as for radial pulsation. A linear instability, however, is not a sufficient condition for excitation. Nonlinear calculations are required to determine the ultimate outcome of the linear instability. Because of the enormous numerical complexity, such calculations have never been done for nonradial modes. The instability could be saturated with the excitation of a single mode, which seems to be the most common situation among RR Lyrae stars. However, among $`\delta `$ Sct stars, for instance, excitation of many modes is typical. We do not understand why this is so.
In section 4.1 of this paper we present a survey of the unstable nonradial modes in RR Lyrae stars and we discuss potential identifications of the modes detected by Olech et al. (1999).
## 2 Evolutionary models
All the stellar models adopted in the present investigation have been computed adopting the latest version of the FRANEC evolutionary code, which includes several upgrades of the input physics. Major improvements are the opacity tables for the stellar interiors as given by Rogers & Iglesias (1992) and low-temperature molecular opacities for outer stellar layers by Alexander & Ferguson (1994). Both high- and low-temperature opacity tables have been computed by adopting the Grevesse (1991) solar chemical mixture. The equation of state is the OPAL one (Rogers et al. 1996), implemented in the temperature-density region not covered by OPAL with the equation of state of Straniero (1988), plus a Saha EOS in the outer stellar layers (see Cassisi et al. 1998, 1999, for more details). As for the calibration of the superadiabatic envelope convection, the mixing length calibration provided by Salaris & Cassisi (1996) has been adopted.
For the present work, we have computed Horizontal-Branch models for two different assumptions on the heavy element abundance, namely Z=0.0002 and Z=0.001. In both cases, an initial helium abundance equal to Y=0.23 has been adopted. All the HB models have as Red Giant Branch progenitor a structure with mass equal to $`0.8M_{}`$. This means that when computing the Zero Age Horizontal Branch (ZAHB) models we have accounted for the evolutionary values of the He core mass and the surface He abundance ($`Y_{HB}`$) at He ignition corresponding to a $`0.8M_{}`$ progenitor, as provided by our own evolutionary computations for the previous H-burning phases.
In Fig. 1, we show the selected evolutionary tracks in the H-R diagram. The symbols along each track indicate the models adopted for the following pulsational analysis.
## 3 Linear nonadiabatic calculations
Oscillation properties of the selected models were studied with the method developed by one of us (Dziembowski, 1977). Its recent updated description may be found in Van Hoolst et al. (1998). For nonradial modes the equations of nonadiabatic oscillations are solved numerically in the envelope and matched to the asymptotic solution for g-modes, which is valid in the deep interior of RR Lyrae stars. The reason is that beneath the matching point the Brunt-Väisälä frequency is much larger than the oscillation frequencies. The Cowling approximation is assumed, which is well justified for the modes considered. The weakest point of the adopted method is the treatment of convective transport, whose Lagrangian perturbation is simply ignored. This is certainly a poor approximation, but it is not essential for the main aim of this work because the effects of convection on radial and nonradial modes are nearly the same.
As an introductory example we plot in Fig. 2 the growth rates, $`\gamma =\mathrm{Im}(\omega )`$, and frequencies, $`f=\mathrm{Re}(\omega )/2\pi `$, for modes of the selected degrees $`\mathrm{}`$ for one of the models we chose for the pulsation analysis. The temporal dependence of oscillations is assumed in the form $`\mathrm{exp}(-\mathrm{i}\omega t)`$. Effects of rotation have been ignored. Thus, each point represents $`2\mathrm{}+1`$ normal modes.
Let us note two types of unstable modes. There are isolated rapidly unstable modes with growth rates $`\gamma >0.01`$ d<sup>-1</sup> and sequences of modes with much lower $`\gamma `$’s. In the former group we find radial modes and modes with $`\mathrm{}`$= 6 and 10, which belong to the class of strongly trapped unstable (STU) modes defined by Van Hoolst et al. (1998). These modes have no counterpart in the adiabatic approximation. In the interior the eigenfunctions of such modes are – to a good approximation – described as inward propagating internal gravity waves with exponentially decreasing amplitudes.
The growth rate behavior in the sequences of low-degree modes reflects the trapping properties of the acoustic cavity. Even at $`\mathrm{}=1`$, for the best trapped modes more than 80 percent of the kinetic energy is contributed by the g-mode propagation zone. The trapping effect is weaker at $`\mathrm{}=2`$ and 3, but then it begins to increase. Note the sharp peak of $`\gamma `$ in the $`\mathrm{}=6`$ sequence near the first overtone frequency. The STU modes always occur between the two best trapped ordinary modes.
For the occurrence of an STU mode, sufficient trapping in the evanescent zone separating the p- and g-mode propagation zones is needed. In our selected model the STU fundamental modes appear at $`\mathrm{}=8`$. With increasing $`\mathrm{}`$ they tend to Kelvin (f, or surface) modes. The instability continues well above $`\mathrm{}=100`$. We hesitate to give an upper limit because of the increasing uncertainty due to our crude treatment of convection. Near the first overtone the STU modes begin at $`\mathrm{}=5`$ and end at $`\mathrm{}=15`$.
## 4 Survey of pulsational properties
The H-R positions of the stellar models selected for this survey were shown in Fig. 1. The models cover various stages of the central helium burning. This can be seen in Fig. 3, where the central helium content is plotted as a function of the effective temperature. In the same figure, the periods of the first two radial modes are plotted. Solid symbols denote linearly unstable modes. The second overtone is also unstable in some of our models. However, because there is no observational evidence for second overtone excitation in RR Lyrae stars, in the present survey we consider only modes in the vicinity of the first two radial modes.
### 4.1 Opacity-driven modes
The general property of all the models considered is that the trapping effect is weak in the $`\mathrm{}=2-4`$ range. In the vicinity of an unstable fundamental radial mode there are always unstable $`\mathrm{}=1`$ modes. In some models, like the one used in Fig. 2, there is a frequency range where the $`\mathrm{}=2`$ modes are unstable as well, but with much lower growth rates. Rapid instability occurs only for the STU modes, which begin in most of the models at $`\mathrm{}=8`$. The trapping pattern near the first overtone is similar to that near the fundamental mode. Again the most unstable are modes of degree $`\mathrm{}=1`$, and then the STU modes, which begin at $`\mathrm{}=5`$ or 6. The main difference is instability at all low degrees.
In Figs. 4 and 5 we show, respectively for the fundamental mode and first overtone ranges, the frequency distances to the corresponding radial modes and the relative growth rates for the most unstable $`\mathrm{}=1`$ modes and for the selected STU modes. The most unstable $`\mathrm{}=1`$ modes, as well as all the STU modes, always have higher frequencies than the corresponding radial modes. The two plotted parameters vary in rather narrow ranges, and their values are determined by the radial mode periods. The dependence on the abundance ($`Z`$) is most easily seen in the distances of the STU modes. The dependence on the evolutionary status ($`Y_c`$), for which one should consult Fig. 3, is not recognizable.
The growth rates of STU modes are similar to those of the radial modes. If the instability is saturated by one of these relatively high-degree modes, the star would appear as a nonpulsating object. At $`\mathrm{}=5-10`$ the cancellation of the opposite-sign contributions would reduce the disc-averaged light amplitude to at most a few millimagnitudes. There is no firm evidence for the occurrence of nonpulsating stars in the RR Lyrae domain of the H-R diagram. The linear theory does not give us a hint as to why radial modes are so much preferred over the STU modes by stars.
The secondary peaks in the three amplitude-modulated RRc stars discovered by Olech et al. (1999) cannot be explained in terms of STU mode excitations. In Table 1 we provide data on the distances between the primary and secondary peaks and the relative V-amplitudes.
The secondary peak amplitudes are still too large, and in two cases the frequencies are lower than those of the main peaks. Interpretation in terms of $`\mathrm{}=1`$ modes is more plausible, though not free of difficulties. Also in this case the secondary peak positions present a certain problem. However, the problem is not as essential, because we always find unstable $`\mathrm{}=1`$ modes on both sides of the radial modes. Furthermore, we have no argument why radial modes should always have higher amplitudes.
### 4.2 Resonant modes
The criterion for the instability of radial pulsation to excitation of a resonant nonradial mode may be written in the following form
$$\left(\frac{\delta R}{R}\right)^2>\frac{\sqrt{D^2+\kappa ^2}}{C},$$
(1)
where $`\delta R`$ is the amplitude of radius variations in radial pulsation; $`D`$ denotes the frequency distance between the radial and the nonradial mode; $`\kappa `$ is the damping rate of the nonradial mode in the limit cycle of the radial mode; and $`C`$ is the coupling coefficient. The criterion is from Van Hoolst et al. (1998); only the notation is different.
Evaluation of the quantities occurring on the r.h.s. of Eq. (1) requires, in principle, nonlinear calculations, which we have not done. However, for a crude evaluation of the probability that a radial mode of specified amplitude is unstable to parametric excitation of a nonradial mode, we need only the linear mode characteristics provided in Figs. 6 and 7 and certain coupling coefficients. These coefficients were evaluated by Van Hoolst et al. (1998) for the stellar model they selected. Here we rely on a simple scaling of their numbers, which we explain below.
Let $`𝒫_{\mathrm{}}(A)`$ denote the excitation probability at the radial mode amplitude $`A=\delta R/R`$. Then, if effects of rotation are ignored, we have
$$𝒫_{\mathrm{}}(A)=\{\begin{array}{cc}0\hfill & \text{if }A^4C^2\le \kappa ^2\hfill \\ \mathrm{Min}(1,\frac{1}{2}\frac{\sqrt{A^4C^2-\kappa ^2}}{\mathrm{\Delta }\omega })\hfill & \text{if }A^4C^2>\kappa ^2\hfill \end{array}$$
(2)
where $`\mathrm{\Delta }\omega =\omega _{\mathrm{},n-1}-\omega _{\mathrm{},n}`$ denotes the frequency distance between consecutive g-modes of degree $`\mathrm{}`$. Note that $`\mathrm{\Delta }\omega /2`$ is the maximum frequency distance between the radial mode and the nearest nonradial mode, and that $`\sqrt{A^4C^2-\kappa ^2}`$ is the distance at the onset of the instability.
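Equation (2) is straightforward to evaluate numerically. The sketch below is only an illustration (the function name and all argument values are placeholders, not from the paper); it returns the excitation probability for a single nonradial mode given the radial amplitude $`A`$, the coupling coefficient $`C`$, the damping rate $`\kappa `$, and the g-mode spacing $`\mathrm{\Delta }\omega `$ in consistent units.

```python
import math

def p_ell(A, C, kappa, delta_omega):
    """Excitation probability of Eq. (2) for a single degree:
    zero below the threshold A^4 C^2 <= kappa^2, otherwise
    min(1, sqrt(A^4 C^2 - kappa^2) / (2 delta_omega))."""
    x = A ** 4 * C ** 2 - kappa ** 2
    if x <= 0.0:
        return 0.0
    return min(1.0, math.sqrt(x) / (2.0 * delta_omega))
```

Below the threshold the mode cannot be excited at any detuning; above it, the probability grows with the width of the unstable frequency window relative to the g-mode spacing.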
The coupling coefficients, $`C`$, for various radial and nonradial mode pairs were explicitly calculated by Van Hoolst et al. (1998) for a model of an RR Lyrae star. From their numbers we found the approximate relation
$$C_{k,\mathrm{}}=b_k\frac{I_{0,k}}{I_{\mathrm{}}},$$
(3)
with
$$b_0=27\text{ and }b_1=172\text{ d}^{-1},$$
where $`k`$ denotes the radial mode order (here $`k=0`$ and 1 for the fundamental and the first overtone, respectively), and $`I`$ denotes the mode inertia evaluated assuming the same amplitude at the surface. Our additional simplification consists in adopting the same $`b_k`$ values for all our models.
The final simplification, which we adopted after Van Hoolst et al. (1998), is the assumption that $`\kappa =-\gamma _g`$, where $`-\gamma _g`$ is the damping rate due to dissipation in the g-mode propagation zone. This seems well justified because we consider the situation when the opacity-driven instability is saturated by the radial mode, and the resonant nonradial modes have almost the same properties in the outer layers. Consequently, there should also be an exact balance between driving and damping for the nonradial mode. Then $`-\gamma _g`$ is all that remains.
The joint probability of the instability is given by
$$𝒫(A)=1-\underset{\mathrm{}}{\prod }\left[1-𝒫_{\mathrm{}}(A)\right].$$
(4)
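Equation (4) is the complement of the probability that none of the candidate modes is excited. A minimal sketch (illustrative only; the input list would be filled with the $`𝒫_{\mathrm{}}`$ values of Eq. (2)):

```python
def joint_probability(p_by_ell):
    """Joint instability probability, Eq. (4): the radial pulsation is
    unstable if at least one nonradial mode is parametrically excited."""
    q = 1.0
    for p in p_by_ell:
        q *= 1.0 - p  # probability that this particular mode is NOT excited
    return 1.0 - q
```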
For rotating stars we have to consider modes of different azimuthal numbers and evaluate the probabilities $`𝒫_{\mathrm{},m}`$. The effect increases the probability of the resonant instability (Dziembowski et al. 1988).
Very much like Van Hoolst et al. (1998), we find the maximum probability of resonant excitation for an $`\mathrm{}=1`$ mode, both for the fundamental and the overtone radial modes, and then for an $`\mathrm{}=5`$ or 6 mode for the fundamental and an $`\mathrm{}=4`$ mode for the overtone. This is why we selected these $`\mathrm{}`$-values in Figs. 6 and 7. In addition, there are data for $`\mathrm{}=2`$ modes. Excitation of these modes is less likely than that of the $`\mathrm{}=1`$ modes because of their higher inertia. The data on $`I_2/I_0`$ are important for evaluating the effects of rotation and magnetic fields on radial pulsation. We will not discuss these effects here.
In Fig. 8 we present the results of the calculation of the excitation probabilities for modes of selected degrees which yield the dominant contribution to the joint probability. We chose $`A=0.075`$ for the fundamental mode and $`A=0.025`$ for the first overtone. These values correspond to the mean amplitudes of radius variations in RRab and RRc stars, respectively (e.g. Jones et al. 1988, Cacciari et al. 1989, Liu & Janes 1990, Jones et al. 1992). The lower probability of the first overtone instability is a direct consequence of the lower value of $`A`$.
The probability of the resonant instability in most cases increases with the pulsation period. The exception is the instability of the fundamental mode to higher-degree modes. In this case damping in the g-mode propagation zone plays an important role. The value of $`\kappa `$ increases with $`P_0`$ and $`\mathrm{}`$. This increase reduces the chances for the instability and ultimately prevents it (see Eq. 2).
At a typical amplitude of RRab stars the probability of excitation of an $`\mathrm{}=1`$ mode is between 0.25 and 0.5. This is not so different from the incidence of the Blazhko effect, which is estimated to be between 20% and 30%. The joint probability of the instability is always higher than 0.5, and close to 1 in most cases. However, excitation of modes with $`\mathrm{}>2`$ may not lead to amplitude modulation. The incidence of the Blazhko effect amongst RRc stars is lower. Kovacs et al. (1999), who analyzed data on a large sample of RRc stars from the LMC, found the effect in 1.4% of the objects. Our analysis suggests lower chances for the first overtone instability than for the fundamental mode, but not in such a disproportion.
## 5 Conclusions and discussion
Our survey shows that all RR Lyrae star models share the same qualitative properties of the nonradial modes. There is always a large number of unstable low-degree modes with frequencies close to unstable radial modes. However, owing to higher mode inertia, for most nonradial modes the driving rates are much lower than those for radial modes. The exceptions are the strongly trapped unstable (STU) modes, which begin at $`\mathrm{}`$ degrees 7 to 10 (depending on the model) at frequencies somewhat above the fundamental radial mode, and at $`\mathrm{}=5`$ or 6 at frequencies somewhat above the radial mode overtones. These modes are characterized by growth rates similar to those of radial modes. However, we argued that these modes are not likely candidates for the identification of the oscillations detected in some RR Lyrae stars. More likely candidates are the $`\mathrm{}=1`$ modes. Their driving rates are nearly an order of magnitude lower than those of radial modes, but it is well known that the growth rate is not necessarily a good predictor of the finite amplitude pulsation.
We also found that the parameters which determine the chances of the excitation of nonradial modes through the 1:1 resonance do not vary much over the range of RR Lyrae star parameters. According to our estimate, the excitation has a high probability. In fact, some nonradial modes should be excited in the majority of the RRab pulsators and in a significant fraction ($`\sim 30\%`$) of the RRc pulsators. The actual number should be greater because we ignored the effect of rotation. Our crude estimate, which we do not detail here, shows that the effect is significant already at equatorial velocities of a few km/s.
Why, then, is the incidence of anomalous behavior among RR Lyrae stars relatively low? We should stress that a significant amplitude modulation is not automatically implied by the nonradial mode excitation. If the nonlinear interaction between radial and nonradial modes leads to a steady pulsation with constant amplitude, then the presence of the nonradial mode will not be easily detectable. A Blazhko-type amplitude modulation may then arise only if the nonradial mode is not symmetric about the rotation axis and is of low degree. In this case the Blazhko period is equal to the rotation period. Another possibility is a periodic limit cycle in which the amplitudes of the two modes vary intrinsically, with the period determined by the nonlinear interaction.
The ultimate answer regarding the presence of nonradial modes in RR Lyrae stars may be expected only from spectroscopy. A signature of such modes should be searched for in the line-profile variations. Thus, high-resolution spectroscopic observations of amplitude-modulated RR Lyrae stars are encouraged.
Acknowledgements. The paper was supported by the KBN grant 2P03D00814. S.C. warmly thanks for the hospitality during his stay at the Copernicus Astronomical Center. We are grateful to Geza Kovacs for reading a preliminary version of this paper and for a number of useful suggestions.
## REFERENCES
* Alexander D.R. & Ferguson J.W. 1994, Astrophys. J., 437, 879.
* Cacciari C., Clementini G., & Busser R. 1989, Astron. Astrophys., 209, 145.
* Cassisi S., Castellani V., Degl’Innocenti S. & Weiss A. 1998, Astron. Astrophys. Suppl. Ser., 129, 267.
* Cassisi S., Castellani V., Degl’Innocenti S., Salaris M. & Weiss A. 1999, Astron. Astrophys. Suppl. Ser., 134, 103.
* Dziembowski, W. 1977, Acta Astron., 27, 95.
* Dziembowski, W., Królikowska, M. & Kosovitchev, A. 1988, Acta Astron., 38, 61.
* Grevesse N. 1991, in IAU Symp. 145, Evolution of Stars: the Photospheric Abundance Connection, ed. G. Michaud & A. Tutukov (Dordrecht: Kluwer), 63.
* Jones, R.V., Carney, B.W. & Latham, D.W. 1988, Astrophys. J., 326, 312.
* Jones, R.V., Carney, B.W., Storm, J. & Latham, D.W. 1992, Astrophys. J., 386, 646.
* Kovács G. 1993, in J.R. Buchler & H.E. Kandrup, eds, Stochastic Processes in Astrophysics, Annals of the New York Academy of Sciences, Vol. 706, 327.
* Kovács G. & MACHO Collaboration 1999, private communication.
* Liu, T. & Janes, K.A. 1990, Astrophys. J., 354, 273.
* Moskalik, P. 1999, private communication.
* Olech A., Kaluzny J., Thompson I.B., Pych W., Krzeminski W. & Schwarzenberg-Czerny A. 1999, submitted to Astron. J. (astro-ph/9812302).
* Olech A., Woźniak, Alard C., Kaluzny J. & Thompson I.B. 1999b, submitted to MNRAS (astro-ph/990565).
* Rogers F.J. & Iglesias C.A. 1992, Astrophys. J., 401, 361.
* Rogers F.J., Swenson F.J. & Iglesias C.A. 1996, Astrophys. J., 456, 902.
* Salaris M. & Cassisi S. 1996, Astron. Astrophys., 305, 858.
* Straniero O. 1988, Astron. Astrophys. Suppl. Ser., 76, 157.
* Van Hoolst T., Dziembowski W. & Kawaler S.D. 1998, MNRAS, 297, 536.
# Ordered phase and scaling in 𝑍_𝑛 models and the three-state antiferromagnetic Potts model in three dimensions
## I Introduction
The symmetry and the dimensionality are important factors in determining the universality class of critical phenomena. The $`O(2)`$ symmetry is the simplest among continuous symmetries, and statistical models with the $`O(2)`$ symmetry have been studied intensively. A natural question then would be the effect of the symmetry breaking from the continuous $`O(2)`$ to the discrete $`Z_n`$. A simple spin model with $`Z_n`$ symmetry is the $`n`$-state clock model with a Hamiltonian
$$H=-\underset{j,k}{\sum }\mathrm{cos}(\theta _j-\theta _k),$$
(1)
where $`j,k`$ runs over nearest-neighbor pairs, and $`\theta _j`$ takes integral multiples of $`2\pi /n`$. The standard XY model with $`O(2)`$ symmetry is defined by the Hamiltonian of the same form; the only difference is that $`\theta `$ takes continuous values.
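As a concrete illustration, the Hamiltonian of Eq. (1) can be simulated directly. The sketch below is not part of the original text; lattice size, temperature, and function names are illustrative, and a two-dimensional lattice is used for brevity (the 3D case only adds a third pair of neighbors). It implements a single-spin-flip Metropolis update for the $`n`$-state clock model with periodic boundaries.

```python
import math
import random

def clock_energy(spins, L, n):
    """Energy of the n-state clock model, Eq. (1), on an L x L periodic
    lattice; spins[i][j] is an integer in 0..n-1 (theta = 2*pi*spin/n)."""
    E = 0.0
    for i in range(L):
        for j in range(L):
            for di, dj in ((1, 0), (0, 1)):  # count each bond once
                k, m = (i + di) % L, (j + dj) % L
                E -= math.cos(2.0 * math.pi * (spins[i][j] - spins[k][m]) / n)
    return E

def metropolis_sweep(spins, L, n, T):
    """One Metropolis sweep at temperature T (with k_B = 1)."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        new = random.randrange(n)
        dE = 0.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            s = spins[(i + di) % L][(j + dj) % L]
            # E_new - E_old for this bond
            dE += (math.cos(2.0 * math.pi * (spins[i][j] - s) / n)
                   - math.cos(2.0 * math.pi * (new - s) / n))
        if dE <= 0.0 or random.random() < math.exp(-dE / T):
            spins[i][j] = new
```

Setting `n` large approximates the continuous XY limit, which is the comparison the text draws.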
The $`Z_n`$ symmetry is fundamentally different from $`O(2)`$ because of its discrete nature. On the other hand, for large $`n`$, it is natural to expect the $`Z_n`$ symmetry to have effects similar to those of the $`O(2)`$ symmetry. Understanding these two apparently contradictory aspects is an interesting problem. Besides the theoretical motivation, there are some possible experimental realizations of the effective $`Z_n`$ symmetry. For example, the stacked triangular antiferromagnetic Ising (STI) model with effective $`Z_6`$ symmetry may correspond to materials such as CsMnI<sub>3</sub>.
In two dimensions, the phase diagram of the $`Z_n`$ model is well understood in the framework of the renormalization group (RG). For $`n\ge 5`$, there is an intermediate phase between the low-temperature ordered phase with the spontaneously broken $`Z_n`$ symmetry and the high-temperature disordered phase. The intermediate phase is $`O(2)`$ symmetric and corresponds to the low-temperature phase of the XY model.
For the three-dimensional (3D) case, Blankschtein et al. proposed in 1984 an RG picture of the $`Z_6`$ models, to discuss the STI model. They suggested that the transition between the ordered and disordered phases belongs to the (3D) XY universality class, and that the ordered phase reflects the symmetry breaking to $`Z_6`$ in a large enough system. This means that there is no finite region of a rotationally symmetric phase similar to the ordered phase of the XY model. Unfortunately, their paper is apparently not widely known in the related fields. It might be partly because their discussion was very brief and not quite clear.
In fact, there has been a long-standing controversy on the three-state antiferromagnetic Potts (AFP) model on a simple cubic lattice, defined by the Hamiltonian
$$H=+\underset{j,k}{\sum }\delta _{\sigma _j\sigma _k},$$
(2)
where $`\sigma _j=0,1,2`$ and $`j,k`$ runs over nearest-neighbor pairs on a simple cubic lattice. The order parameter of this model is not evident. However, previous studies revealed that the low-temperature ordered phase, which is called the Broken Sublattice Symmetry (BSS) phase, corresponds to a spontaneous breaking of the $`Z_6`$ symmetry. Thus the effective symmetry of this model may be regarded as $`Z_6`$, although it is not apparent in the model. It is now widely accepted that there is a phase transition with critical exponents characterized by the 3D XY universality class, at temperature $`T_c\approx 1.23`$ (we set the Boltzmann constant $`k_B=1`$). On the other hand, according to numerical calculations, there appears to be an intermediate phase below $`T_c`$ and above the low-temperature phase. While there have been various proposals for the intermediate region, the most reliable numerical results at present indicate that the intermediate region appears to be a rotationally symmetric phase similar to the ordered phase of the 3D XY model. However, the “transition” between the intermediate region and the low-temperature phase is not well understood. According to the suggestion in Ref. , the intermediate “phase” would rather be a crossover to the low-temperature massive phase.
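For concreteness, the energy of Eq. (2) can be computed by counting equal nearest-neighbor pairs. The sketch below is illustrative only (lattice size and names are assumptions, not from the paper); a configuration alternating two of the three colors by site parity realizes a ground state with $`H=0`$, in line with the two-sublattice character of the BSS phase.

```python
def afp_energy(sigma):
    """Energy of the three-state AFP model, Eq. (2): +1 for every
    nearest-neighbor pair of equal spins on an L^3 periodic cubic lattice;
    sigma[i][j][k] is 0, 1 or 2."""
    L = len(sigma)
    E = 0
    for i in range(L):
        for j in range(L):
            for k in range(L):
                for di, dj, dk in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):  # each bond once
                    if sigma[i][j][k] == sigma[(i + di) % L][(j + dj) % L][(k + dk) % L]:
                        E += 1
    return E
```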
On the other hand, there has been a claim of an intermediate phase also in the $`6`$-state clock (6CL) model, which has the manifest $`Z_6`$ symmetry. In a recent detailed numerical study, Miyashita found that the intermediate region appears to have a rotationally symmetric character, as found in the AFP model. However, through a careful examination of the system size dependence, he concluded that it is just a crossover to the massive low-temperature phase, and that the rotationally symmetric XY phase does not exist in the thermodynamic limit. His conclusion is consistent with the suggestion in Ref. .
In this article, based on the RG picture, we derive a scaling law for an order parameter which measures the effect of the symmetry breaking from $`O(2)`$ to $`Z_n`$. We demonstrate that the Monte Carlo results on the AFP model in Ref. are consistent with the scaling law, supporting the RG picture with a single phase transition.
## II Renormalization-Group Picture
Since the discussion of the RG picture in Ref. was rather brief, it is worthwhile to present the RG picture here, with some clarifications and more details. We also make a straightforward extension from the $`n=6`$ case to general integer $`n`$.
A generic $`Z_n`$ symmetric model may be mapped, in the long-distance limit, to the following $`\mathrm{\Phi }^4`$-type field theory with the Euclidean action
$$S=\int d^3x\left[|\partial _\mu \mathrm{\Phi }|^2+u|\mathrm{\Phi }|^2+g|\mathrm{\Phi }|^4-\lambda _n(\mathrm{\Phi }^n+\overline{\mathrm{\Phi }}^n)\right]$$
(3)
with the complex field $`\mathrm{\Phi }`$ and its conjugate $`\overline{\mathrm{\Phi }}`$. The $`\lambda _n`$-term is the lowest-order term in $`\mathrm{\Phi }`$ which breaks the symmetry from $`O(2)`$ to $`Z_n`$. The phase transition corresponds to the vanishing of (the renormalized value of) the parameter $`u`$. The temperature $`T`$ in the $`Z_n`$ statistical system roughly corresponds to $`u`$ as $`u\propto T-T_c`$, where $`T_c`$ is the critical temperature.
In the absence of the symmetry breaking $`\lambda _n`$, the transition belongs to the so-called 3D XY universality class. Its stability under the symmetry breaking to $`Z_n`$ is determined by the scaling dimension of $`\lambda _n`$ at the 3D XY fixed point. It may be estimated with the standard $`ϵ`$-expansion method.
The lowest-order result in $`ϵ`$ can easily be obtained from the Operator Product Expansion (OPE) coefficients. As a result, we obtain the scaling dimension $`y_n`$ of $`\lambda _n`$ in $`4-ϵ`$ dimensions as
$$y_n=4-n+ϵ\left(\frac{n}{2}-1-\frac{n(n-1)}{10}\right)+O(ϵ^2).$$
(4)
$`y_n`$ is defined so that the effective strength of the perturbation $`\lambda _n(l)`$ at scale $`l`$ is proportional to $`l^{y_n}`$ near the XY fixed point. The case $`n=4`$ is actually the special case $`N=2`$ of the “cubic anisotropy” at the 3D $`O(N)`$ fixed point. Extrapolating the $`O(ϵ)`$ result to 3D ($`ϵ=1`$), we see that the $`Z_n`$ perturbation is irrelevant at the 3D XY fixed point for $`n\ge n_c`$. The threshold $`n_c`$ is estimated to be $`4`$ in $`O(ϵ)`$. In fact, $`n=2`$ and $`n=3`$ correspond to the 3D Ising and 3-state (ferromagnetic) Potts models, which do not belong to the XY universality class. Thus $`n_c`$ is expected to be at least $`4`$. This is consistent with the above $`O(ϵ)`$ result. However, extrapolating the lowest-order result in $`ϵ`$ to 3D ($`ϵ=1`$) is not quite reliable; the true value of $`n_c`$ might be larger than $`4`$. On the other hand, we can make the following observation. For $`n\ge 6`$, $`\lambda _n`$ is marginal or irrelevant at the 3D Gaussian fixed point ($`g=0`$). Thus it is natural to expect it to be irrelevant at the more stable 3D XY fixed point, namely $`n_c\le 6`$. In fact, the numerical observation of the 3D XY universality class in the 6CL and AFP models strongly suggests that $`\lambda _6`$ is irrelevant at the XY fixed point and hence $`n_c\le 6`$. In the following, we restrict the discussion to the irrelevant case $`n\ge n_c`$.
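For reference, the $`O(ϵ)`$ estimate can be evaluated directly. The sketch below (illustrative only, not part of the original analysis) implements the scaling-dimension formula above and, at $`ϵ=1`$, exhibits the sign change between $`n=3`$ and $`n=4`$ behind the estimate $`n_c=4`$.

```python
def y_n(n, eps=1.0):
    """O(eps) scaling dimension of the Z_n perturbation lambda_n at the
    XY fixed point; eps = 4 - d, so eps = 1 in three dimensions."""
    return 4.0 - n + eps * (n / 2.0 - 1.0 - n * (n - 1.0) / 10.0)

# At eps = 1: y_3 = +0.9 (relevant), y_4 = -0.2 (irrelevant), y_6 = -3.0,
# reproducing the O(eps) threshold n_c = 4 quoted in the text.
```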
For the $`O(2)`$ symmetric case $`\lambda _n=0`$, the low-temperature phase $`u<0`$ is renormalized to the low-temperature fixed point. It describes the massless Nambu-Goldstone (NG) modes on the ground state with the spontaneously broken $`O(2)`$ symmetry. Let us call the low-temperature fixed point the NG fixed point. In terms of field theory, it is described by the $`O(2)`$ sigma model (free massless boson field)
$$S=\int d^3x\frac{K}{2}(\partial _\mu \varphi )^2$$
(5)
where $`\varphi `$ is the angular variable, $`\mathrm{\Phi }=|\mathrm{\Phi }|e^{i\varphi }`$. Namely, only the angular mode $`\varphi `$ remains gapless as an NG boson. In three dimensions, the coupling constant $`K`$ renormalizes proportionally to the scale $`l`$ and goes to infinity in the low-energy limit. The coupling constant may be absorbed by using the rescaled field $`\theta =\sqrt{K}(\varphi -\varphi _0)`$, so that the action is always written as $`\int d^3x(\partial _\mu \theta )^2/2`$.
Now let us consider effects of the symmetry breaking $`\lambda _n`$. The symmetry breaking term can be written as $`\lambda _n(\mathrm{\Phi }^n+\overline{\mathrm{\Phi }}^n)=\lambda _n|\mathrm{\Phi }|^n\mathrm{cos}n\varphi `$. Using the rescaled field $`\theta `$, the total effective action at scale $`l`$ becomes
$$S=\int d^3x\frac{1}{2}(\partial _\mu \theta )^2-\lambda _nK^3\int d^3x\mathrm{cos}[n(\varphi _0+\frac{\theta }{\sqrt{K}})],$$
(6)
where the factor $`K^3\propto l^3`$ comes from the scale transformation of the integration measure. In the thermodynamic limit, we should take the $`K\to \infty `$ limit. Physically, this means that the $`O(2)`$ symmetry is spontaneously broken, so that the angle is fixed to some value $`\varphi _0`$ in a single infinite system. Then the Taylor expansion of the cosine in $`\theta /\sqrt{K}`$ becomes valid:
$$K^3\mathrm{cos}[n(\varphi _0+\frac{\theta }{\sqrt{K}})]=\sum _{j=0}^{\infty }c_jK^{3-j/2}\theta ^j,$$
(7)
where
$`c_{2k}`$ $`=`$ $`(-1)^k{\displaystyle \frac{n^{2k}}{(2k)!}}\mathrm{cos}n\varphi _0,`$ (8)
$`c_{2k+1}`$ $`=`$ $`(-1)^{k+1}{\displaystyle \frac{n^{2k+1}}{(2k+1)!}}\mathrm{sin}n\varphi _0,`$ (9)
for a nonnegative integer $`k`$. The five terms $`j=1,\mathrm{},5`$ are relevant perturbations. For any value of $`\varphi _0`$, some of the coefficients $`c_j`$ of these relevant terms are non-vanishing. We therefore conclude that, unlike in the 2D case, the $`Z_n`$ perturbation is relevant at the NG fixed point for any value of $`n`$. We emphasize that this conclusion is universal in three dimensions and independent of the microscopic model. In short, the $`Z_n`$ perturbation gives a mass to the pseudo-NG boson $`\theta `$, which would be a massless NG boson in the absence of the perturbation. In contrast, in two dimensions the coupling constant $`K`$ of the free boson field theory is dimensionless, and the above argument does not apply. This is related to the absence of spontaneous breaking of a continuous symmetry in two dimensions.
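The coefficients $`c_j`$ can be checked against a direct Taylor expansion; a minimal symbolic sketch for $`n=6`$ (using sympy, with $`t`$ standing for $`\theta /\sqrt{K}`$, and assuming the standard alternating signs of the cosine series):

```python
import sympy as sp

phi0, t = sp.symbols('phi0 t')   # t plays the role of theta / sqrt(K)
n = 6
expr = sp.expand(sp.series(sp.cos(n * (phi0 + t)), t, 0, 6).removeO())

for j in range(6):
    cj = expr.coeff(t, j)
    k = j // 2
    if j % 2 == 0:
        # even terms multiply cos(n*phi0)
        cj_expected = (-1)**k * n**(2*k) / sp.factorial(2*k) * sp.cos(n*phi0)
    else:
        # odd terms multiply sin(n*phi0) with the opposite alternation
        cj_expected = (-1)**(k+1) * n**(2*k+1) / sp.factorial(2*k+1) * sp.sin(n*phi0)
    assert sp.simplify(cj - cj_expected) == 0
```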
We now have a global picture of the RG flow as shown in Fig. 1. The phase transition between the ordered phase and the disordered phase is governed by the XY fixed point. This means that the critical exponents are identical to those of the XY model, which is consistent with the numerical results. In the disordered phase above $`T_c`$, there is no essential effect of the $`Z_n`$ perturbation. However, the nature of the ordered phase is more interesting. The $`Z_n`$ perturbation $`\lambda _n`$ is eventually enhanced in the ordered phase below $`T_c`$. This means that the entire region below $`T_c`$ belongs to the massive phase with spontaneously broken $`Z_n`$ symmetry. There is no rotationally symmetric intermediate phase, unlike in the 2D case. Only a precisely $`O(2)`$-symmetric model with $`\lambda _n=0`$ is renormalized to the NG fixed point below $`T_c`$, corresponding to the rotationally symmetric low-temperature phase.
An interesting aspect of the RG flow diagram is that the $`Z_n`$ perturbation is irrelevant at the 3D XY fixed point but relevant at the low-temperature NG fixed point. This could be related to a nontrivial system-size dependence found in a Monte Carlo Renormalization Group calculation. For $`T`$ slightly less than $`T_c`$, the symmetry-breaking perturbation $`\lambda _n`$ is renormalized to a small value by the RG flow, and remains small until the flow reaches the neighborhood of the NG fixed point. This means that the mass of the pseudo-NG boson is suppressed by the fluctuation effect. At a finite scale (for example, in a finite-size system), the ordered phase near $`T_c`$ is very similar to the low-temperature phase of the XY model. This naturally explains the numerical observation of an apparently rotationally symmetric “phase” in the 6CL and AFP models. For larger $`n`$, the mass is more strongly suppressed, and the low-temperature side of the transition appears $`O(2)`$ symmetric until the system size becomes very large. However, for any finite $`n`$, the low-temperature side of the transition $`T<T_c`$ is neither truly massless nor $`O(2)`$ symmetric in the thermodynamic limit, as already pointed out.
## III Scaling law in the ordered phase
Based on the RG picture, we derive a scaling law for an order parameter $`𝒪_n`$ which characterizes the symmetry breaking from $`O(2)`$ to $`Z_n`$. There are various possible definitions of $`𝒪_n`$. For the 6CL model, Miyashita numerically measured an order parameter $`\mathrm{\Delta }`$ which corresponds to the effective barrier height. For the AFP model, Heilmann, Wang and Swendsen studied $`\varphi _6`$, the Fourier transform of the angle distribution density of average spins. The following considerations apply to both cases.
For large enough $`L`$ and $`T`$ slightly lower than $`T_c`$, we divide the RG flow into three stages, as shown in Fig. 2:
* (i) The RG flow near the 3D XY fixed point. The symmetry breaking $`\lambda _n`$ is irrelevant, and is renormalized proportionally to $`l^{-|y_n|}`$ at length scale $`l`$.
* (ii) The RG flow from the neighborhood of the 3D XY fixed point to the NG fixed point. For simplicity, we assume that the symmetry breaking $`\lambda _n`$ is unchanged in this stage.
* (iii) The RG flow near the NG fixed point. Here $`\lambda _n`$ is relevant, giving a mass to the NG boson.
The length scale $`l_c`$, at which the crossover from Stage (i) to (ii) occurs, is given by $`l_c\approx \text{const.}(T_c-T)^{-\nu }`$, where $`\nu `$ is the correlation length exponent of the 3D XY universality class. Thus, at the crossover,
$$\lambda _n\approx \text{const.}(T_c-T)^{\nu |y_n|}.$$
(10)
This also gives the effective value of the perturbation $`\lambda _n`$ at the crossover from Stage (ii) to Stage (iii).
In the presence of the $`Z_n`$ perturbation, the spin configuration would be dominated by ordered regions separated by domain walls in a large system. The free energy cost of the domain walls is proportional to their area, which scales as $`L^2`$ for system size $`L`$. Therefore the effective “barrier height” is proportional to $`L^2`$. Combining this with eq. (10), we conclude that the order parameter is a function of a single scaling variable:
$$𝒪_n=f(cL^2(T_c-T)^{\nu |y_n|}),$$
(11)
where $`c`$ is a constant. The function $`f`$ is universal, but of course depends on the definition of $`𝒪_n`$. While only the scaling with $`L^2`$ was used in Ref. , we find that the temperature dependence of the order parameter is also governed by the scaling. Interestingly, the exponent $`\nu |y_n|`$ is completely determined by the 3D XY fixed point.
## IV Comparison with the numerical results
In the numerical study of the AFP model, the authors claimed the existence of an intermediate phase, in which the order parameter $`\varphi _6`$ is very small even for relatively large lattices (up to $`L=64`$). Here, however, we re-analyze their data to demonstrate the scaling relation (11), and hence the validity of the RG picture. In Fig. 2, we show the data taken from Fig. 3 of Ref. . We chose $`\nu |y_6|=4.8`$ to give the best scaling. The data for various temperatures and system sizes collapse remarkably well onto a single curve as a function of the scaling variable $`x=cL^2(T_c-T)^{\nu |y_n|}`$. This supports the proposed scaling relation (11). Furthermore, if we approximate the effective potential by $`-x\mathrm{cos}6\varphi `$, the scaling function is given by
$$f(x)=\frac{\int 𝑑\varphi \mathrm{cos}(6\varphi )e^{x\mathrm{cos}(6\varphi )}}{\int 𝑑\varphi e^{x\mathrm{cos}(6\varphi )}}=\frac{I_1(x)}{I_0(x)},$$
(12)
where $`I_n`$ is the modified Bessel function. Choosing $`c=0.025`$, the scaled data agree with this simple function rather well. We note that the data appear to deviate from the scaling law for small $`x`$. This may be due to insufficient system size $`L`$ or the relatively large statistical errors.
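The scaling function and scaling variable are easy to evaluate numerically; a minimal sketch (assuming scipy, and using the fitted values $`c=0.025`$, $`\nu |y_6|=4.8`$ quoted above together with the AFP critical temperature quoted later in the text):

```python
from scipy.special import iv

def f(x):
    """Scaling function f(x) = I_1(x) / I_0(x) of Eq. (12)."""
    return iv(1, x) / iv(0, x)

def scaling_variable(L, T, Tc=1.23, c=0.025, nu_y6=4.8):
    """x = c L^2 (Tc - T)^(nu |y_6|); parameter values are the fitted ones."""
    return c * L**2 * (Tc - T)**nu_y6

# limiting behavior: f(x) ~ x/2 for small x, f(x) -> 1 for large x
assert abs(f(0.1) - 0.05) < 1e-3
assert 0.98 < f(50.0) < 1.0
```

Since $`f`$ is monotonic, larger systems at fixed $`T<T_c`$ slide toward the saturated, fully $`Z_6`$-broken end of the curve.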
We emphasize that the present scaling relation is strong evidence of a single phase transition at the temperature $`T_c`$. In contrast, the scaling of the “spontaneous magnetization” $`\rho =|\mathrm{\Phi }|\sim (T_c-T)^\beta `$ does not distinguish between our picture and the “intermediate phase” scenario of Ref. .
On the other hand, the scaling function $`f(x)`$ for $`\mathrm{\Delta }`$ in Ref. is linear in $`x`$ by definition. He indeed found that $`\mathrm{\Delta }`$ scales with $`L^2`$; however, he did not discuss the temperature dependence. We have attempted to analyze the data in Figs. 6 and 7 of Ref. , and find that they are roughly consistent with our scaling law (11) with the exponent $`\nu |y_6|\approx 4`$. The estimate is difficult because only a small number of temperature points are available in Ref. . According to our picture, the exponent $`\nu |y_6|`$ is a universal quantity determined by the 3D XY universality class. Considering the available data, the above results on the AFP and 6CL models are consistent with the universality hypothesis, although not conclusive. It would be interesting to obtain more numerical data on these models to check our scaling law (11).
The exponent $`\nu `$ has been determined as $`\nu \approx 0.67`$ for the 3D XY universality class. Combining this with the above estimates of $`\nu |y_6|`$, $`|y_6|`$ is estimated to be about $`6`$. Unfortunately, the irrelevant eigenvalue $`y_6`$ has not been much discussed in the literature. The lowest-order result (4) of the $`ϵ`$-expansion gives $`|y_6|=3`$, which is not quite consistent with the numerical estimate. However, it is perhaps not surprising to obtain an inaccurate result at the lowest order of the $`ϵ`$-expansion. It would be interesting to carry out the calculation to higher orders in $`ϵ`$, or to estimate $`y_6`$ by other means.
## V Conclusion and Discussions
In this article, we clarified the RG picture of the phase structure of 3D $`Z_n`$-symmetric models, which was introduced earlier by Blankschtein et al.. There is no finite region of an intermediate phase with a (spontaneously broken) $`O(2)`$ symmetry, but only a crossover to a massive phase where the discreteness of $`Z_n`$ is relevant. Based on the RG picture, we have derived a scaling law for the order parameter in the 3D $`Z_n`$ models. The existing Monte Carlo data on the AFP model, which were used to claim the intermediate phase, were shown to be consistent with the scaling law. Thus we conclude that the RG picture is valid for the AFP model, and there is only one transition, at $`T_c\approx 1.23`$, with the 3D XY universality class.
We would like to make a few final remarks. Firstly, we note that the RG argument used in the present article does not exclude a transition of a universality class other than XY, because only the local stability of the XY fixed point was discussed. It is possible that a lattice model with $`Z_n`$ symmetry is renormalized to another (unknown) RG fixed point. Actually, it appears somewhat controversial whether the transition of the STI model belongs to the XY universality class. On the other hand, the available numerical results strongly support the conclusion that the 6CL and AFP models at the critical temperature are renormalized into the XY fixed point. Once the transition is known to be of the XY universality class, the RG picture and the scaling law discussed in this article should apply to the ordered phase, for temperatures slightly below the critical point.
Secondly, as discussed in Ref. , the “bare” value of $`\lambda _n`$ (at a small length scale) may have the opposite sign in some circumstances. Namely, the minima and maxima of the potential of $`\varphi `$ are swapped. In such a case, the ordered phase may correspond to the Permutationally Symmetric Sublattice (PSS) phase proposed in Ref. for the AFP model, or to the Incompletely Ordered Phase (IOP) proposed in Ref. for the 6CL model. In the vicinity of the critical point, the temperature dependence of the bare $`\lambda _n`$ is not essential, because the leading dependence on the temperature is determined by the critical effect, as shown in eq. (11). However, it may be important in a wider temperature range. In particular, if the bare $`\lambda _n`$ changes sign at some temperature $`T_L`$ lower than $`T_c`$, we would have a transition at $`T_L`$. Such a transition would be controlled by the NG fixed point. The existing numerical data indicate that there is no such phase transition in the standard AFP model on the simple cubic lattice or in the standard 6CL model. However, such a transition may be possible in some modified models. In fact, Blankschtein et al. argued that it exists in the STI model.
###### Acknowledgements.
I would like to thank Hikaru Kawamura, Macoto Kikuchi, Ryo Kishi, Seiji Miyashita and Yohtaro Ueno for useful discussions. This work is supported in part by Grant-in-Aid for Scientific Research from the Ministry of Education, Culture and Science of Japan.
# Conformality from Field-String Duality on Abelian Orbifolds
## Abstract
If the standard model is embedded in a conformal theory, what is the simplest possibility? We analyse all abelian orbifolds with discrete symmetry $`Z_p`$ for $`p\leq 7`$, and find that the simplest such theory is indeed $`SU(3)^7`$. Such a theory predicts the correct electroweak unification ($`\mathrm{sin}^2\theta \approx 0.231`$). A color coupling $`\alpha _C(M)\approx 0.07`$ suggests a conformal scale $`M`$ near 10 TeV.
preprint: July 1999 IFP-774-UNC BI-TP 99/18 hep-th/9907051
Since, in the context of field-string duality, there has been a shift regarding the relationship of gravity to the standard model of strong and electroweak interactions, we shall begin by characterising how gravity fits in, and then suggest more specifically how the standard model fits into the string framework.
The descriptions of gravity and of the standard model are contained in the string theory. In the string picture in ten spacetime dimensions, or upon compactification to four dimensions, there is a massless spin-two graviton but the standard model is not manifest in the way we shall consider it. In the conformal field theory extension of the standard model, gravity is strikingly absent. The field-string duality does not imply that the standard model already contains gravity and, in fact, it does not.
The situation is not analogous to the Regge-pole/resonance duality (despite a misleading earlier version of this introduction!). That quite different duality led to the origin of string theory and originated from the realization phenomenologically that adding Regge pole and resonance descriptions is double counting and that the two descriptions are dual in that stronger sense. The duality between the field and string descriptions is not analogous because the CFT description does not contain gravity. A first step to combining gravity with the standard model would be adding the corresponding lagrangians.
In the field theory description used in this article, one will simply ignore the massless spin-two graviton. Indeed, since we are using the field theory description only below the conformal scale of $`1`$ TeV (or, as suggested later in this paper, 10 TeV) and forgoing any requirement of grand unification, the hierarchy between the weak scale and theory-generated scales like $`M_{GUT}`$ or $`M_{PLANCK}`$ is resolved. Moreover, the problem of seeking the graviton in the field theory description is possibly resolvable by going to a higher dimension and restricting the range of the higher dimension. Here we are looking only at the strong and weak interactions at accessible energies below, say, 10 TeV.
Of course, if we ask questions in a different regime, for example about scattering of particles with center-of-mass energy of the order $`M_{PLANCK}`$ then the graviton will become crucial and a string, rather than a field, description will be the viable one.
It is important to distinguish between the holographic description of the five-dimensional gravity in $`(AdS)_5`$ made by the four-dimensional CFT and the origin of the four-dimensional graviton. The latter could be described holographically only by a lower three-dimensional field theory which is not relevant to the real world. Therefore the graviton of our world can only arise by compactification of a higher dimensional graviton. Introduction of gravity must break conformal invariance and it is an interesting question (which I will not answer!) whether this breaking is related to the mass and symmetry-breaking scales in the low-energy theory. That is all I will say about gravity in the present paper; the remainder is on the standard model and its embedding in a CFT.
An alternative to conformality, grand unification with supersymmetry, leads to an impressively accurate gauge coupling unification. In particular, it predicts an electroweak mixing angle at the Z-pole, $`\mathrm{sin}^2\theta =0.231`$. This result may, however, be fortuitous. Rather than abandon gauge coupling unification, we can rederive $`\mathrm{sin}^2\theta =0.231`$ in a different way by embedding the electroweak $`SU(2)\times U(1)`$ in $`SU(N)\times SU(N)\times SU(N)`$ to find $`\mathrm{sin}^2\theta =3/13\approx 0.231`$. This will be a common feature of the models in this paper.
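The arithmetic behind $`\mathrm{sin}^2\theta =3/13`$ is elementary; a quick check (using the coupling relations $`g_Y^2=(3/5)g_1^2`$ and $`g_2^2=2g_1^2`$ of the trinification normalization discussed in step (9) below):

```python
from fractions import Fraction

g1sq = Fraction(1)              # overall scale cancels in the ratio
gYsq = Fraction(3, 5) * g1sq    # g_Y^2 = (3/5) g_1^2
g2sq = 2 * g1sq                 # g_2^2 = 2 g_1^2

sin2theta = gYsq / (g2sq + gYsq)
assert sin2theta == Fraction(3, 13)
print(float(sin2theta))  # 0.23076923076923078
```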
The conformal theories will be finite, without quadratic or logarithmic divergences. This requires appropriate equal numbers of fermions and bosons which can cancel in loops, and which occur without the necessity of space-time supersymmetry. As we shall see in one example, it is possible to combine spacetime supersymmetry with conformality, but the latter is the driving principle and the former is merely an option: additional fermions and scalars are predicted by conformality in the TeV range, but in general these particles are different and distinguishable from supersymmetric partners. The boson-fermion cancellation is essential for the cancellation of infinities, and will play a central role in the calculation of the cosmological constant (not discussed here). In the field picture, the cosmological constant measures the vacuum energy density.
What is needed first for the conformal approach is a simple model and that is the subject of this paper.
Here we shall focus on abelian orbifolds characterised by the discrete group $`Z_p`$. Non-abelian orbifolds will be systematically analysed elsewhere.
The steps in building a model for the abelian case (parallel steps hold for non-abelian orbifolds) are:
* (1) Choose the discrete group $`\mathrm{\Gamma }`$. Here we are considering only $`\mathrm{\Gamma }=Z_p`$. We define $`\alpha =\mathrm{exp}(2\pi i/p)`$.
* (2) Choose the embedding of $`\mathrm{\Gamma }\subset SU(4)`$ by assigning $`\mathrm{𝟒}=(\alpha ^{A_1},\alpha ^{A_2},\alpha ^{A_3},\alpha ^{A_4})`$ such that $`\sum _{q=1}^{4}A_q=0(\mathrm{mod}p)`$. To break $`𝒩=4`$ supersymmetry to $`𝒩=0`$ (or $`𝒩=1`$) requires that none (or exactly one) of the $`A_q`$ be equal to zero (mod p).
* (3) For chiral fermions one requires that $`\mathrm{𝟒}\ne \mathrm{𝟒}^{}`$ for the embedding of $`\mathrm{\Gamma }`$ in $`SU(4)`$.
The chiral fermions are in the bifundamental representations of $`SU(N)^p`$
$$\sum _{i=1}^{p}\sum _{q=1}^{4}(N_i,\overline{N}_{i+A_q})$$
(1)
If $`A_q=0`$ we interpret $`(N_i,\overline{N}_i)`$ as a singlet plus an adjoint of $`SU(N)_i`$.
* (4) The 6 of $`SU(4)`$ is real: $`\mathrm{𝟔}=(a_1,a_2,a_3,-a_1,-a_2,-a_3)`$ with $`a_1=A_1+A_2`$, $`a_2=A_2+A_3`$, $`a_3=A_3+A_1`$ (recall that all components are defined modulo p). The complex scalars are in the bifundamentals
$$\sum _{i=1}^{p}\sum _{j=1}^{3}(N_i,\overline{N}_{i\pm a_j})$$
(2)
The condition in terms of the $`a_j`$ for $`𝒩=0`$ is $`\sum _{j=1}^{3}(\pm a_j)\ne 0(\mathrm{mod}p)`$.
* (5) Choose the $`N`$ of $`\prod _iSU(Nd_i)`$ (where the $`d_i`$ are the dimensions of the representations of $`\mathrm{\Gamma }`$). For the abelian case, where $`d_i\equiv 1`$, it is natural to choose $`N=3`$, the largest $`SU(N)`$ of the standard model (SM) gauge group. For a non-abelian $`\mathrm{\Gamma }`$ with $`d_i\ne 1`$ the choice $`N=2`$ would be indicated.
* (6) The $`p`$ quiver nodes are identified as color (C), weak isospin (W) or a third $`SU(3)`$ (H). This specifies the embedding of the gauge group $`SU(3)_C\times SU(3)_W\times SU(3)_H\subset SU(N)^p`$.
This quiver node identification is guided by (7), (8) and (9) below.
* (7) The quiver node identification is required to give three chiral families under Eq. (1). It is sufficient for three of the $`(C+A_q)`$ nodes to be W and the fourth H, given that there is only one C quiver node, so that there are three $`(3,\overline{3},1)`$. Provided that $`(\overline{3},3,1)`$ is avoided by the $`(C-A_q)`$ nodes being H, the remainder of the three-family trinification will follow automatically from chiral anomaly cancellation. Actually, a sufficient condition for three families has been given; it is necessary only that the difference between the number of $`(C+A_q)`$ nodes and the number of $`(C-A_q)`$ nodes which are W be equal to three.
* (8) The complex scalars of Eq. (2) must be sufficient for their vacuum expectation values (VEVs) to spontaneously break $`SU(3)^p\to SU(3)_C\times SU(3)_W\times SU(3)_H\to SU(3)_C\times SU(2)_W\times U(1)_Y\to SU(3)_C\times U(1)_Q`$.
Note that, unlike grand unified theories (GUTs) with or without supersymmetry, the Higgs scalars are here prescribed by the conformality condition. This is more satisfactory because it implies that the Higgs sector cannot be chosen arbitrarily, but it does make model building more interesting.
* (9) Gauge coupling unification should apply at least to the electroweak mixing angle $`\mathrm{sin}^2\theta =g_Y^2/(g_2^2+g_Y^2)\approx 0.231`$. For trinification $`Y=3^{-1/2}(\lambda _{8W}+2\lambda _{8H})`$, so that $`(3/5)^{1/2}Y`$ is correctly normalized. If we make $`g_Y^2=(3/5)g_1^2`$ and $`g_2^2=2g_1^2`$, then $`\mathrm{sin}^2\theta =3/13\approx 0.231`$ with sufficient accuracy.
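The family counting in step (7) is mechanical enough to automate; a minimal sketch (the helper `family_count` is ours, not part of the original analysis, and assumes a single C node):

```python
def family_count(p, A, nodes):
    """Net number of chiral families from Eq. (1): the number of W nodes
    among the (C + A_q) minus the number among the (C - A_q)."""
    c = nodes.index('C')          # position of the single C node on the quiver
    plus = sum(1 for a in A if nodes[(c + a) % p] == 'W')
    minus = sum(1 for a in A if nodes[(c - a) % p] == 'W')
    return plus - minus

# the three successful p = 7 identifications found below each give 3 families
assert family_count(7, (1, 1, 2, 3), list('CWHWHHH')) == 3   # (7B)
assert family_count(7, (1, 3, 5, 5), list('CWHHHWH')) == 3   # (7D)
assert family_count(7, (1, 4, 4, 5), list('CHHHWWH')) == 3   # (7E)
```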
In the remainder of this paper we address all these steps for the choice $`\mathrm{\Gamma }=Z_p`$ for successive $`p=2,3,\mathrm{}`$ up to $`p=7`$, and then add some concluding remarks.
* p = 2
In this case $`\alpha =-1`$ and therefore one cannot construct any complex 4 of $`SU(4)`$ with $`\mathrm{𝟒}\ne \mathrm{𝟒}^{}`$. Chiral fermions are therefore impossible.
* p = 3
The only possibilities are $`A_q=(1,1,1,0)`$ or $`A_q=(1,1,-1,-1)`$. The latter is real and leads to no chiral fermions. The former leaves $`𝒩=1`$ supersymmetry and is a simple three-family model with the quiver node identification C - W - H. The scalars $`a_j=(1,1,1)`$ are sufficient to spontaneously break to the SM. Gauge coupling unification is, however, missing, since $`\mathrm{sin}^2\theta =3/8`$, in bad disagreement with experiment.
* p = 4
The only complex $`𝒩=0`$ choice is $`A_q=(1,1,1,1)`$. But then $`a_j=(2,2,2)`$, and any quiver node identification such as C - W - H - H has four families, and the scalars are insufficient to spontaneously break the symmetry down to the SM gauge group.
* p = 5
The two inequivalent complex choices are $`A_q=(1,1,1,2)`$ and $`A_q=(1,3,3,3)`$. By drawing the quiver, however, and using the rules for three chiral families given in (7) above, one finds that the node identification together with the prescribed scalars, $`a_j=(2,2,2)`$ and $`a_j=(1,1,1)`$ respectively, does not permit spontaneous breaking to the standard model.
* p = 6
Here we can discuss three inequivalent complex possibilities as follows:
(6A) $`A_q=(1,1,1,3)`$ which implies $`a_j=(2,2,2)`$.
Requiring three families means a node identification C - W - X - H - X - H, where X is either W or H. But whatever we choose for the X, the scalar representations are insufficient to break $`SU(3)^6`$ in the desired fashion down to the SM. This illustrates the difficulty of model building when the scalars are not in arbitrary representations.
(6B) $`A_q=(1,1,2,2)`$ which implies $`a_j=(2,3,3)`$.
Here the family number can only be zero, two or four, as can be seen by inspection of the $`A_q`$ and the related quiver diagram. So (6B) is of no phenomenological interest.
(6C) $`A_q=(1,3,4,4)`$ which implies $`a_j=(1,1,4)`$.
Requiring three families needs a quiver node identification of the form either C - W - H - H - W - H or C - H - H - W - W - H. The scalar representations implied by $`a_j=(1,1,4)`$ are, however, easily seen to be insufficient for the required spontaneous symmetry breaking (S.S.B.) for both of these identifications.
* p =7
Having been stymied mainly by the rigidity of the scalar representations for all $`p\leq 6`$, for $`p=7`$ we find the first cases which work. Six inequivalent complex embeddings of $`Z_7\subset SU(4)`$ require consideration.
(7A) $`A_q=(1,1,1,4)\Rightarrow a_j=(2,2,2)`$
For the required nodes C - W - X - H - H - X - H the scalars are insufficient for S.S.B.
(7B) $`A_q=(1,1,2,3)\Rightarrow a_j=(2,3,3)`$
The node identification C - W - H - W - H - H - H leads to a successful model.
(7C) $`A_q=(1,2,2,2)\Rightarrow a_j=(3,3,3)`$
Choosing C - H - W - X - X - H - H to derive three families, the scalars fail in S.S.B.
(7D) $`A_q=(1,3,5,5)\Rightarrow a_j=(1,1,3)`$
The node choice C - W - H - H - H - W - H leads to a successful model. This is Model A of .
(7E) $`A_q=(1,4,4,5)\Rightarrow a_j=(1,2,2)`$
The nodes C - H - H - H - W - W - H are successful.
(7F) $`A_q=(2,4,4,4)\Rightarrow a_j=(1,1,1)`$
Scalars insufficient for S.S.B.
The three successful models (7B), (7D) and (7E) lead to $`\alpha _3(M)\approx 0.07`$. Since $`\alpha _3(1\mathrm{T}\mathrm{e}\mathrm{V})\approx 0.10`$, this suggests a conformal scale $`M\approx 10`$ TeV. The above models have fewer generators than an $`E(6)`$ GUT, and thus $`SU(3)^7`$ merits further study. It is possible, and under investigation, that non-abelian orbifolds will lead to a simpler model.
For such field theories it is important to establish the existence of a fixed manifold with respect to the renormalization group. It could be a fixed line but is more likely, in the $`𝒩=0`$ case, a fixed point. It is known that in the $`N\to \infty `$ limit the theories become conformal, but although this ’t Hooft limit is where the field-string duality is derived, we know that finiteness survives to finite $`N`$ in the $`𝒩=4`$ case, and this makes it plausible that at least a conformal point occurs also for the $`𝒩=0`$ theories with $`N=3`$ derived above.
The conformal structure cannot by itself predict all the dimensionless ratios of the standard model such as mass ratios and mixing angles because these receive contributions, in general, from soft breaking of conformality. With a specific assumption about the pattern of conformal symmetry breaking, however, more work should lead to definite predictions for such quantities.
The hospitality of Bielefeld University is acknowledged while this work was done. The work was supported in part by the US Department of Energy under Grant No. DE-FG02-97ER-41036.
# Exotic ground states and impurities in multiband superconductors
## Abstract
We consider the effect of isotropic impurity scattering on the exotic superconducting states that arise from the usual BCS mechanism in substances of cubic and hexagonal symmetry where the Fermi surface contains inequivalent but degenerate pockets (e.g. around several points of high symmetry). As examples we look at CeCo<sub>2</sub>, CeRu<sub>2</sub>, and LaB<sub>6</sub>; all of which have such Fermi surface topologies and the former exhibits unconventional superconducting behavior. We find that while these non $`s`$-wave states are suppressed by non-magnetic impurities, the suppression is much weaker than would be expected for unconventional superconductors with isotropic non-magnetic impurity scattering.
Recently it was shown that in substances with cubic or hexagonal symmetry, exotic superconducting states can arise from conventional BCS mechanisms . This can occur when these metals have Fermi surface (FS) pockets that are centered at or around some three-fold or higher degenerate symmetry points of the Brillouin zone (BZ). The resulting superconducting state is either a conventional $`s`$-wave state or corresponds to a multidimensional superconducting representation for which the ground state breaks time reversal symmetry. Such FS topologies exist in many superconductors. Examples are CeRu<sub>2</sub> , LaB<sub>6</sub> , and CeCo<sub>2</sub> ; the latter exhibits a low-temperature specific heat that appears to vary as a power law with temperature, which is consistent with unconventional superconductivity. Here we investigate the stability of these exotic superconducting states in the presence of non-magnetic isotropic impurity scattering. In particular, we examine in detail a model that applies both to CeCo<sub>2</sub> and to the FS pockets centered at the six N-points of a body-centered cubic (BCC) lattice, and discuss the results of the analogous theory for CeRu<sub>2</sub> and LaB<sub>6</sub>. We show that the exotic superconducting states are less susceptible to non-magnetic impurities than might be expected from earlier theories of isotropic impurity scattering in single-band unconventional superconductors .
A BCS approximation for the multi-band case will be used (see e.g. ). The form of the interaction parameters describing the two-electron scattering on and between the different degenerate pockets of the FS is fixed by symmetry. Consequently, the resulting superconducting state need not be $`s`$-wave when the interaction does not depend on the direction of the Fermi momenta within an individual pocket. In our earlier work we considered three cases in detail: a) three FS pockets centered about the X-points of a simple cubic lattice; b) three FS pockets at the M-points of the hexagonal lattice; c) four FS pockets at the L-points of the face-centered cubic (FCC) lattice. Case a) may describe the superconducting states in LaB<sub>6</sub> and in CeRu<sub>2</sub>, since these materials both have FS pockets centered at the X-point.
It has recently been demonstrated that the specific heat exhibits a power-law temperature dependence at low temperatures in CeCo<sub>2</sub> . For this reason we consider here FS pockets centered at the six N-points of the BCC lattice, which is formally identical to the situation in FCC CeCo<sub>2</sub> for the FS pockets located along the $`(1,1,0)`$ and equivalent directions (note that in this material there also exist FS sheets centered at the $`\mathrm{\Gamma }`$ point).
The Hamiltonian for several separate pieces of the FS can be written in the following form:
$$H=\sum _{\alpha \sigma 𝐩}ϵ(𝐩)a_{\alpha \sigma }^{\dagger }(𝐩)a_{\alpha \sigma }(𝐩)+\frac{1}{2}\sum _{𝐤,𝐤^{},𝐪}\sum _{\alpha \beta \sigma \sigma ^{}}\lambda _{\alpha \beta }(𝐪)a_{\alpha \sigma }^{\dagger }(𝐤+𝐪)a_{\beta \sigma ^{}}^{\dagger }(𝐤^{}-𝐪)a_{\alpha \sigma ^{}}(𝐤^{})a_{\beta \sigma }(𝐤),$$
(1)
where $`\sigma `$ and $`\sigma ^{}`$ are spin indices, $`\lambda _{\alpha \beta }(𝐪)`$ includes the interaction for scattering two electrons from the pocket $`\alpha `$ into pocket $`\beta `$ which is due to both Coulomb and electron-phonon terms. For simplicity we will take $`\lambda _{\alpha ,\beta }(𝐪)`$ to be independent of the direction of $`𝐪`$. Scattering by impurities is described by the Hamiltonian
$$H_{imp}=\sum _\sigma \sum _{𝐤,𝐪}\sum _{\alpha ,\beta }\sum _j\mathrm{exp}[i(𝐪-𝐤)𝐑_j]V_{\alpha ,\beta }(𝐤,𝐪)a_{\beta \sigma }^{\dagger }(𝐤)a_{\alpha \sigma }(𝐪).$$
(2)
Introducing the anomalous Green’s function $`\widehat{\mathcal{F}}_\alpha (x-x^{})`$ for each FS sheet $`\alpha `$, the corresponding Gor’kov equations can be found for the case of singlet pairing in exactly the same manner as in Ref. . Including the isotropic impurity scattering within the Born approximation, the following Gor’kov equations are found
$`[i\omega _n-ϵ_\alpha (𝐤)-\overline{𝒢}_\alpha (\omega )]𝒢_\alpha (𝐤,i\omega _n)+[\mathrm{\Delta }_\alpha +\overline{\mathcal{F}}_\alpha (\omega )]\mathcal{F}_\alpha ^{\dagger }(𝐤,i\omega _n)=1`$ (3)
$`[i\omega _n+ϵ_\alpha (𝐤)+\overline{𝒢}_\alpha (\omega )]\mathcal{F}_\alpha ^{\dagger }(𝐤,i\omega _n)+[\mathrm{\Delta }_\alpha ^{*}+\overline{\mathcal{F}}_\alpha ^{*}(\omega )]𝒢_\alpha (𝐤,i\omega _n)=0`$ (4)
where
$`\overline{𝒢}_\alpha (\omega )=n_i\sum _{𝐤,\beta }|u_{\alpha ,\beta }|^2𝒢_\beta (𝐤,i\omega _n)`$ (5)
$`\overline{\mathcal{F}}_\alpha (\omega )=n_i\sum _{𝐤,\beta }|u_{\alpha ,\beta }|^2\mathcal{F}_\beta (𝐤,i\omega _n)`$ (6)
$`\mathrm{\Delta }_\alpha ^{*}=T\sum _{𝐤,\beta }\sum _n\lambda _{\beta \alpha }\mathcal{F}_\beta ^{\dagger }(\omega _n,𝐤)`$ (7)
$`|u_{\alpha ,\beta }|^2=\int \frac{d\mathrm{\Omega }}{4\pi }\int \frac{d\mathrm{\Omega }^{}}{4\pi }|V_{\alpha ,\beta }(𝐤_F,𝐤_F^{})|^2`$ and $`n_i`$ is the concentration of impurities. Introducing $`\stackrel{~}{\mathrm{\Delta }}_{\alpha ,n}=\mathrm{\Delta }_\alpha +\overline{\mathcal{F}}_\alpha (\omega )`$ and $`i\stackrel{~}{\omega }_{\alpha ,n}=i\omega _n-\overline{𝒢}_\alpha (\omega )`$, the gap and self-energy equations can be expressed as
$$\mathrm{\Delta }_\alpha =-T\pi \sum _{\beta ,n}\frac{N_0\lambda _{\alpha ,\beta }\stackrel{~}{\mathrm{\Delta }}_{\beta ,n}}{\sqrt{\stackrel{~}{w}_{n,\beta }^2+|\stackrel{~}{\mathrm{\Delta }}_{\beta ,n}|^2}},$$
(8)
$$\stackrel{~}{w}_{n,\alpha }=w_n+\underset{\beta }{}\frac{\mathrm{\Gamma }_{\alpha ,\beta }\stackrel{~}{w}_{n,\beta }}{\sqrt{\stackrel{~}{w}_{n,\beta }^2+|\stackrel{~}{\mathrm{\Delta }}_\beta |^2}},$$
(9)
and
$$\stackrel{~}{\mathrm{\Delta }}_{n,\alpha }=\mathrm{\Delta }_\alpha +\underset{\beta }{}\frac{\mathrm{\Gamma }_{\alpha ,\beta }\stackrel{~}{\mathrm{\Delta }}_{n,\beta }}{\sqrt{\stackrel{~}{w}_{n,\beta }^2+|\stackrel{~}{\mathrm{\Delta }}_{\beta ,n}|^2}}$$
(10)
where $`\mathrm{\Gamma }_{\alpha ,\beta }=\pi N_0|u_{\alpha ,\beta }|^2`$, and $`N_0`$ is the normal-state density of states on a single pocket. Equations (8), (9), and (10) have been studied in the context of multi-band superconductivity by a variety of authors (see, e.g., ).
We consider here FS pockets centered at the six N points of the BCC lattice. This situation is formally identical to that describing the FS pockets in FCC CeCo<sub>2</sub> located along the (110) and equivalent directions. The N points of the BCC lattice lie at $`\frac{1}{2}𝐛_i`$ and $`\frac{1}{2}(𝐛_i-𝐛_j)`$ with $`i>j`$, where the $`𝐛_i`$ are the reciprocal lattice basis vectors \[$`𝐛_1=\frac{2\pi }{a}(0,1,1)`$, $`𝐛_2=\frac{2\pi }{a}(1,0,1)`$, and $`𝐛_3=\frac{2\pi }{a}(1,1,0)`$\]. The interaction between the electrons forming a Cooper pair on Fermi pockets centered at these points takes the form
$$V=\left(\begin{array}{cccccc}\lambda & \nu & \mu & \nu & \nu & \nu \\ \nu & \lambda & \nu & \mu & \nu & \nu \\ \mu & \nu & \lambda & \nu & \nu & \nu \\ \nu & \mu & \nu & \lambda & \nu & \nu \\ \nu & \nu & \nu & \nu & \lambda & \mu \\ \nu & \nu & \nu & \nu & \mu & \lambda \end{array}\right)$$
(11)
where $`\lambda `$ is the interaction on the same pocket, $`\mu `$ couples two pockets connected by a $`\frac{4\pi }{a}(1,0,0)`$, $`\frac{4\pi }{a}(0,1,0)`$, or a $`\frac{4\pi }{a}(0,0,1)`$ translation, and $`\nu `$ characterizes the remaining couplings between nearest neighbor N points. The matrix for the impurity scattering (with elements $`\mathrm{\Gamma }_{\alpha ,\beta }`$) has the same form and will be described by elements $`\mathrm{\Gamma }_0,\mathrm{\Gamma }_1`$, and $`\mathrm{\Gamma }_2`$ replacing $`\lambda `$, $`\nu `$, and $`\mu `$ respectively.
Consider first the clean limit, for which the linearized gap equation is
$$\mathrm{\Delta }_\alpha ^{}=-\sum _\beta N_0\lambda _{\alpha \beta }\mathrm{\Delta }_\beta ^{}\mathrm{ln}\left(\frac{2\gamma \omega _D}{\pi T_c}\right).$$
(12)
The six $`\mathrm{\Delta }_\alpha `$ transform among themselves under cubic symmetry transformations, forming a 6D reducible representation of the cubic group $`O_h`$ which splits into a $`1D`$ $`A_{1g}`$, a $`2D`$ $`E_g`$, and a $`3D`$ $`F_{2g}`$ irreducible representation. These three representations correspond to different order parameters with three critical temperatures:
$`T_{c0,A}`$ $`=`$ $`{\displaystyle \frac{2\gamma \omega _D}{\pi }}\mathrm{exp}\left[{\displaystyle \frac{1}{N_0(\lambda +\mu +4\nu )}}\right]`$ (13)
$`T_{c0,E}`$ $`=`$ $`{\displaystyle \frac{2\gamma \omega _D}{\pi }}\mathrm{exp}\left[{\displaystyle \frac{1}{N_0(\lambda +\mu 2\nu )}}\right]`$ (14)
$`T_{c0,F}`$ $`=`$ $`{\displaystyle \frac{2\gamma \omega _D}{\pi }}\mathrm{exp}\left[{\displaystyle \frac{1}{N_0(\lambda \mu )}}\right]`$ (15)
where the factors in the exponentials must be negative for a non-zero transition temperature. When $`\nu >0`$ or $`2\nu +\mu >0`$ (i.e. for repulsive inter-pocket interactions) the higher dimensional representations have the higher $`T_c`$. The basis wave function for the 1D $`A_{1g}`$ identity representation is
$$l=(\mathrm{\Delta }_1+\mathrm{\Delta }_2+\mathrm{\Delta }_3+\mathrm{\Delta }_4+\mathrm{\Delta }_5+\mathrm{\Delta }_6)/\sqrt{6},$$
(16)
the basis wave functions for the 2D $`E_g`$ representation can be chosen as
$`\eta _1`$ $`=`$ $`(\mathrm{\Delta }_1+ϵ\mathrm{\Delta }_2+\mathrm{\Delta }_3+ϵ\mathrm{\Delta }_4+ϵ^2\mathrm{\Delta }_5+ϵ^2\mathrm{\Delta }_6)/\sqrt{6}`$ (17)
$`\eta _2`$ $`=`$ $`(\mathrm{\Delta }_1+ϵ^2\mathrm{\Delta }_2+\mathrm{\Delta }_3+ϵ^2\mathrm{\Delta }_4+ϵ\mathrm{\Delta }_5+ϵ\mathrm{\Delta }_6)/\sqrt{6}`$ (18)
where $`ϵ=\mathrm{exp}(2\pi i/3)`$, and the basis wave functions for the 3D $`F_{2g}`$ representation can be chosen as
$`\eta _x`$ $`=`$ $`(\mathrm{\Delta }_5-\mathrm{\Delta }_6)/\sqrt{2}`$ (19)
$`\eta _y`$ $`=`$ $`(\mathrm{\Delta }_1-\mathrm{\Delta }_3)/\sqrt{2}`$ (20)
$`\eta _z`$ $`=`$ $`(\mathrm{\Delta }_2-\mathrm{\Delta }_4)/\sqrt{2}.`$ (21)
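As a quick consistency check, the three combinations appearing in the exponents of (13)–(15) are precisely the eigenvalues of the interaction matrix (11), with multiplicities 1, 2 and 3. A short numerical sketch (the coupling values are arbitrary illustrations, not fitted to any material):

```python
import numpy as np

lam, mu, nu = -0.5, 0.2, 0.1   # hypothetical couplings; lam < 0 is attractive intra-pocket
# Interaction matrix of Eq. (11) for the six N-point pockets
V = np.array([[lam, nu,  mu,  nu,  nu,  nu ],
              [nu,  lam, nu,  mu,  nu,  nu ],
              [mu,  nu,  lam, nu,  nu,  nu ],
              [nu,  mu,  nu,  lam, nu,  nu ],
              [nu,  nu,  nu,  nu,  lam, mu ],
              [nu,  nu,  nu,  nu,  mu,  lam]])

evals = np.sort(np.linalg.eigvalsh(V))
# A_1g: lam+mu+4nu (x1); E_g: lam+mu-2nu (x2); F_2g: lam-mu (x3)
expected = np.sort([lam + mu + 4 * nu] + [lam + mu - 2 * nu] * 2 + [lam - mu] * 3)
```

The eigenvector of the non-degenerate eigenvalue is the uniform vector, matching the $`A_{1g}`$ combination in (16).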
Following Refs. , the Ginzburg–Landau functionals for the 2D and 3D representations are found to be
$$\delta F_E/N_0=\frac{T-T_{c0,E}}{T_{c0,E}}(|\eta _1|^2+|\eta _2|^2)+\frac{7\zeta (3)}{96\pi ^2T_{c0,E}^2}(|\eta _1|^4+|\eta _2|^4+4|\eta _1|^2|\eta _2|^2)$$
(22)
$$\delta F_F/N_0=\frac{T-T_{c0,F}}{T_{c0,F}}(\stackrel{}{\eta }\cdot \stackrel{}{\eta }^{})+\frac{7\zeta (3)}{32\pi ^2T_{c0,F}^2}(|\eta _x|^4+|\eta _y|^4+|\eta _z|^4).$$
(23)
Eq. (22) for the $`E_g`$ representation implies that $`(\eta _1,\eta _2)=(1,0)`$ is a stable ground state. Following the notation of Ref. this corresponds to the superconducting class $`O(D_2)`$. The properties of such a ground state are the same as those found in the analogous calculation for FS pockets centered at the three X points of a simple cubic lattice, which is also the situation that applies to both LaB<sub>6</sub> and CeRu<sub>2</sub> (we refer to Ref. for more details). Eq. (23) for the $`F_{2g}`$ representation implies the ground state solution $`(\eta _x,\eta _y,\eta _z)=[1,\mathrm{exp}(i\varphi _1),\mathrm{exp}(i\varphi _2)]`$, where $`\varphi _1`$ and $`\varphi _2`$ are arbitrary phase factors. The degeneracy arising from the phases $`\varphi _1`$ and $`\varphi _2`$ is an artifact of the BCS theory and is not lifted by higher order terms in the free energy functional. In the notation of Ref. the free energy of Eq. (23) places the system on the boundary of two phases with superconducting classes $`D_3\times R`$ and $`D_3(E)`$. A similar situation arose for FS pockets centered at the four L points of a face centered cubic lattice, where a degeneracy was found between solutions corresponding to the classes $`D_4^{(2)}(D_2)\times R`$ and $`D_4(E)`$ of the $`F_{2g}`$ representation . The presence of a FS centered at the $`\mathrm{\Gamma }`$ point lifts this degeneracy. As a result the solution $`(\eta _x,\eta _y,\eta _z)=(1,ϵ,ϵ^2)`$ corresponding to the magnetic class $`D_3(E)`$ is likely. This class allows ferromagnetism, and it will have point nodes on the $`\mathrm{\Gamma }`$-centered FS (the N-point-centered FS pockets will have no nodes) .
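The ground states quoted for (22) and (23) can be checked by brute-force minimization over the moduli $`|\eta _i|^2`$; the quadratic and quartic coefficients below are placeholder values for some temperature below $`T_{c0}`$:

```python
import numpy as np

t, beta = -1.0, 1.0                       # illustrative GL coefficients, t < 0 below T_c

# E_g functional (22): depends only on the moduli x_i = |eta_i|^2 >= 0
x = np.linspace(0.0, 1.0, 201)
x1, x2 = np.meshgrid(x, x, indexing="ij")
fE = t * (x1 + x2) + beta * (x1**2 + x2**2 + 4 * x1 * x2)
i, j = np.unravel_index(np.argmin(fE), fE.shape)
eg_min = (x[i], x[j])                     # one component vanishes: the (1,0)-type state

# F_2g functional (23): the quartic term has no cross couplings
xc = np.linspace(0.0, 1.0, 101)
X1, X2, X3 = np.meshgrid(xc, xc, xc, indexing="ij")
fF = t * (X1 + X2 + X3) + beta * (X1**2 + X2**2 + X3**2)
k = np.unravel_index(np.argmin(fF), fF.shape)
f2g_min = (xc[k[0]], xc[k[1]], xc[k[2]])  # equal moduli: (1, e^{i phi1}, e^{i phi2})
```

The $`E_g`$ minimum puts all weight in one component, while the $`F_{2g}`$ quartic term, having no cross couplings, favors equal moduli with the relative phases left undetermined.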
Now consider the effect of including impurity scattering on $`T_{c,A},T_{c,F},T_{c,E}`$. It is convenient to define the following relaxation times
$`(2\tau _A)^{-1}=`$ $`\mathrm{\Gamma }_0+4\mathrm{\Gamma }_1+\mathrm{\Gamma }_2`$ (24)
$`(2\tau _E)^{-1}=`$ $`\mathrm{\Gamma }_0+\mathrm{\Gamma }_2-2\mathrm{\Gamma }_1`$ (25)
$`(2\tau _F)^{-1}=`$ $`\mathrm{\Gamma }_0-\mathrm{\Gamma }_2`$ (26)
The lifetime $`\tau _A`$ determines the elastic mean free path, while the lifetimes $`\tau _E`$ and $`\tau _F`$ cannot be easily measured. Solution of the linearized equations indicates that the transition temperatures $`T_{c,i}`$ are given by
$$\mathrm{ln}\frac{T_{c0,i}}{T_{c,i}}=\psi \left(\frac{1}{2}+\frac{\tau _A^{-1}-\tau _i^{-1}}{4\pi T_{c,i}}\right)-\psi \left(\frac{1}{2}\right)$$
(27)
where $`\psi (x)`$ is the digamma function. From this expression it is clear that $`T_{c,A}`$ is not suppressed by non-magnetic impurities while $`T_{c,E}`$ and $`T_{c,F}`$ are suppressed. Note that if $`\mathrm{\Gamma }_1=\mathrm{\Gamma }_2=0`$ (there is only intra-pocket impurity scattering) then none of the three transition temperatures is suppressed by impurities. It is only inter-pocket scattering that reduces $`T_{c,E}`$ and $`T_{c,F}`$ from $`T_{c0,E}`$ and $`T_{c0,F}`$. Consequently, the suppression of $`T_c`$ as the impurity concentration is increased is not as rapid as might be expected from previous analyses of unconventional single-band superconductors . Furthermore, an impurity-induced transition is possible from the non-$`s`$-wave superconducting states to the $`s`$-wave superconducting state when $`\lambda <-4\nu -\mu `$, where negative $`\lambda `$ corresponds to an attractive intra-pocket interaction and either $`\nu >0`$ or $`2\nu +\mu >0`$. The observation of an abrupt change in the dependence of $`T_c`$ on impurity concentration corresponding to this phase transition (i.e. $`T_c`$ initially decreasing with impurity concentration in the exotic state and then becoming impurity independent in the $`s`$-wave state) may help to identify such exotic superconducting states.
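Equation (27) is transcendental but straightforward to solve numerically. The sketch below (units with $`T_{c0,i}=1`$; the pair-breaking values are purely illustrative) builds the digamma difference from its series representation and bisects for $`T_{c,i}`$:

```python
import numpy as np

_k = np.arange(200_000) + 0.5
def dpsi(x):
    """psi(1/2 + x) - psi(1/2) from its series; adequate for moderate x."""
    return float(np.sum(x / (_k * (_k + x))))

def tc(rho, tc0=1.0):
    """Bisect Eq. (27) for T_c; rho = 1/tau_A - 1/tau_i (illustrative units, tc0 = 1)."""
    f = lambda T: np.log(tc0 / T) - dpsi(rho / (4.0 * np.pi * T))
    lo, hi = 1e-8, tc0
    if f(lo) < 0.0:
        return 0.0          # pair breaking beyond the critical value: T_c driven to zero
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

suppressed = [tc(rho) for rho in (0.0, 0.2, 0.5, 1.0)]
```

For purely intra-pocket scattering the pair-breaking combination $`\tau _A^{-1}-\tau _i^{-1}`$ vanishes and the solver returns $`T_{c0}`$ unchanged, in line with the discussion above.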
The results of the above analysis may be directly applied to CeCo<sub>2</sub> and predict that, due to the presence of the $`\mathrm{\Gamma }`$-centered FS sheet, the low temperature specific heat will vary as $`T^3`$ for the exotic states considered (with superconducting classes $`O(D_2)`$ or $`D_3(E)`$). Note that initial measurements appear more consistent with a $`T^2`$ behavior; however, these measurements only extend down to $`T/T_c\approx 0.13`$, which may not yet be sufficiently low to extract the low temperature behavior reliably. For the materials LaB<sub>6</sub> and CeRu<sub>2</sub> the FS pockets are centered at the X points, for which the exotic superconducting state belongs to the 2D $`E_g`$ representation described in detail in Ref. . Impurity scattering will have qualitatively the same effect on this state as on the exotic states examined above.
In conclusion, we have considered the effect of isotropic non-magnetic impurity scattering on the exotic superconducting states that arise in a BCC lattice when the FS forms pockets centered at the six N points. We have found that increasing the impurity concentration does not suppress these non-$`s`$-wave states as rapidly as would be expected from earlier studies of impurity scattering in unconventional superconductors. These results are not specific to FS pockets centered at the reciprocal-lattice N points of a BCC lattice but will apply to any degenerate set of FS pockets.
We would like to thank Z.Fisk, D. Khokhlov, J.R. Schrieffer, and the members of the NHMFL condensed matter theory group seminar for useful discussions and comments. This work was supported by the National High Magnetic Field Laboratory through NSF cooperative agreement No. DMR-9527035 and the State of Florida.
## 1 Introduction
Effective field theories have found increasing interest as powerful tools for describing the dynamics of physical systems where global symmetries are spontaneously broken and continuous order-parameter fields represent the relevant low-energy degrees of freedom. Depending on space dimensionality and on the manifold on which the fields live, classical localized static solutions may fall into topologically distinct classes characterized by integer winding numbers, carrying energy, momentum and internal properties which suggest their interpretation as particle-like excitations of the uniform ground state. Their spatial size is determined by the scaling properties of different competing terms in the effective lagrangian.
Quantization of effective field theories allows one to assign proper quantum numbers to quasi-particle properties, identify excited states of quasi-particles, and calculate loop corrections to the classical results for observable quantities, which may be crucial for experimental verification. The evaluation of loop corrections necessarily brings about the need for renormalization. Although effective field theories generally might be non–renormalizable in the strict sense, they may still be renormalized order by order in terms of a gradient expansion. A well-known example is chiral perturbation theory (ChPT), applied successfully in hadron physics. In the vacuum sector such an expansion is truncated by allowing only for external momenta small compared to the underlying scale of the theory. However, in the soliton sector the soliton itself constitutes an ”external” field with gradients comparable to the scale of the theory, which cannot be made small by definition. Therefore, in dealing with solitons, the problem of truncating the expansion can only be solved by the ad hoc assumption that the renormalized couplings of the higher gradient terms are small.
A prominent example for the application of this program is provided by 3D-$`SU(N_f)`$ skyrmions in $`N_f`$-flavor meson fields, where the topological charge is identified with baryon number, with impressive success for baryonic properties, resonances, and meson-baryon dynamics. Similarly, the conjecture to consider 2D-$`O(3)`$ spin textures as charged quasi-particles in ferromagnetic quantum Hall systems and antiferromagnetic high–temperature superconductors , with the topological winding density identified with the deviation of the electron density from its uniform background value, suggests a corresponding investigation. For magnons and vortices with unit charge such attempts have been presented for ferromagnets and antiferromagnets . The field theoretical approach put forward here is related rather to the antiferromagnet, due to its preserved time–reversal invariance, which makes the analysis fairly similar to that of relativistic systems. The ferromagnet, where time–reversal invariance is broken, would require consideration of the Landau–Lifshitz dynamics. The evaluation of the Casimir energies involves bound–state energies and a sum over scattering phase–shifts . This also provides complete information about resonant excited states in the continuum, which in the 3D-$`SU(N_f)`$ case successfully describe well-established baryon resonances.
For the general outline of the necessary steps and technique we discuss the case where the second-order exchange energy is taken in the time–reversal invariant form of the non–linear sigma model as it naturally appears in relativistic field theories. In two dimensions it is scale invariant and therefore irrelevant for the spatial extent of the static localized solution. To fix the soliton size two more terms are needed. We use the standard fourth-order Skyrme term, and a symmetry-breaking coupling of the $`O(3)`$-order parameter to an anisotropy field. The static part of the Skyrme term represents a local approximation to the Coulomb energy of the charged excitations; in our field-theoretic example we keep also the time-derivative parts of it. So our discussion will mainly serve a demonstrative purpose as a model field theory.
We shall specifically address the question of the binding energy of doubly charged quasi-particles. For 3D-$`SU(N_f)`$ skyrmions this question has a long history since it was discovered that in the winding number $`B=2`$ sector the static lowest-energy solution is bound with respect to decay into two separate $`B=1`$ skyrmions and displays only axial symmetry in contrast to the radially symmetric hedgehog skyrmions. There has been much discussion about the physical relevance (in the deuteron) of such a torus configuration, and it was speculated that quantum corrections might reverse the sign of the binding energy $`E(B=2)-2E(B=1)`$. Evaluation of loop corrections to the 3D-$`SU(2)`$ $`B=2`$ torus is a formidable task which to our knowledge has not yet been achieved. It is interesting that the same situation occurs for the 2D-$`O(3)`$ skyrmions: the classical $`n=2`$ solution shows a ring-like density distribution and is bound with respect to decay into two individual $`n=1`$ skyrmions. In this case, however, the evaluation of loop corrections for both configurations is of comparable complexity, and we will show that they in fact reverse the sign of the binding energy.
Actually, there is an ongoing discussion about the binding of 2-skyrmions also in condensed matter applications. However, it should be stressed that for ferromagnetic systems the time-dependent part of the effective lagrangian has to be replaced by the T-violating Landau-Lifshitz form with only one time derivative. For antiferromagnets the time-derivative coupling of the staggered spin to the magnetic field, which does not contribute to the static stabilization, should be included . And, in any case, for quantitative conclusions, the non–local character of the Coulomb energy should be respected .
## 2 Static solutions
The Lagrangian of the $`O(3)`$ model in $`2+1`$ dimensions is conveniently written in terms of the $`3`$-component field $`𝚽`$ satisfying the constraint $`𝚽\cdot 𝚽=1`$,
$$ℒ=\frac{f^2}{2}\partial _\mu 𝚽\cdot \partial ^\mu 𝚽-\frac{1}{4e^2}(\partial _\mu 𝚽\times \partial _\nu 𝚽)^2-f^2m^2(1-\mathrm{\Phi }_3).$$
(1)
The three terms represent the non–linear sigma (N$`\mathrm{}\sigma `$) model, the Skyrme and the potential term. There exists a conserved topological current
$$T^\mu =\frac{1}{8\pi }ϵ^{\mu \nu \rho }𝚽\cdot (\partial _\nu 𝚽\times \partial _\rho 𝚽),\partial _\mu T^\mu =0.$$
(2)
The radially symmetric hedgehog ansatz, written in polar coordinates $`(r,\phi )`$
$$𝚽=\left(\begin{array}{c}\mathrm{sin}F(r)\mathrm{cos}n\phi \\ \mathrm{sin}F(r)\mathrm{sin}n\phi \\ \mathrm{cos}F(r)\end{array}\right),$$
(3)
corresponds to winding number
$$\int d^2r\,T^0=\frac{n}{2}[\mathrm{cos}F(\mathrm{\infty })-\mathrm{cos}F(0)]=n,T^0=-\frac{nF^{\prime }\mathrm{sin}F}{4\pi r}$$
(4)
and the soliton’s size may be defined according to
$$<r^2>_n=\frac{1}{n}\int d^2r\,r^2T^0=-\frac{1}{4\pi }\int d^2r\,rF^{\prime }\mathrm{sin}F.$$
(5)
It is convenient to absorb the length $`1/\sqrt{fem}`$ which sets the scale for the size of localized structures into the dimensionless spatial coordinate $`x=\sqrt{fem}r`$. The static energy functional connected with the lagrangian (1) is then given by
$$E_n^{class}=\frac{f^2}{2}\int d^2x\left[F^{\prime 2}+\frac{n^2s^2}{x^2}+a\frac{n^2F^{\prime 2}s^2}{x^2}+2a(1-c)\right]=4\pi f^2E_n^0(a),$$
(6)
with the abbreviations $`s=\mathrm{sin}F`$ and $`c=\mathrm{cos}F`$. Technically, $`a=m/fe`$ is the only nontrivial parameter, while $`4\pi f^2`$ sets the overall energy scale. We shall therefore present results for different values of $`a`$. The limit $`a\to 0`$ describes the pure Belavin–Polyakov (BP) solution . The opposite limit, $`a\to \mathrm{\infty }`$, is technically also well defined; it describes a system without spin-spin aligning force.
The Euler–Lagrange equation following from the variation of the energy functional
$$\frac{1}{x}\left(xF^{\prime }\right)^{\prime }-\frac{n^2sc}{x^2}+a\frac{n^2s}{x}\left(\frac{F^{\prime }s}{x}\right)^{\prime }-as=0$$
(7)
is a second order non–linear differential equation which is solved subject to the boundary conditions
$$F(0)=\pi ,F(x\to \mathrm{\infty })=0.$$
(8)
In principle the boundary condition at the origin could, according to (4), be $`F(0)=\nu \pi `$ with $`\nu =1,3,\mathrm{}`$ any odd integer; however, it turns out that the solution with $`\nu =1`$ has the smallest static energy.
The hedgehog profiles and the corresponding energy densities for $`a^2=0.1`$ are shown in Fig. 1 for the 1- and 2-skyrmions. It is noticed that the maximum energy density for the $`n=2`$ soliton is not located at the origin but on a ring of radius $`x\approx 1`$.
The classical energies and square radii are given in Table 1.
In the two limiting cases, $`a\to 0`$ and $`a\to \mathrm{\infty }`$, the differential equation (7) may be solved analytically (see appendix). The corresponding static energies and radii are
$`E_n^0=n,`$ $`<x^2>_n=\sqrt{{\displaystyle \frac{2n}{3}}(n^2-1)}{\displaystyle \frac{\pi /n}{\mathrm{sin}(\pi /n)}}`$ $`a<<1`$
$`E_n^0=4na/3,`$ $`<x^2>_n=4n/3`$ $`a>>1.`$ (9)
It is noticed that for the BP solution with $`a\to 0`$ the 1-skyrmion’s radius diverges, which is due to the particular choice of the potential term in (1). This divergence is perceptible only for extremely small values of $`a`$; for $`a\stackrel{>}{_{}}0.001`$ the radius of the 1-soliton is still smaller than that of the 2-soliton, which stays finite in the limit $`a\to 0`$ (for a more thorough discussion of this problem we refer to the appendix). The dependence of the energies on the dimensionless parameter $`a`$ is shown in Fig. 2. Independently of this parameter, $`E_2^0<2E_1^0`$ always holds: the 2-skyrmion is classically stable against decay into two 1-skyrmions.
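The profile equation (7), once solved for $`F^{\prime \prime }`$, can be integrated by a simple shooting procedure. The sketch below reproduces the topological charge and the classical binding of the 2-skyrmion for $`a^2=0.1`$; the step sizes, cutoffs and bisection bracket are ad hoc choices:

```python
import math

def profile_energy(n, a, xmax=10.0, dx=2e-3):
    """Shoot on F(0)=pi, F(inf)=0 for Eq. (7); return E_n^0 of Eq. (6) and the charge."""
    def fpp(x, F, G):  # F'' isolated from Eq. (7); G = F'
        s, c = math.sin(F), math.cos(F)
        num = (-G / x + n * n * s * c / x**2 - a * n * n * G * G * s * c / x**2
               + a * n * n * G * s * s / x**3 + a * s)
        return num / (1.0 + a * n * n * s * s / x**2)

    def integrate(b, collect=False):
        x = 1e-2                                  # regular branch near 0: F ~ pi - b x^n
        F, G = math.pi - b * x**n, -n * b * x**(n - 1)
        traj = [(x, F, G)]
        while x < xmax:
            k1F, k1G = G, fpp(x, F, G)
            k2F = G + 0.5 * dx * k1G
            k2G = fpp(x + 0.5 * dx, F + 0.5 * dx * k1F, G + 0.5 * dx * k1G)
            k3F = G + 0.5 * dx * k2G
            k3G = fpp(x + 0.5 * dx, F + 0.5 * dx * k2F, G + 0.5 * dx * k2G)
            k4F = G + dx * k3G
            k4G = fpp(x + dx, F + dx * k3F, G + dx * k3G)
            F += dx / 6.0 * (k1F + 2 * k2F + 2 * k3F + k4F)
            G += dx / 6.0 * (k1G + 2 * k2G + 2 * k3G + k4G)
            x += dx
            if collect:
                traj.append((x, F, G))
            if F <= 0.0:
                return +1, traj                   # overshoot: b too large
            if G >= 0.0:
                return -1, traj                   # profile turns back up: b too small
        return -1, traj

    lo, hi = 0.02, 40.0                           # bisect the shooting parameter b
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        (hi, lo) = (mid, lo) if integrate(mid)[0] > 0 else (hi, mid)
    _, traj = integrate(0.5 * (lo + hi), collect=True)

    def dens(x, F, G):                            # integrand of Eq. (6), times x
        s = math.sin(F)
        return x * (G * G + n * n * s * s / x**2
                    + a * n * n * G * G * s * s / x**2 + 2 * a * (1 - math.cos(F)))

    E = sum(0.5 * (dens(*p) + dens(*q)) * (q[0] - p[0])
            for p, q in zip(traj, traj[1:]))
    charge = 0.5 * n * (math.cos(traj[-1][1]) - math.cos(traj[0][1]))
    return 0.25 * E, charge

a = math.sqrt(0.1)
E1, q1 = profile_energy(1, a)
E2, q2 = profile_energy(2, a)
```

Both energies lie above the Bogomolny bound $`E_n^0>n`$, and the 2-skyrmion comes out bound against decay into two 1-skyrmions, as stated above.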
Hedgehog solutions with higher topological charge, $`n\ge 3`$, are not stable (see subsection 3.3 for their stability) and do not represent the minimum energy configuration of the corresponding sector. The 3-soliton’s minimum energy configuration is not rotationally symmetric; it is a linear molecule which consists of three distorted 1-solitons. Higher multi–solitons are then supposedly molecules of stable 2-skyrmions (and 1-skyrmions in case of $`n`$ odd) . This picture may change fundamentally when the Casimir energies are taken into account.
## 3 Small amplitude fluctuations
The evaluation of quantum corrections is based on the normal modes of the classical configurations. With the appropriate boundary conditions these normal modes describe excited localized (bound) states, and the scattering of the vacuum fluctuations (mesons or magnons) off the soliton background. (The S-matrix for magnon-vortex scattering has been considered, without the stabilizing Skyrme and potential terms, for antiferromagnets and for ferromagnets .) Because of the constraint $`𝚽\cdot 𝚽=1`$ the model effectively possesses two independent fields. We therefore introduce two–component small amplitude fluctuations
$$𝜼=\widehat{𝒓}_n\eta _L+\widehat{𝝋}_n\eta _T,\widehat{𝒓}_n=\left(\begin{array}{c}\mathrm{cos}n\phi \\ \mathrm{sin}n\phi \end{array}\right),\widehat{𝝋}_n=\left(\begin{array}{c}-\mathrm{sin}n\phi \\ \mathrm{cos}n\phi \end{array}\right)$$
(10)
which we decompose into components parallel and perpendicular to the soliton configuration with winding number $`n`$. Of course, there are many different ways to parametrize the total time-dependent field $`𝚽`$ which lead to different equations of motion for the corresponding fluctuations. However, the bound–states and the scattering matrix, in particular the phase–shifts, are independent of the chosen parametrization. A very convenient choice, different from the common Polyakov parametrization and only at first sight complicated, is given by
$$𝚽=\left(\begin{array}{c}\widehat{𝒓}_n\mathrm{sin}F(1-𝜼^2/2)+\widehat{𝒓}_n\mathrm{cos}F\eta _L+\widehat{𝝋}_n\eta _T\\ \mathrm{cos}F(1-𝜼^2/2)-\mathrm{sin}F\eta _L\end{array}\right)\stackrel{r\to \mathrm{\infty }}{\longrightarrow }\left(\begin{array}{c}𝜼\\ 1-𝜼^2/2\end{array}\right).$$
(11)
The main advantage of this parametrization is that it leads to a flat metric for the non–linear sigma model part of the lagrangian. For 3D-$`SU(N)`$ skyrmions this parametrization is well known as the Callan–Klebanov ansatz .
It is now straightforward to write down the e.o.m. for the fluctuating components $`\eta _L`$ and $`\eta _T`$. The Lagrangian (1) has to be expanded to quadratic order in the fluctuations; the e.o.m. can then be read off. We give these coupled linear differential equations explicitly, where we have already exploited the time dependence as well as the angular dependence of the fluctuations
$$\eta _L=\sum _Mf_M(x)e^{iM\phi }e^{-i\omega t},\eta _T=i\sum _Mg_M(x)e^{iM\phi }e^{-i\omega t}.$$
(12)
The e.o.m. decouple in the magnetic quantum number $`M`$, which is the analog of the phonon (or grand) spin familiar from scattering calculations for 3D-$`SU(N)`$ skyrmions
$`{\displaystyle \frac{1}{x}}\left(xb_Lf_M^{}\right)^{}+{\displaystyle \frac{M^2}{x^2}}f_M+{\displaystyle \frac{n^2}{x^2}}(b_T2s^2)f_Ma{\displaystyle \frac{2n^2c}{x}}\left({\displaystyle \frac{F^{}s}{x}}\right)^{}f_M+acf_M`$
$`{\displaystyle \frac{nMc}{x^2}}(1+b_T)g_M+a{\displaystyle \frac{nMF^{}s}{x^2}}g_M^{}+a{\displaystyle \frac{2nM}{x}}\left({\displaystyle \frac{F^{}s}{x}}\right)^{}g_M=\omega ^2b_Lf_M`$
$`{\displaystyle \frac{1}{x}}\left(xg_M^{}\right)^{}+{\displaystyle \frac{M^2}{x^2}}b_Tg_M(F^2{\displaystyle \frac{n^2c^2}{x^2}})g_Ma{\displaystyle \frac{n^2c}{x}}\left({\displaystyle \frac{F^{}s}{x}}\right)^{}g_M+acg_M`$ (13)
$`{\displaystyle \frac{nMc}{x^2}}(1+b_T)f_Ma{\displaystyle \frac{nMF^{}s}{x^2}}f_M^{}+a{\displaystyle \frac{nM}{x}}\left({\displaystyle \frac{F^{}s}{x}}\right)^{}f_M=\omega ^2b_Tg_M.`$
Here we introduced the metric functions $`b_L(x)=1+an^2s^2/x^2`$ and $`b_T(x)=1+aF^{\prime 2}`$. The energies $`\omega `$ are understood in natural units of $`\sqrt{fem}`$ such that the threshold occurs at $`\omega =\sqrt{a}`$. The equations are invariant with respect to the simultaneous replacements $`M\to -M`$ and $`g_M\to -g_M`$. These equations contain all the information about zero–modes, bound–states, the scattering matrix and the stability of the hedgehog solutions.
### 3.1 Zero-modes
The soliton’s energy (6) is invariant under spatial rotations around the $`z`$-axis (which for the hedgehog is equivalent to an iso–rotation around the internal $`3`$-axis) as well as under a translation in the $`x`$-$`y`$ plane. The infinitesimal rotation (iso–rotation) and translation give rise to zero–modes which are solutions of the e.o.m. (13) for $`\omega ^2=0`$.
The rotational zero–mode is obtained by a shift of the azimuthal angle $`\phi \phi +\alpha `$ in the hedgehog ansatz (3). Comparison with (11) and (12) shows then that this zero–mode
$$f_0(x)=0,g_0(x)=s\text{(rotational zero-mode)}$$
(14)
is located in the $`M=0`$ partial–wave. Similarly, the translational zero–mode $`𝒓𝒓+𝒂`$
$$f_1(x)=F^{},g_1(x)=\frac{ns}{x}\text{(translational zero-mode)}$$
(15)
sits in the $`M=1`$ partial–wave.
With the stationarity condition (7) it is easily checked that the above modes satisfy the e.o.m. for $`\omega ^2=0`$. As both zero–modes possess a finite norm, they will act as bound–states in the scattering calculation and influence the phase–shifts via Levinson’s theorem. Besides these zero-energy bound–states there are also bound–states at finite energy.
### 3.2 Bound-states
In both systems, $`n=1`$ and $`n=2`$, we observe bound–states in the $`M=n`$ partial–wave. With decreasing parameter $`a`$ the energies of these bound–states move towards the threshold; in particular, for the $`n=1`$ system this happens very quickly (cf. Table 2). There is a simple explanation of this phenomenon. As is noticed from the e.o.m., the iso–rotation with respect to the $`1`$\- and $`2`$-axes
$$f_n(x)=1,g_n(x)=c(\text{zero-energy solution for}a0)$$
(16)
is a zero–energy solution of the $`M=n`$ partial–wave for $`a\to 0`$. The potential connected with a finite $`a\ne 0`$ distorts this continuum solution into a bound–state, which in the limit $`a\to 0`$ is shifted towards the threshold.
In the $`n=1`$ system this is the only bound–state apart from the zero–modes already discussed. However in the $`n=2`$ case some of the low-lying resonances may also become bound for sufficiently strong $`a`$, e.g. for $`a=1`$ we find additional bound–states in the $`M=0`$ and $`M=1`$ partial–waves at energies $`\omega _0=0.923`$ and $`\omega _1=0.793`$ respectively. All these bound–states must be considered for the Casimir energy (compare also the discussion of the phase–shifts in subsection 3.4).
### 3.3 Stability of hedgehog solitons
Formally, eqs. (13) are the e.o.m. for small amplitude fluctuations around a hedgehog soliton with arbitrary topological quantum number $`n`$. For this reason they contain information about the stability of the $`n`$-skyrmion. For the $`n`$-skyrmion to be stable there must not exist solutions to (13) with negative $`\omega ^2`$.
Indeed, we do find solutions of (13) with negative $`\omega ^2`$ for topological quantum numbers $`n\ge 3`$. For example for $`n=3`$ with $`a^2=0.1`$ there exist such solutions in the $`M=2`$ ($`\omega ^2=-0.14`$), $`M=3`$ ($`\omega ^2=-0.30`$) and all higher partial waves. In the $`n=1`$ and $`n=2`$ sectors there exist no negative-$`\omega ^2`$ solutions.
We conclude, that the $`n=1`$ and $`n=2`$ hedgehog solitons are stable while those with higher topological charges are unstable. This agrees with the findings in ref. . In what follows we consider the scattering off the stable skyrmions with $`n=1`$ and $`n=2`$.
### 3.4 Phase–shifts
For the $`2`$-dimensional scattering problem in polar coordinates the incident plane wave is decomposed into partial waves carrying magnetic quantum numbers
$$e^{i𝒑\cdot 𝒓}=\sum _{M=-\mathrm{\infty }}^{\mathrm{\infty }}i^MJ_M(pr)e^{iM\phi }=\sum _{M=0}^{\mathrm{\infty }}ϵ_Mi^MJ_M(pr)\mathrm{cos}(M\phi ).$$
(17)
The latter transformation, with the multiplicities $`ϵ_0=1`$ and $`ϵ_M=2`$ for $`M\ge 1`$, follows from the properties of the regular and irregular Bessel functions $`J_M(pr)`$ and $`N_M(pr)`$ under $`M\to -M`$. Thus, it suffices to consider the partial waves $`M\ge 0`$.
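Expansion (17) is easy to verify numerically without assuming any special-function library, using the integral representation of $`J_M`$; the test point below is arbitrary:

```python
import numpy as np

def J(M, z, npts=4000):
    """Integer-order Bessel J_M(z) = (1/pi) * int_0^pi cos(M t - z sin t) dt (trapezoid)."""
    t = np.linspace(0.0, np.pi, npts + 1)
    y = np.cos(M * t - z * np.sin(t))
    return (t[1] - t[0]) * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1]) / np.pi

z, phi = 1.3, 0.7                         # arbitrary test point: z = p r, polar angle phi
lhs = np.exp(1j * z * np.cos(phi))        # plane wave, phi measured from the direction of p
rhs = sum((1 if M == 0 else 2) * 1j**M * J(M, z) * np.cos(M * phi) for M in range(26))
```

The partial-wave sum converges very fast here since $`J_M(z)\sim (z/2)^M/M!`$ for small argument.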
$`\mathrm{\Psi }`$ $`=`$ $`{\displaystyle \sum _{M=0}^{\mathrm{\infty }}}ϵ_Mi^M\psi _M(p,r)\mathrm{cos}(M\phi ).`$ (18)
The partial–wave projected scattering waves $`\psi _M(p,r)`$ with appropriate boundary conditions at the origin are integrated according to the e.o.m., and the phase–shifts are then obtained from the asymptotic form
$$\psi _M(p,r)\stackrel{r\to \mathrm{\infty }}{\longrightarrow }[J_M(pr)\mathrm{cos}\delta _M(p)+N_M(pr)\mathrm{sin}\delta _M(p)]e^{i\delta _M(p)}.$$
(19)
The phase–shifts are related to the cross–section in the usual way, and the property $`\delta _{-M}(p)=\delta _M(p)`$ follows from the corresponding symmetry of the e.o.m.. The generalization to two coupled channels is now straightforward. It is noticed that our scattering equations (13) decouple asymptotically when the functions $`f_\pm =(f_M\mp g_M)/\sqrt{2}`$ are introduced
$$-\frac{1}{r}\left(rf_\pm ^{\prime }\right)^{\prime }+\frac{(M\pm n)^2}{r^2}f_\pm =p^2f_\pm ,\omega ^2=p^2+m^2,$$
(20)
with the solution
$$f_\pm (p,r)=A_\pm (p)J_{|M\pm n|}(pr)+B_\pm (p)N_{|M\pm n|}(pr).$$
(21)
From the coefficients $`A_\pm (p)`$ and $`B_\pm (p)`$ of the asymptotic solution, the $`2\times 2`$ scattering matrix $`S_M`$ is obtained for every partial wave $`M`$. Again, as the e.o.m. are invariant with respect to $`M\to -M`$, so is the scattering matrix, and it suffices to consider $`M\ge 0`$. It is convenient to diagonalize the scattering matrix
$$S_M=U_M\left(\begin{array}{cc}e^{i\delta _M^{(1)}}& 0\\ 0& e^{i\delta _M^{(2)}}\end{array}\right)(U_M)^{},$$
(22)
in order to obtain the intrinsic eigen phase–shifts $`\delta _M^{(1)}`$ and $`\delta _M^{(2)}`$. Their sum $`\delta _M=\delta _M^{(1)}+\delta _M^{(2)}`$ is plotted for the partial waves with $`M\le 4`$ in Fig. 3, where the $`n=1`$ and $`n=2`$ systems are considered separately.
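Extracting the eigen phase–shifts from a numerically obtained $`2\times 2`$ $`S`$-matrix, as in (22), is a small diagonalization exercise. A minimal round-trip check, with an arbitrary mixing angle and arbitrary eigenphases:

```python
import numpy as np

d1, d2, theta = 0.3, 1.1, 0.4                    # arbitrary eigenphases and mixing angle
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = U @ np.diag([np.exp(1j * d1), np.exp(1j * d2)]) @ U.conj().T

def eigenphases(S):
    """Eigen phase-shifts of a unitary S-matrix, cf. Eq. (22)."""
    return np.sort(np.angle(np.linalg.eigvals(S)))

phases = eigenphases(S)
total = np.angle(np.linalg.det(S))               # delta^(1) + delta^(2) (mod 2 pi)
```

The summed phase $`\delta _M`$ can equivalently be read off from the argument of $`\mathrm{det}S_M`$, up to multiples of $`2\pi `$.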
The picture that emerges looks qualitatively quite similar to what has been obtained for 3D-$`SU(N)`$ skyrmions long ago . For topological charge $`n=1`$ we obtain in detail: The phases for $`M=0`$ and $`M=1`$, where the rotational and translational zero modes are located together with the $`M=1`$ bound–state, start at $`\pi `$ and $`2\pi `$, respectively, according to Levinson’s theorem. For the smaller parameters $`a^2=0.01`$ and $`0.1`$ the wave–function of the $`M=1`$ bound–state close to threshold (Table 2) is already quite similar to the infinitesimal translation (cf. subsection 3.5 for the limit $`a\to 0`$). As a consequence the phase–shift at threshold reacts with a sudden drop from $`2\pi `$ to $`\pi `$, which in the figures is only noticed because the phase–shift falls below $`\pi `$ before it bends to the right. A weakly pronounced breathing mode is observed at low energies for $`M=0`$. The partial–waves $`M=2,3,\mathrm{}`$ then contain a band of resonances which are flattened out for higher $`M`$ and also with decreasing parameter $`a`$.
Similarly for topological charge $`n=2`$: For the smaller parameters $`a^2=0.01`$ and $`0.1`$ the $`M=0`$ and $`M=1`$ phase–shifts start at $`\pi `$ because of the zero modes. In all but the $`M=2`$ partial–wave with its bound–state (cf. Table 2) sharp resonances occur, which are much more pronounced compared to the $`n=1`$ case. For strong potentials then, the $`M=0`$ and $`M=1`$ resonances become bound (e.g. for $`a=1`$ at $`\omega _0=0.923`$ and $`\omega _1=0.793`$). Also less pronounced secondary resonances do appear.
### 3.5 Belavin-Polyakov soliton
Finally we would like to add a few comments on the pure Belavin–Polyakov solution . As mentioned we recover this solution for small parameters $`a`$ with its size $`R`$ still fixed by the balance of the Skyrme and potential terms (cf. appendix). In the pure N$`\mathrm{}\sigma `$ model without additional stabilizing terms the size $`R`$ of the BP soliton is an undetermined parameter. In contrast to the full lagrangian (1) the pure N$`\mathrm{}\sigma `$ model does not break $`O(3)`$–symmetry and is also conformally invariant. Due to these additional symmetries a larger number of zero–modes is expected.
The e.o.m. for the fluctuations obtained in that case from (3) with $`a=0`$ and $`x=r/R`$ decouple for $`f_\pm =(f_M\pm g_M)/\sqrt{2}`$ (not only asymptotically) and are easily solved. Still, the counting of the zero–modes is a bit tricky. In addition to the rotation (14), the breathing mode (conformal invariance) becomes a zero–mode in the $`M=0`$ partial wave, as expected. But for the 1–soliton the iso–rotation (16) coincides with the translation (15), giving rise to only $`2\times 1`$ zero–modes in the $`M=1`$ partial wave. This makes altogether $`4`$ zero–modes for the 1–soliton. Similar counting for the 2–soliton yields $`8`$ zero–modes.
Of course, this discussion is somewhat academic, because in reality there should always be some stabilizing term which fixes the size of the soliton and which at least breaks conformal invariance. Therefore we do not show the corresponding phase–shifts.
## 4 Casimir energy
With the bound–state energies and the phase–shifts provided in the previous section we are in a position to evaluate the 1-loop contribution to the soliton energy, i.e. the Casimir energy. The UV singularities contained in the loop are related to the high momentum behaviour of the phase–shifts
$$\delta (p)=\sum _{M=0}^{\infty }\epsilon _M\,\delta _M(p)\ \stackrel{p\to \infty }{\longrightarrow }\ a_0\,p^2+a_1.$$
(23)
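To make the extraction of $`a_0`$ and $`a_1`$ from the large-momentum tail concrete, here is a small sketch with purely synthetic phase-shift data (a stand-in for the sum over the many partial waves computed in practice); the tail coefficients follow from a linear fit in $`p^2`$:

```python
import numpy as np

# Sketch: read off a_0 and a_1 from the large-p tail of the phase-shift sum
# via a least-squares fit in p^2. The data are synthetic, not the paper's.
a0_true, a1_true = 1.0, 7.2
p = np.linspace(5.0, 50.0, 500)
delta = a0_true * p**2 + a1_true + 3.0 / p     # tail plus a subleading piece

A = np.vstack([p**2, np.ones_like(p)]).T       # design matrix [p^2, 1]
(a0_fit, a1_fit), *_ = np.linalg.lstsq(A, delta, rcond=None)
```

The subleading $`1/p`$ piece only weakly biases the fit, which is why the tail window must start at sufficiently large momentum.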
In the case of a vanishing Skyrme term the coefficients
$$a_0=0,\qquad a_1=\frac{1}{4}\int d^2r\left[F'^2+\frac{n^2s^2}{r^2}+2m^2(1-c)\right]$$
(24)
are known analytically from the Born terms. For the full model they have to be extracted numerically from the phase–shift sum (23), which for that purpose has to be calculated to high precision with some $`100`$ partial waves taken into account. Typical values obtained, e.g. for $`a^2=0.1`$, are $`a_0=1.0\,(fem)^{-1}`$ and $`a_1=7.2`$ for the 1-soliton, and $`a_0=2.0\,(fem)^{-1}`$ and $`a_1=13.4`$ for the 2-soliton, respectively. According to (24) the N$`\mathrm{}\sigma `$ plus potential term contributions to these coefficients are $`a_1=8.3`$ for $`n=1`$ and $`a_1=15.6`$ for $`n=2`$. With this established we may use the phase–shift formula , subtract the troublesome high momentum behaviour from the phase–shifts, and separately add the corresponding counterterms
$`E^{1loop}`$ $`=`$ $`-{\displaystyle \frac{1}{2\pi }}\left[{\displaystyle \int _0^{\infty }}dp\,{\displaystyle \frac{p}{\omega }}\,\delta (p)+m\,\delta (0)\right]+{\displaystyle \frac{1}{2}}{\displaystyle \sum _B}\omega _B`$ (25)
$`=`$ $`-{\displaystyle \frac{1}{2\pi }}{\displaystyle \int _0^{\infty }}dp\,{\displaystyle \frac{p}{\omega }}\left(\delta (p)-a_0p^2-a_1\right)+{\displaystyle \frac{1}{2}}{\displaystyle \sum _B}(\omega _B-m)-{\displaystyle \int \frac{d^2p}{(2\pi )^2}\,\frac{a_0p^2+a_1}{\omega }}.`$
This procedure is closely related to what has been employed in the $`3+1`$ dimensional Skyrme model for the calculation of the Casimir energy contribution to the nucleon mass . The counter terms could e.g. be evaluated using a 2-momentum cutoff $`\mathrm{\Lambda }>>m`$
$$\int \frac{d^2p}{(2\pi )^2}\frac{1}{\omega }=\frac{1}{2\pi }\left[\mathrm{\Lambda }-m\right],\qquad \int \frac{d^2p}{(2\pi )^2}\frac{p^2}{\omega }=\frac{1}{6\pi }\left[\mathrm{\Lambda }^3-\frac{3}{2}m^2\mathrm{\Lambda }+2m^3\right],$$
(26)
which makes the linear and cubic divergencies explicit
$$E^{1loop}=-\frac{a_0}{6\pi }\left[\mathrm{\Lambda }^3-\frac{3}{2}m^2\mathrm{\Lambda }\right]-\frac{a_1}{2\pi }\mathrm{\Lambda }+E^{cas}.$$
(27)
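The cutoff integrals (26) are easy to verify numerically; a quick sanity-check sketch with toy values of $`m`$ and $`\mathrm{\Lambda }`$:

```python
import numpy as np

# Numerical check of the 2-momentum cutoff integrals
# int d^2p/(2pi)^2 {1, p^2}/omega with omega = sqrt(p^2 + m^2).
def trapz(y, x):
    # simple trapezoidal rule (kept local to avoid numpy version issues)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

m, Lam = 1.0, 200.0                      # Lam >> m so O(m/Lam) terms drop
p = np.linspace(0.0, Lam, 400_001)
w = np.sqrt(p**2 + m**2)

I1 = trapz(p / w, p) / (2 * np.pi)       # d^2p = 2*pi*p dp
I2 = trapz(p**3 / w, p) / (2 * np.pi)

F1 = (Lam - m) / (2 * np.pi)
F2 = (Lam**3 - 1.5 * m**2 * Lam + 2 * m**3) / (6 * np.pi)
assert abs(I1 - F1) / F1 < 1e-3 and abs(I2 - F2) / F2 < 1e-3
```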
While the N$`\mathrm{}\sigma `$ and the potential term contributions located in $`a_1`$ may be absorbed in a renormalized constant
$$f^2=\overline{f}^2-\frac{\mathrm{\Lambda }}{4\pi }$$
(28)
($`\overline{f}`$ denotes the bare constant), the divergencies stemming from the Skyrme term renormalize the Skyrme parameter $`e`$ as well as the couplings of all the higher gradient terms not listed in the lagrangian (1). The latter renormalization we do not carry out explicitly; instead we simply assume that the renormalized Skyrme parameter is $`e`$ and the renormalized couplings of the higher gradient terms are zero. To what extent this is a consistent assumption will be discussed in the following section in connection with the scale–dependence of our results. Now all the divergencies residing in (27) are taken care of, leaving the finite Casimir energy
$$E^{cas}=-\frac{1}{2\pi }\int _0^{\infty }dp\,\frac{p}{\omega }\left(\delta (p)-a_0p^2-a_1\right)+\frac{1}{2}\sum _B(\omega _B-m)-\frac{a_0m^3}{3\pi }+\frac{a_1m}{2\pi }.$$
(29)
This result is independent of the employed regularization scheme. For instance, using dimensional regularization the counter terms (26) in $`2+1`$ dimensions become finite and equal to the last terms in the brackets, respectively, which yields the same expression for the Casimir energy. However, although dimensional regularization in odd dimensions has been used right from the beginning and is now commonly applied in $`\mathrm{\Phi }_3^4`$, QCD<sub>3</sub> and Chern-Simons theories , we hesitate to apply this scheme to an odd number of dimensions because the fate of the UV singularities remains obscure. As the divergencies do not show up in odd dimensional regularization, there seems to be no possibility to introduce a renormalization scale. For strictly (super)renormalizable theories this may be of no further importance; however, in theories which are renormalized order by order in terms of a gradient expansion a scale is desirable in order to control the convergence of the series (cf. section 5).
The Casimir energy (29) consists of three parts, i.e. the phase–shift integral, the bound–state contribution and the contribution from the counter–terms.
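A sketch of how these three parts can be assembled numerically from tabulated phase shifts is given below; the helper function, data, and sign conventions are illustrative (they follow the form of (29) as used here), not the paper's actual tables:

```python
import numpy as np

# Sketch of assembling the Casimir energy from tabulated phase shifts:
# phase-shift integral + bound-state sum + counter-term remainder.
def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def casimir_energy(p, delta, a0, a1, m, omega_bound=()):
    w = np.sqrt(p**2 + m**2)
    subtracted = delta - a0 * p**2 - a1              # divergent tail removed
    integral = trapz(p / w * subtracted, p)          # phase-shift integral
    bound = 0.5 * sum(wb - m for wb in omega_bound)  # bound-state part
    counter = -a0 * m**3 / (3 * np.pi) + a1 * m / (2 * np.pi)
    return -integral / (2 * np.pi) + bound + counter

# consistency: a pure tail delta = a0*p^2 + a1 leaves only the counter-terms
p = np.linspace(0.0, 50.0, 5001)
a0, a1, m = 0.1, 2.0, 1.0
E = casimir_energy(p, a0 * p**2 + a1, a0, a1, m)
```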
The subtracted phase–shift sum $`\delta (p)a_0p^2a_1`$ which enters the phase–shift integral is plotted in Fig. 4 for the $`n=1`$ and $`n=2`$ systems ($`a^2=0.1`$). The maximum momentum to which this sum has to be integrated depends sensitively on the parameter $`a`$. The various contributions to the Casimir energy are given in Table 3 for several values of $`a`$.
The dependence of the Casimir energies on this parameter is shown in Fig. 5. Note that for an easier comparison again only half of the 2-skyrmion’s Casimir energy is plotted.
It is noticed that the Casimir energy of the 1-skyrmion always stays smaller than half of that of the 2-skyrmion. Therefore, in contrast to the classical energy, the Casimir energy favors single skyrmions. These competing effects may be summarized by casting the total energy into the form
$`E_n^{total}`$ $`=`$ $`4\pi f^2E_n^0(a)+\sqrt{fem}E_n^1(a)`$ (30)
$`=`$ $`4\pi f^2\left[E_n^0(a)+yE_n^1(a)\right],y={\displaystyle \frac{\sqrt{fem}}{4\pi f^2}},`$
where $`E_n^1(a)`$ represents the Casimir energy (29) in natural units $`\sqrt{fem}`$. Thus, apart from the overall scale $`4\pi f^2`$ the model is characterized by the two dimensionless parameters $`a`$ and $`y`$. If the ratio $`y`$ exceeds a critical value (which increases slowly with $`a`$, e.g. $`y=0.3,0.4,0.7`$ for $`a^2=0.01,0.1,1.0`$), then $`E_2^{total}-2E_1^{total}`$ becomes positive and the 2-skyrmion decays into two 1–skyrmions.
### 4.1 Numerical example from hadron physics
Although different dimensionalities may cause qualitative differences, let us illustrate these results by a numerical example using the scale of hadron physics: Take the value $`a=0.1`$, fix the size scale at $`1/\sqrt{fem}=0.2fm`$ ($`\sqrt{fem}=1`$ GeV) and make use of the numbers given in Table 1 and Table 3. With the above parameters the 1-soliton gets a topological radius $`0.27`$ fm and a Casimir energy $`E_1^{cas}=-0.45`$ GeV, a situation very similar to what has been obtained in the hadronic Skyrme model , if in addition the classical soliton mass is fixed at $`E_1^{class}=1.21(4\pi f^2)=1.39`$ GeV in order to obtain the nucleon mass at $`E_1^{total}=0.94`$ GeV. Then the $`n=2`$ soliton’s total energy $`E_2^{total}=(2.64-0.53)`$ GeV $`=2.11`$ GeV turns out to be larger than that of two separated 1-skyrmions. The parameter in this example, $`y=0.9`$, lies far above the critical value $`0.3`$. Of course, the $`2+1`$ dimensional model discussed here cannot be taken literally for hadron physics, but exactly this mechanism may shift the undesired torus configuration obtained in hadronic soliton models with baryon number $`B=2`$ to higher energies.
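The quoted numbers can be cross-checked with a few lines of arithmetic (all values taken from the text above):

```python
# Arithmetic cross-check of the hadron-physics example:
# sqrt(fem) = 1 GeV and E_1^class = 1.21*(4*pi*f^2) = 1.39 GeV.
fourpi_f2 = 1.39 / 1.21            # the overall scale 4*pi*f^2 in GeV
y = 1.0 / fourpi_f2                # y = sqrt(fem)/(4*pi*f^2) ~ 0.9

E1_total = 1.39 - 0.45             # tree plus (negative) Casimir energy
E2_total = 2.64 - 0.53             # same for the n = 2 soliton
# E2_total = 2.11 GeV exceeds 2*E1_total = 1.88 GeV: the torus is unstable
```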
### 4.2 Casimir energy of the pure Belavin-Polyakov soliton
Finally we would like to discuss the Casimir energy of the pure BP soliton with free size parameter $`R`$ in order to make contact with the quantum corrections as obtained in ref. . With $`m=0`$, the coefficients $`a_0=0,a_1=2\pi n`$, and no additional bound–states present (cf. the discussion about the zero–modes in subsection 3.5), eq. (25) for the 1–loop contribution simplifies
$`E^{1loop}`$ $`\stackrel{BP}{=}`$ $`-{\displaystyle \frac{1}{2\pi }}{\displaystyle \int _0^{(\mathrm{\Lambda })}}dp\,\delta (p)`$ (31)
$`=`$ $`-{\displaystyle \frac{1}{2\pi }}{\displaystyle \int _0^{(\mathrm{\Lambda })}}dp\left(\delta (p)-a_1\right)-{\displaystyle \frac{a_1}{2\pi }}\mathrm{\Lambda }`$
$`=`$ $`-{\displaystyle \frac{1}{2\pi R}}{\displaystyle \int _0^{(\mathrm{\Lambda }R)}}d(pR)\left(\delta (pR)-2\pi n\right)-n\mathrm{\Lambda }.`$
We indicate here the possibility to limit the momentum integration by a finite Debye momentum $`\mathrm{\Lambda }=p_D`$ related to the lattice constant. Because the soliton should at least cover several lattice sites, $`\mathrm{\Lambda }R>1`$ is large enough to extend the integration over the subtracted phase–shifts to infinity (see Fig. 4) just as in our field theoretical treatment. Through the substitution in the last step the dependence of the subtracted phase–shift integral on the soliton size $`R`$ becomes explicit: the magnitude of the Casimir energy decreases with increasing soliton size.
Numerically, we find
$$E_1^{cas}=-\frac{0.5}{R_1},\qquad E_2^{cas}=-\frac{1.7}{R_2}$$
(32)
for the pure BP solitons with topological charges $`n=1`$ and $`n=2`$ respectively. Their sizes $`R_1`$ and $`R_2`$ are arbitrary and generally different. If the sizes are determined by the stabilizing terms in the limit $`a\to 0`$, we obtain $`R_n=b_n/\sqrt{fem}`$ with $`b_1\to 0`$ and $`b_2\to \sqrt{2}`$ (appendix). For that reason the Casimir energy of the 1-soliton plotted in Fig. 5 bends to negative values and finally diverges as $`a`$ approaches zero. The Casimir energy of the 2-soliton stays finite, $`E_2^1=-1.7/\sqrt{2}+𝒪(\sqrt{a})`$, but with an infinite slope at $`a=0`$.
In ref. the subtracted phase–shift integral was neglected for $`p_DR>>1`$, and the last term in (31) then leads to the result $`E^{1loop}\stackrel{BP}{=}-np_D`$. Because this is exactly the term we absorb into a renormalized $`f^2`$ (27,28), this result corresponds to $`E^{cas}=0`$ in our notation. The opposite limit $`p_DR<<1`$, where the 1-loop contribution (31) vanishes, corresponds to soliton sizes much smaller than the lattice spacing and does not make sense. The conclusion that the quantum corrections lower the soliton energy as the soliton size increases is wrong.
## 5 Scale dependence
In this section we introduce a scale $`\mu `$ which allows us to shift finite pieces from the tree to the 1-loop contribution and vice versa. It will be introduced in such a way that we recover the results of the previous section for $`\mu =\mu _0`$. Tentatively, $`\mu _0=4\pi f^2`$ may be identified with the underlying energy scale of the model, in analogy to the chiral scale $`\mu _0=4\pi f_\pi \simeq 1`$ GeV of ChPT . Our results will however be presented in such a way that no fixation of $`\mu _0`$ is required. For this purpose we write the cutoff characterizing the divergencies of the 1-loop contribution (27) as a sum of two parts, $`\mathrm{\Lambda }=[\mathrm{\Lambda }-(\mu -\mu _0)]+(\mu -\mu _0)`$. The first part is then renormalized into a scale-dependent strength of the N$`\mathrm{}\sigma `$ and potential term
$$f^2(\mu )=\overline{f}^2-\frac{\mathrm{\Lambda }-(\mu -\mu _0)}{4\pi }=f^2+\frac{\mu -\mu _0}{4\pi }.$$
(33)
According to this formula, $`\mu \ne \mu _0`$ shifts the energy scale $`4\pi f^2(\mu )`$ away from its original value $`4\pi f^2`$. In order to exhibit the scale–dependence it is now convenient to present our results as a function of the ratio $`f^2(\mu )/f^2`$ rather than of $`\mu `$ itself. The presentation is then independent of $`\mu _0`$ and also robust against changes in the regularization scheme which may introduce numerical factors into the scale (e.g. in the 3-momentum instead of the 2-momentum cutoff scheme the scale is changed by a factor of $`\pi /2`$). The second part of the above decomposition, $`(\mu -\mu _0)`$, remains explicit in the finite Casimir energy
$`E^{cas}(\mu )`$ $`=`$ $`-{\displaystyle \frac{1}{2\pi }}{\displaystyle \int _0^{\infty }}dp\,{\displaystyle \frac{p}{\omega }}\left(\delta (p)-a_0p^2-a_1\right)+{\displaystyle \frac{1}{2}}{\displaystyle \sum _B}(\omega _B-m)`$ (34)
$`+{\displaystyle \frac{a_0}{6\pi }}\left[(\mu -\mu _0)^3-{\displaystyle \frac{3m^2}{2}}(\mu -\mu _0)-2m^3\right]-{\displaystyle \frac{a_1}{2\pi }}\left[\mu -\mu _0-m\right].`$
Also for the $`a_0`$–term with its cubic divergence we kept all contributions from the second part in the Casimir energy, although other prescriptions are possible. However, this ambiguity will not influence the results strongly because of the smallness of $`a_0`$ (cf. the numbers in the previous section). In this way both the tree and the 1-loop contributions become scale–dependent. While for the N$`\mathrm{}\sigma `$ and potential terms the scale–dependence in the total soliton energy is exactly compensated, the 1-loop contribution which arises from the Skyrme term would not only yield a scale–dependent Skyrme parameter $`e(\mu )`$, but would also switch on all the higher gradient terms assumed to be zero for $`\mu =\mu _0`$. Having assumed all these couplings to be independent of the scale implies that the tree + 1-loop contribution to the soliton energy cannot be strictly scale–invariant. In fact, the resulting scale–dependence measures the magnitude of the higher gradient terms not accounted for. All the more it comes as a surprise, just as in the hadronic case , that for specific parameter choices an almost scale–independent soliton mass may be obtained. Such a case ($`a^2=0.1,y=1`$) is depicted in Fig. 6 for the 1-soliton. The rather strong scale–dependence of the tree contribution is nicely compensated over the wide range $`f^2(\mu )=0.5f^2,\mathrm{\dots },2.0f^2`$ when the 1-loop contribution is added. Of course, this statement has to depend on the parameters used. Therefore in Fig. 7 we plotted the total 1-soliton mass for $`a^2=0.1`$ and various values of the ratio $`y`$. It is noticed that weak scale–dependence requires a ratio close to $`y\simeq 1`$, the value used in the previous figure.
Lower and higher values of $`y`$ enhance the scale–dependence. We do not show the corresponding plots for the 2-soliton because they look quite similar, with the scale–dependence being even somewhat weaker in that case.
We may now pose the question for which parameter combinations $`a`$ and $`y`$ we may expect a modest scale–dependence in accordance with the assumption that the couplings of all higher gradient terms are zero. The answer is illustrated in Fig.8.
Inside the inner contour the average scale–dependence in the interval $`f^2(\mu )/f^2\in [0.5,2]`$ is less than $`3\%`$. For comparison the $`6\%`$ contour is also plotted.
All the parameter combinations lying inside the contours yield negative Casimir energies whose absolute values, though sizeable, do not exceed $`50\%`$ of the classical soliton mass. They also lead to an unstable 2-soliton which decays into two individual 1-solitons. In particular, the parameters $`(a=0.1,y=0.9)`$ of our numerical example from hadron physics (section 4.1) lie in the center of the almost scale–independent region.
The weak scale dependence obtained for parameter combinations lying inside the contours in Fig. 8 implies that loop–corrections can be reliably calculated within the framework of the lagrangian (1) with all higher couplings set equal to zero.
## 6 Conclusions
We studied the magnon–vortex system in the $`2+1`$ dimensional $`O(3)`$ model in a field theoretical approach. For that purpose the N$`\mathrm{}\sigma `$ model was augmented by stabilizing standard fourth–order Skyrme and potential terms.
Complete information on bound and scattering states in all partial–waves was established. We find an extremely rich excitation spectrum with pronounced resonances as in the 3D-$`SU(N_f)`$ case with its baryon resonances. In principle these resonances should be accessible by measuring the excitation spectrum of spin–waves in the presence of skyrmions.
Furthermore, the quantum fluctuations will allow us to study their influence on the shape of the soliton . Note that in tree approximation the third component of the order parameter field is always tied to $`-1`$ at the origin for topological reasons, at variance with microscopic Hartree-Fock calculations for the ferromagnet which suggest a vanishing value in the case of small solitons.
Finally, with the bound state energies and the scattering phase–shifts, the Casimir energies were evaluated. The appearing UV singularities were renormalized under the assumption that the (renormalized) couplings of the higher gradient terms be small. A criterion for the consistency of this assumption is the approximate scale–independence of the results, such that the effective action represents an almost 1-loop renormalizable theory. This criterion requires a detailed balance of tree and 1-loop contributions, which limits the parameter space where this requirement is met. The following results apply for this restricted parameter space. Parameters lying outside that range may lead to other conclusions, but there the 1-loop contribution cannot be reliably calculated within the model.
* The Casimir energy is negative and generally large, leading to a considerable reduction of the total soliton energy. However, it does not exceed $`50\%`$ of the tree contribution and thus may still be considered a (sizeable) correction.
* The magnitude of the Casimir energy decreases with increasing soliton size. But it also decreases as the stabilizing terms become more important relative to the N$`\mathrm{}\sigma `$ model term.
* With the 1-loop corrections included the $`n=2`$ soliton becomes unstable and decays into two individual $`n=1`$ solitons (which may still be weakly bound by a dipole force). We conjecture that the same situation might occur for the 3D-$`SU(2)`$ $`B=2`$ torus in hadron physics.
It is obvious that the program outlined in this paper has to be tailored for specific applications concerning ferromagnets and antiferromagnets. Most importantly for ferromagnets the time derivative part of the lagrangian has to be replaced by the $`T`$ violating Landau-Lifshitz term with one time derivative only. This replacement may change the results for the Casimir energies appreciably. Also, for antiferromagnets an external magnetic field should couple through the time component of the covariant derivative. Finally, proper consideration of the non–local Coulomb interaction complicates the situation: instead of coupled differential equations coupled integro–differential equations have to be solved for the fluctuations.
The evaluation of loop corrections as presented here naturally suggests the inclusion of temperature in the formalism. This opens the possibility to study the properties of 2D spin–textures at finite temperature, which is of particular interest near the symmetry restoring phase–transition.
This work is supported in parts by funds provided by the FCT, Portugal (Contract PRAXIS/4/4.1/BCC/2753).
## Appendix
In this appendix we present analytical soliton solutions for the two limiting cases $`a\to 0`$ and $`a\to \infty `$.
### Belavin-Polyakov solution for $`a\to 0`$
The analytical solution of the Euler-Lagrange equation (7) in the limit $`a\to 0`$
$$\frac{1}{x}\left(xF'\right)'-\frac{n^2sc}{x^2}=0$$
$`(\mathrm{A}.1)`$
with boundary conditions (8) is the well–known Belavin-Polyakov soliton
$$\mathrm{tan}\frac{F}{2}=\left(\frac{b_n}{x}\right)^n.$$
$`(\mathrm{A}.2)`$
The size parameter $`b_n^2=\sqrt{2n(n^2-1)/3}`$ is determined by the interplay of the Skyrme- and potential terms contributing to the static energy functional
$$E_n^0=n+a\frac{\pi }{3b_n^2}\frac{n^2-1}{\mathrm{sin}(\pi /n)}+a\frac{b_n^2}{2}\frac{\pi /n}{\mathrm{sin}(\pi /n)},\qquad n\ne 1.$$
$`(\mathrm{A}.3)`$
The corresponding square radius follows from its definition (5)
$$<x^2>_n=\sqrt{\frac{2n}{3}(n^2-1)}\,\frac{\pi /n}{\mathrm{sin}(\pi /n)},\qquad n\ne 1.$$
$`(\mathrm{A}.4)`$
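These expressions are easy to check numerically; the sketch below (with an arbitrary illustrative value of $`a`$) minimizes the static energy (A.3) over $`b^2`$ for $`n=2,3`$ and recovers $`b_n^2=\sqrt{2n(n^2-1)/3}`$:

```python
import numpy as np

# Illustrative check that b_n^2 = sqrt(2n(n^2-1)/3) minimizes the
# static energy (A.3) of the Belavin-Polyakov soliton for n != 1.
def E0(b2, n, a=0.1):
    s = np.sin(np.pi / n)
    return (n + a * np.pi * (n**2 - 1) / (3 * b2 * s)
              + a * (b2 / 2) * (np.pi / n) / s)

results = {}
for n in (2, 3):
    b2 = np.linspace(0.1, 20.0, 200_001)
    results[n] = b2[np.argmin(E0(b2, n))]          # numerical minimum
    assert abs(results[n] - np.sqrt(2 * n * (n**2 - 1) / 3)) < 1e-3
```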
It is noticed that in case of the 1-soliton the contribution from the potential term to (A.3) diverges. For that reason we modify (A.2),
$$\mathrm{tan}\frac{F}{2}=b_n\sqrt{\frac{\epsilon }{e^{\epsilon x^2}-1}},$$
$`(\mathrm{A}.5)`$
such that for $`\epsilon \to 0`$ the Belavin–Polyakov solution is recovered. The variation of the static energy functional in lowest order $`\epsilon `$
$$E_1^0=1+\frac{\epsilon }{2}+\frac{2a}{3b_1^2}-\frac{ab_1^2}{2}\mathrm{ln}\,\epsilon $$
$`(\mathrm{A}.6)`$
gives then $`\epsilon =ab_1^2`$ and $`\mathrm{ln}\,\epsilon =-4/(3b_1^4)`$ with the result
$$E_1^0=1+\epsilon \left(\frac{1}{2}-\mathrm{ln}\,\epsilon \right),\qquad a^2=-\frac{3}{4}\epsilon ^2\mathrm{ln}\,\epsilon .$$
$`(\mathrm{A}.8)`$
Indeed, with $`a`$ also $`\epsilon `$ tends to zero. The square radius behaves like
$$<x^2>_1=\sqrt{-\frac{4}{3}\mathrm{ln}\,\epsilon }$$
$`(\mathrm{A}.9)`$
and diverges weakly in the limit $`a\to 0`$. For $`a\lesssim 0.001`$ the radius of the 1-soliton exceeds that of the 2-soliton. Exactly this behaviour may be traced in the numerical solution if the differential equation (7) is solved for very small $`a`$ with great precision. Because the square radius (A.9) equals the derivative of the soliton energy with respect to the parameter $`a`$, this also explains the infinite slope at the origin indicated in the 1-soliton’s curve plotted in Fig. 2.
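As a consistency check of the small-$`a`$ relations, the sketch below verifies numerically that $`<x^2>_1=\sqrt{-(4/3)\mathrm{ln}\,\epsilon }`$ indeed equals $`dE_1^0/da`$ along the curve defined by $`E_1^0=1+\epsilon (1/2-\mathrm{ln}\,\epsilon )`$ and $`a^2=-\frac{3}{4}\epsilon ^2\mathrm{ln}\,\epsilon `$ (signs as reconstructed here):

```python
import numpy as np

# Check that the square radius (A.9) equals dE_1^0/da along (A.8).
eps = np.linspace(1e-6, 1e-3, 200_001)
a = eps * np.sqrt(-0.75 * np.log(eps))   # a^2 = -(3/4) eps^2 ln(eps)
E = 1 + eps * (0.5 - np.log(eps))        # small-a soliton energy
r2 = np.sqrt(-(4.0 / 3.0) * np.log(eps)) # claimed square radius

dE_da = np.gradient(E, a)                # numerical derivative on the a-grid
assert np.max(np.abs(dE_da - r2) / r2) < 1e-2
```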
### Solution for $`a\to \infty `$
The Euler-Lagrange equation (7) in the opposite limit $`a\to \infty `$
$$\frac{n^2s}{x}\left(\frac{F's}{x}\right)'-s=0$$
$`(\mathrm{A}.10)`$
may also be solved analytically
$$\mathrm{cos}F(x)=\frac{1}{8n^2}(b_n^2-x^2)x^2+2\frac{x^2}{b_n^2}-1,\qquad x\le b_n.$$
$`(\mathrm{A}.11)`$
The integration constants are fixed by the condition $`F(b_n)=0`$ connected with topological charge $`n`$. The size parameter $`b_n^2=4n`$ again minimizes the static energy functional
$$E_n^0=na\left[\frac{b_n^2}{4n}+\frac{1}{2}\left(\frac{4n}{b_n^2}\right)-\frac{1}{6}\left(\frac{b_n^2}{4n}\right)^3\right]$$
$`(\mathrm{A}.12)`$
with the results quoted in (2)
$$E_n^0=\frac{4n}{3}a,<x^2>_n=\frac{4n}{3}.$$
$`(\mathrm{A}.13)`$
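A small numerical check of these large-$`a`$ expressions (toy code; $`u=b_n^2/4n`$ denotes the rescaled size):

```python
import numpy as np

# With u = b_n^2/(4n) the bracket of (A.12) is f(u) = u + 1/(2u) - u^3/6,
# stationary at u = 1 with f(1) = 4/3, reproducing E_n^0 = (4n/3)a.
f  = lambda u: u + 1.0 / (2 * u) - u**3 / 6
fp = lambda u: 1 - 1.0 / (2 * u**2) - u**2 / 2     # derivative of f

# Profile (A.11) with b_n^2 = 4n: check F(0) = pi and F(b_n) = 0.
def cosF(x, n):
    b2 = 4.0 * n
    return (b2 - x**2) * x**2 / (8 * n**2) + 2 * x**2 / b2 - 1

assert abs(f(1.0) - 4 / 3) < 1e-12 and abs(fp(1.0)) < 1e-12
for n in (1, 2):
    assert np.isclose(cosF(0.0, n), -1.0)            # cos F(0) = -1
    assert np.isclose(cosF(np.sqrt(4.0 * n), n), 1.0)  # cos F(b_n) = +1
```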
Needless to say, numerical solutions of the Euler-Lagrange equation (7) reproduce these results for large enough $`a`$.
## I Introduction
Cosmic strings could be formed as a consequence of cosmological phase transitions in the very early universe . They are divided into global and local (gauged) strings according to the property of the broken symmetry. Though they have similar properties, for example both are line objects carrying false vacuum energy and both intercommute at crossing, there are also many differences between them. While a local string has two cores, a magnetic flux core and a scalar field core, a global string has only a scalar field core. For local strings, the gradient energy of the scalar field can be canceled by the gauge field far from the core, so that the core is well localized and the vacuum energy of the core is dominant. Therefore, the Nambu-Goto action is adequate to follow the evolution of the local string network except at crossing. On the other hand, there is no gauge field in the global string model, so that the total energy of global strings is dominated not by the vacuum energy of the core but by the gradient energy of the scalar field, namely, the NG boson field. Hence, we need to consider not only the core but also the associated NG boson field and the coupling between them, in contrast with the case of local strings. Thus we must use the Kalb-Ramond action instead of the Nambu-Goto action as an effective action .
The dynamics of the local string network has been examined by use of the Nambu-Goto action. Kibble first proposed the so-called one-scale model, where the behavior of the system can be characterized by only one parameter, namely, the scale length $`L`$, and an unknown loop-production function. He showed that either the local string network goes into the scaling regime, where the scale length $`L`$ grows with the horizon distance , or the universe becomes string dominated. In the scaling regime, infinite strings intercommute to produce closed loops, so that at any time the number of strings stretching across the horizon distance within each horizon volume is almost constant, and produced loops decay through radiating gravitational waves . Bennett developed Kibble’s one-scale model and showed that unless most of the produced loops self-intersect and fragment into smaller loops with a typical length smaller than $`L`$, the reconnection rate is large enough to prevent scaling. Mitchell and Turok studied the statistical mechanics of the string network in flat spacetime and found that the equilibrium distribution of the string network is dominated by the smallest loops allowed, which suggested that strings tend to break into very small pieces in the expanding universe. Albrecht and Turok modeled the network as hot body radiation, where loops are radiated from the long strings, and showed that the scaling solution is inevitable. However, the application of flat spacetime statistical mechanics to the string dynamics in the expanding universe is not necessarily justified. Thus, numerical simulations are unavoidable to decide whether the local string network settles down to the scaling regime or not.
Albrecht and Turok gave the first numerical investigation of the scaling property. Later three groups improved the numerical codes, and all groups concluded that the large scale behavior of the local string network goes into the scaling regime with the scaling parameter $`\xi `$ equal to $`(50\pm 25)`$ , $`(13\pm 2.5)`$ , and $`(16\pm 4)`$ in the radiation-dominated era. Though Albrecht and Turok found that the loop distribution function also scales, the higher resolution simulations performed by the other two groups showed that long strings have significant small scale structures and that loops are typically produced at scales much smaller than the horizon distance, close to the cut-off scale. In response, Austin et al. proposed the three-scale model with $`\xi `$, the step length $`\overline{\xi }`$, and the scale $`\zeta `$ describing the small scale structure. They found that $`\xi `$ and $`\overline{\xi }`$ grow with the horizon distance, but $`\zeta `$ begins growing only when the gravitational back reaction takes effect, with $`\zeta /\xi 10^4`$. However, Vincent et al. recently claimed from flat spacetime simulations that instead of loop production due to intercommutation, long strings directly emit massive particles, so that the dominant energy loss mechanism of local strings is not gravitational radiation but particle production.
Thus, though there is a consensus that the large scale structure of the local string network obeys the scaling solution, the loop production and the dominant energy loss mechanism are now in dispute. This is mainly because the inclusion of gravitational radiation in the numerical simulations is impossible, and gravitational radiation is so weak that kinks live for a long time.
On the other hand, the evolution of the global string network has been less studied. Its manifestation, however, is essential for estimating the abundance of relic axions radiated from axionic strings , which may be the cold dark matter. Also, the scaling property is indispensable for the scenario where global strings become the seeds of structure formation of the universe and produce the cosmic microwave background anisotropy . Thus, it is important to clarify the dynamics of the global string network, in particular, whether it enters the scaling regime like the local string network.
So far, the results for the local string network have been applied to the global string network because global strings also intercommute with a probability of order unity . But global strings have associated Nambu-Goldstone bosons, which lead to long-range forces between strings and provide the dominant energy loss mechanism of global strings. Therefore, instead of the Nambu-Goto action we have to use the Kalb-Ramond action , which is comprised of three components: the Nambu-Goto action, the kinetic term of the associated Nambu-Goldstone fields, and the coupling term between them. But the Kalb-Ramond action suffers from a logarithmic divergence due to the self-energy of the string. Dabholkar and Quashnock solved this difficulty by a prescription similar to that given by Dirac in the electromagnetic system , where the divergence is renormalized into the electron mass. They gave the renormalized equation of motion for a global string, comprised of the free part derived from the Nambu-Goto action and a damping term, which becomes negligible for a circular loop on the cosmological scale where $`\mathrm{ln}(t/\delta )\sim 𝒪(100)`$. They then concluded that the global string network can be well approximated by the motion of the Nambu-Goto action. However, as shown by Battye and Shellard (though the calculation is done in flat spacetime), the kinks on long strings are substantially rounded due to the backreaction of NG boson radiation, which may significantly affect the small scale structure (if at all) of the global network system. Also, the above approach cannot include the long-range force between long strings, which may decrease the energy density of long strings. Thus, the examination of the dynamics of global strings by use of the Kalb-Ramond action is not yet complete.
In the previous paper , we evolved the equation of motion of the complex scalar field directly, instead of using the Kalb-Ramond action. We showed that the large scale behavior of the global string network goes into the scaling regime and that $`\xi `$ for the global string network becomes $`𝒪(1)`$, which is significantly smaller than that for the local string network.
In this paper, we give a comprehensive analysis of the evolution of the global string network based on the model adopted in . We show the scaling of the global string network under more general situations and investigate the dependence of $`\xi `$ on boundary conditions and on several parameters. Then, the loop distribution function is given in order to decide whether small scale structure exists as in the local string network.
The paper is organized as follows: In the next section, we give the formulation of the numerical simulations. In Section 3, we describe the method of identification of string segments and closed loops. Then, the scaling parameter $`\xi `$ and the loop distribution function are given. Finally, we discuss our results and give conclusions.
## II Formulation of numerical simulations
We consider the following Lagrangian density for a complex scalar field $`\mathrm{\Phi }(x)`$,
$$\mathcal{L}[\mathrm{\Phi }]=g_{\mu \nu }(\partial ^\mu \mathrm{\Phi })(\partial ^\nu \mathrm{\Phi })^{*}-V_{\mathrm{eff}}[\mathrm{\Phi },T],$$
(1)
where $`g_{\mu \nu }`$ is identified with the Robertson-Walker metric and the effective potential $`V_{\mathrm{eff}}[\mathrm{\Phi },T]`$ is given by
$$V_{\mathrm{eff}}[\mathrm{\Phi },T]=\frac{\lambda }{2}(\mathrm{\Phi }\mathrm{\Phi }^{*}-\eta ^2)^2+\frac{\lambda }{3}T^2\mathrm{\Phi }\mathrm{\Phi }^{*}.$$
(2)
For $`T>T_c=\sqrt{3}\eta `$, the potential $`V_{\mathrm{eff}}[\mathrm{\Phi },T]`$ has a minimum at $`\mathrm{\Phi }=0`$, and the $`U(1)`$ symmetry is restored. On the other hand, new minima $`|\mathrm{\Phi }|_{\mathrm{min}}=\eta \sqrt{1-(T/T_c)^2}`$ appear and the symmetry is broken for $`T<T_c`$ (Fig. 1). In this case the phase transition is of second order.
The equation of motion is given by
$$\ddot{\mathrm{\Phi }}(x)+3H\dot{\mathrm{\Phi }}(x)-\frac{1}{R(t)^2}\nabla ^2\mathrm{\Phi }(x)=-V_{\mathrm{eff}}^{\prime }[\mathrm{\Phi },T],$$
(3)
where the prime represents the derivative $`\partial /\partial \mathrm{\Phi }^{*}`$ and $`R(t)`$ is the scale factor. The Hubble parameter $`H=\dot{R}(t)/R(t)`$ and the cosmic time $`t`$ are given by
$`H^2={\displaystyle \frac{8\pi }{3m_{\mathrm{pl}}^2}}{\displaystyle \frac{\pi ^2}{30}}g_{*}T^4,t={\displaystyle \frac{1}{2H}}={\displaystyle \frac{\xi }{T^2}},`$ (4)
where $`m_{\mathrm{pl}}=1.2\times 10^{19}`$ GeV is the Planck mass, $`g_{*}`$ is the total number of degrees of freedom of the relativistic particles, and radiation domination is assumed. We define the dimensionless parameter $`\zeta `$ as
$$\zeta \equiv \frac{\xi }{\eta }=\left(\frac{45m_{\mathrm{pl}}^2}{16\pi ^3g_{*}\eta ^2}\right)^{1/2}.$$
(5)
In our simulations, we take $`\zeta =10`$, which corresponds to $`\eta \simeq (10^{15}-10^{16})`$ GeV with $`g_{*}=1000`$, and later investigate the dependence of the results on $`\zeta `$. The energy density at each lattice point is written as
$$\rho (x)=\dot{\mathrm{\Phi }}(x)\dot{\mathrm{\Phi }}^{*}(x)+\frac{1}{R(t)^2}\nabla \mathrm{\Phi }(x)\cdot \nabla \mathrm{\Phi }^{*}(x)+V_{\mathrm{eff}}[\mathrm{\Phi },T].$$
(6)
We take the initial time $`t_i=t_c/4`$ and the final time $`t_f=75t_i=18.75t_c`$, where $`t_c`$ is the epoch $`T=T_c`$. Since the $`U(1)`$ symmetry is restored at the initial time $`t=t_i`$, we adopt as the initial condition the thermal equilibrium state with the mass squared,
$$m^2=\frac{d^2V_{\mathrm{eff}}[|\mathrm{\Phi }|,T]}{d|\mathrm{\Phi }|^2}|_{|\mathrm{\Phi }|=0},$$
(7)
which is the curvature of the potential at the origin at $`t=t_i`$. In the thermal equilibrium state, $`\mathrm{\Phi }`$ and $`\dot{\mathrm{\Phi }}`$ are Gaussian distributed with the correlation functions,
$`\langle \beta |\mathrm{\Phi }(𝒙)\mathrm{\Phi }^{*}(𝒚)|\beta \rangle _{\mathrm{equal}\text{-}\mathrm{time}}`$ $`=`$ $`{\displaystyle \int \frac{d𝒌}{(2\pi )^3}\frac{1}{2\sqrt{𝒌^2+m^2}}\mathrm{coth}\frac{\beta \sqrt{𝒌^2+m^2}}{2}e^{i𝒌\cdot (𝒙-𝒚)}},`$ (8)
$`\langle \beta |\dot{\mathrm{\Phi }}(𝒙)\dot{\mathrm{\Phi }}^{*}(𝒚)|\beta \rangle _{\mathrm{equal}\text{-}\mathrm{time}}`$ $`=`$ $`{\displaystyle \int \frac{d𝒌}{(2\pi )^3}\frac{\sqrt{𝒌^2+m^2}}{2}\mathrm{coth}\frac{\beta \sqrt{𝒌^2+m^2}}{2}e^{i𝒌\cdot (𝒙-𝒚)}}.`$ (10)
The fields $`\mathrm{\Phi }(𝒙)`$ and $`\dot{\mathrm{\Phi }}(𝒚)`$ are uncorrelated for $`𝒙\ne 𝒚`$. We generate these fields for the initial condition in momentum space, because the corresponding fields $`\stackrel{~}{\mathrm{\Phi }}(𝒌)`$ and $`\stackrel{~}{\dot{\mathrm{\Phi }}}(𝒌)`$ are uncorrelated there. These fields are then inverse Fourier transformed into configuration space with the FFT.
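As an illustration, this momentum-space initialization can be sketched as follows (a Python sketch, not the actual simulation code; lattice size, spacing, units, and function names are illustrative):

```python
import numpy as np

def thermal_initial_field(N, dx, m, T, seed=0):
    """Draw the complex field Phi in momentum space, where different
    modes are uncorrelated, with <|Phi_k|^2> fixed by the equal-time
    correlator coth(beta*omega_k/2)/(2*omega_k), then inverse-FFT to
    the N^3 lattice."""
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    omega = np.sqrt(kx**2 + ky**2 + kz**2 + m**2)
    # mode variance from the equal-time thermal correlator
    var = 1.0 / (2.0 * omega * np.tanh(omega / (2.0 * T)))
    # discrete-FT normalization so that <|Phi(x)|^2> = (1/V) sum_k var_k
    amp = np.sqrt(var * N**3 / dx**3)
    phi_k = amp * (rng.standard_normal(omega.shape)
                   + 1j * rng.standard_normal(omega.shape)) / np.sqrt(2.0)
    return np.fft.ifftn(phi_k)  # complex Phi(x) on the lattice
```

The same recipe with $`\sqrt{𝒌^2+m^2}/2`$ in place of $`1/(2\sqrt{𝒌^2+m^2})`$ gives the independent $`\dot{\mathrm{\Phi }}`$ field.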
Hereafter we measure the scalar field in units of $`t_i^{-1}`$, $`t`$ and $`x`$ in units of $`t_i`$, and the energy density in units of $`t_i^{-4}`$. The equation of motion and the total energy density are given by
$`\ddot{\mathrm{\Phi }}(x)+{\displaystyle \frac{3}{2t}}\dot{\mathrm{\Phi }}(x)-{\displaystyle \frac{1}{t}}\nabla ^2\mathrm{\Phi }(x)=-\left(|\mathrm{\Phi }|^2+{\displaystyle \frac{\zeta ^2}{36t}}-{\displaystyle \frac{\zeta ^2}{144}}\right)\mathrm{\Phi },`$ (12)
$`\rho (x)=\dot{\mathrm{\Phi }}(x)\dot{\mathrm{\Phi }}^{*}(x)+{\displaystyle \frac{1}{t}}\nabla \mathrm{\Phi }(x)\cdot \nabla \mathrm{\Phi }^{*}(x)+{\displaystyle \frac{1}{2}}\left(|\mathrm{\Phi }|^2-{\displaystyle \frac{\zeta ^2}{144}}\right)^2+{\displaystyle \frac{\zeta ^2}{36t}}|\mathrm{\Phi }|^2,`$ (13)
where $`\lambda `$ is set to unity for brevity. The scale factor $`R(t)`$ is normalized as $`R(1)=1`$.
We perform the simulations for four different sets of lattice sizes and spacings (see TABLE I): (1) $`128^3`$ lattices with the physical lattice spacing $`\delta x_{\mathrm{phys}}=2\sqrt{3}t_iR(t)/25`$. (2) $`64^3`$ lattices with $`\delta x_{\mathrm{phys}}=4\sqrt{3}t_iR(t)/25`$. (3) $`256^3`$ lattices with $`\delta x_{\mathrm{phys}}=\sqrt{3}t_iR(t)/25`$. (4) $`256^3`$ lattices with $`\delta x_{\mathrm{phys}}=2\sqrt{3}t_iR(t)/25`$. In all cases, the time step is taken as $`\delta t=0.01t_i`$. In the case (1), at the final time $`t_f`$ the box size is nearly equal to the horizon volume $`(H^{-1})^3`$ and the lattice spacing to the typical width $`\delta \simeq 1.0/(\sqrt{2}\eta )`$ of a string. Furthermore, in order to investigate the dependence on $`\zeta `$, we add a case with $`\zeta =5`$: (7) $`128^3`$ lattices with the physical lattice spacing $`\delta x_{\mathrm{phys}}=2\sqrt{6}t_iR(t)/25`$. In this case we follow the time development of the system until the final time $`t_f=150t_i=37.5t_c`$, when the box size is nearly equal to the horizon volume $`(H^{-1})^3`$ and the lattice spacing to the typical width of a string. For each case, we simulate the system starting from 10 ((2), (3), (4), and (7)) or 300 ((1)) different thermal initial conditions. Since the simulation box is larger than the horizon volume even at the final time of the simulation, we adopt the periodic boundary condition. Under the periodic boundary condition, however, there exist no infinite strings, so that strings can completely disappear from the simulation box. Therefore, in order to check the dependence on the boundary condition, we also simulate the cases in TABLE II under the reflective boundary condition, where $`\nabla ^2\mathrm{\Phi }(x)`$ vanishes on the boundary points (note that this condition differs from the open boundary condition, where $`\mathrm{\Phi }(x)`$ itself vanishes on the boundary points; under the open boundary condition the string feels attractive forces from the boundary, so that the number of strings tends to decrease compared with that in the real universe or under the periodic boundary condition).
Using the second order leap-frog method and the Crank-Nicholson scheme, the discretized equation of motion reads
$`\dot{\mathrm{\Phi }}_{𝒊,n+1/2}`$ $`=`$ $`{\displaystyle \frac{1}{1+\frac{3\delta t}{4t}}}\left[\left(\mathrm{\hspace{0.17em}1}-{\displaystyle \frac{3\delta t}{4t}}\right)\dot{\mathrm{\Phi }}_{𝒊,n-1/2}+{\displaystyle \frac{\delta t}{t}}\nabla ^2\mathrm{\Phi }_{𝒊,n}-\delta t\left\{|\mathrm{\Phi }_{𝒊,n}|^2+{\displaystyle \frac{\zeta ^2}{36t}}-{\displaystyle \frac{\zeta ^2}{144}}\right\}\mathrm{\Phi }_{𝒊,n}\right],`$ (14)
$`\mathrm{\Phi }_{𝒊,n+1}`$ $`=`$ $`\mathrm{\Phi }_{𝒊,n}+\delta t\dot{\mathrm{\Phi }}_{𝒊,n+1/2},`$ (15)
$`\nabla ^2\mathrm{\Phi }_{𝒊,n}`$ $`\equiv `$ $`{\displaystyle \underset{s=x,y,z}{\sum }}{\displaystyle \frac{\mathrm{\Phi }_{i_s+1_s,n}-2\mathrm{\Phi }_{i_s,n}+\mathrm{\Phi }_{i_s-1_s,n}}{(\delta x)^2}},`$ (16)
where $`𝒊`$ represents spatial index and $`n`$ temporal one.
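One update step of Eqs. (14)-(16) can be sketched as follows (a Python sketch assuming the conventional signs of the continuum equation of motion (12), periodic boundaries, units $`t_i=1`$ and $`\lambda =1`$; not the paper's actual code):

```python
import numpy as np

def leapfrog_step(phi, phidot, t, dt, dx, zeta):
    """One leapfrog / Crank-Nicholson update of the complex field on an
    N^3 lattice: advance phidot to the next half step, then phi."""
    # nearest-neighbour Laplacian with periodic boundary conditions
    lap = sum(np.roll(phi, +1, axis=a) - 2.0 * phi + np.roll(phi, -1, axis=a)
              for a in range(3)) / dx**2
    # -V'_eff in the comoving units of Eq. (12)
    force = -(np.abs(phi)**2 + zeta**2 / (36.0 * t) - zeta**2 / 144.0) * phi
    d = 3.0 * dt / (4.0 * t)
    phidot_new = ((1.0 - d) * phidot + (dt / t) * lap + dt * force) / (1.0 + d)
    phi_new = phi + dt * phidot_new
    return phi_new, phidot_new
```

A homogeneous field sitting at the instantaneous potential minimum feels no force, which is a convenient sanity check of the sign conventions.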
## III Results
In order to judge whether the global string network relaxes into the scaling regime, we give time development of $`\xi `$, which is defined as
$$\rho =\xi \mu /t^2,$$
(17)
where $`\mu \simeq 2\pi \eta ^2\mathrm{ln}(t/(\delta \xi ^{1/2}))`$ is the average energy per unit length of global strings.
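Note that when $`\rho `$ is estimated from the measured string length as $`\rho =\mu L_{\mathrm{string}}/V`$, the line energy $`\mu `$ cancels in Eq. (17), so $`\xi `$ is simply the string length per horizon-scale volume. A minimal Python sketch of the resulting estimator (units $`t_i=1`$; the default volume $`(2t)^3`$ follows the horizon volume used later in the text, everything else is illustrative):

```python
def scaling_parameter(n_core_points, dx_phys, t, volume=None):
    """Eq. (17) with rho = mu * L_string / V: mu cancels against the mu
    in rho = xi*mu/t^2, leaving xi = L_string * t^2 / V.  L_string is
    approximated by (number of identified core points) * dx_phys."""
    if volume is None:
        volume = (2.0 * t)**3       # horizon volume, as in the text
    length = n_core_points * dx_phys
    return length * t**2 / volume
```

This makes explicit that the logarithmic $`t`$-dependence of $`\mu `$ does not enter the measured $`\xi `$ itself.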
### A String identification
Before obtaining $`\xi `$ and the loop distribution function, we must identify the string segment. Since spacetime is discretized in our simulations, a point with $`\mathrm{\Phi }=0`$ corresponding to a string core is not necessarily situated at a lattice point. In the worst case, a point with $`\mathrm{\Phi }=0`$ lies at the center of a plaquette. Therefore, we use a static cylindrically-symmetric solution, which is obtained by solving the equation
$$\frac{d^2f}{dr^2}+\frac{1}{r}\frac{df}{dr}-\frac{f}{r^2}-V_{\mathrm{eff}}^{\prime }[f,T]=0,$$
(18)
with $`\mathrm{\Phi }(r,\theta )=f(r)e^{i\theta }`$ and the winding number $`n=1`$. The boundary conditions are given by
$`f(r)`$ $`\rightarrow `$ $`|\mathrm{\Phi }|_{\mathrm{min}},(r\rightarrow \infty ),`$ (19)
$`f(0)`$ $`=`$ $`0.`$ (20)
A lattice point is identified as part of a string core if the potential energy density there is larger than that corresponding to the field value of the static cylindrically-symmetric solution at $`r=\delta x_{\mathrm{phys}}/\sqrt{2}`$. Then only one lattice point within a cross section of a straight string core is identified as a string segment, except in the case where a point with $`\mathrm{\Phi }=0`$ lies at the center of a plaquette. Of course, a real string is not exactly straight but bent and more complex. But, as seen in Fig. 2, our identification works well and only one lattice point within a cross section of a string core is identified as part of the core. Solving the above equation requires the standard shooting technique, which is an iterative procedure and thus too costly to repeat at every time step. Instead, we obtain the solutions every $`500\delta t`$ and construct a fitting formula. For intermediate time steps, we judge whether a lattice point belongs to a string segment by comparing the potential energy obtained from the simulation with that from the fitting formula. Thus, by counting the number of lattice points identified as part of a string core, we can evaluate the total length of strings within the horizon volume, $`(2t)^3`$, from which the energy density can be obtained.
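The static profile and the identification criterion can be sketched as follows (a Python sketch with $`\lambda =\eta =1`$ and $`T=0`$ for simplicity; the integrator, step sizes and bisection tolerances are illustrative choices, not those of the paper):

```python
import numpy as np

def string_profile(a, r_max=20.0, n_steps=4000):
    """Integrate Eq. (18) at T=0 outward from r ~ 0 with the n=1 core
    behaviour f ~ a*r, using simple semi-implicit Euler steps."""
    dr = r_max / n_steps
    r, f, fp = dr, a * dr, a
    rs, fs = [r], [f]
    for _ in range(n_steps - 1):
        fpp = -fp / r + f / r**2 + (f**2 - 1.0) * f
        fp += dr * fpp
        f += dr * fp
        r += dr
        rs.append(r); fs.append(f)
        if abs(f) > 10.0:          # clearly overshot; stop early
            break
    return np.array(rs), np.array(fs)

def shoot_slope():
    """'Standard shooting technique': bisect on the core slope a so that
    f -> 1 at large r instead of over- or undershooting."""
    lo, hi = 0.0, 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        _, f = string_profile(mid)
        if f.max() > 1.0:          # crossed the vacuum value: too steep
            hi = mid
        else:                      # turned over below it: too shallow
            lo = mid
    return 0.5 * (lo + hi)

def core_mask(phi, dx):
    """Flag lattice points whose zero-T potential energy exceeds that of
    the static solution at r = dx/sqrt(2) (the criterion in the text)."""
    r, f = string_profile(shoot_slope())
    f_thr = np.interp(dx / np.sqrt(2.0), r, f)
    v_thr = 0.5 * (f_thr**2 - 1.0)**2
    v = 0.5 * (np.abs(phi)**2 - 1.0)**2
    return v > v_thr
```

In practice the profile would be solved with the finite-temperature potential and tabulated into the fitting formula mentioned above, rather than re-shot on the fly.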
### B Scaling property
The time development of $`\xi `$ in the cases (1) to (4) is shown in Fig. 3. We find that after some relaxation period, $`\xi `$ becomes constant irrespective of time, with (1) $`0.99\pm 0.09`$, (2) $`0.97\pm 0.04`$, (3) $`0.90\pm 0.06`$, and (4) $`0.90\pm 0.05`$. All values are consistent within one standard deviation. Thus we can conclude that a global string network relaxes into the scaling regime in the radiation dominated era. We also show the time development of $`\xi `$ in the case (7) in Fig. 4. There $`\xi `$ asymptotically becomes constant, $`0.88\pm 0.07`$, which is also consistent with all the above cases with $`\zeta =10`$ within one standard deviation. Hence we can also conclude that $`\zeta `$ does not change the essential result. Note that the standard deviation in the case (2) is much smaller than in the other cases because the box in the case (2) contains more horizon volumes at each time. Also, $`\xi `$ appears to oscillate at early epochs because the homogeneous mode of the field oscillates in the radial direction of the potential, an oscillation which rapidly decays. In fact, the period of the oscillation coincides with $`2\pi `$ times the inverse mass at the potential minimum.
Fig. 5 shows the results under the reflective boundary condition, where $`\xi `$ becomes constant irrespective of time, with (1) $`1.77\pm 0.03`$, (2) $`1.57\pm 0.04`$, (3) $`2.00\pm 0.05`$, (4) $`1.30\pm 0.02`$, (5) $`1.18\pm 0.03`$, and (6) $`1.03\pm 0.03`$. Though all results are consistent with those under the periodic boundary condition within a factor of two, the former tend to be larger than the latter. This is because under the reflective boundary condition strings are repelled by the boundary, and a string near the boundary intercommutes less often than one near the center of the simulation box, since partners to intercommute with lie only on the inner side of the boundary. The results of the large box simulations, (4)-(6), are converging, so it is safe to say that if we take the box size larger than $`2^3`$ times the horizon volume, the reflective boundary condition has no effect on the results.
### C Loop distribution
We also investigate the loop distribution. Since in our simulations it is judged by the potential energy whether a lattice point is part of a string, the identified string is not necessarily continuous. Therefore, we identify a closed loop as follows: first we select a lattice point belonging to a string segment. Then we connect it with the nearest neighbor among the lattice points belonging to string segments. We repeat this process until the connection returns to the starting lattice point.
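A minimal Python sketch of this nearest-neighbor tracing (assuming for simplicity that the supplied points all belong to a single loop; the choice of starting point and the stopping criterion are simplified relative to the text):

```python
import numpy as np

def trace_loop(points, start=0):
    """Order the core points of one closed loop by repeatedly hopping to
    the nearest not-yet-visited core point, starting from `start`.
    `points` is an (N, 3) array of lattice coordinates flagged as
    string segments; the walk closes back onto the starting point."""
    pts = np.asarray(points, dtype=float)
    unvisited = set(range(len(pts)))
    unvisited.discard(start)
    loop, cur = [start], start
    while unvisited:
        rest = sorted(unvisited)
        d = np.linalg.norm(pts[rest] - pts[cur], axis=1)
        cur = rest[int(np.argmin(d))]
        loop.append(cur)
        unvisited.discard(cur)
    return loop  # ordered indices; loop length ~ len(loop) * dx
```

The loop length entering the distribution below is then estimated from the number of points times the lattice spacing.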
Kibble’s one scale model predicts the loop distribution as
$$n_l(t)=\frac{\nu }{t^{\frac{3}{2}}(l+\kappa t)^{\frac{5}{2}}},$$
(21)
where $`\nu `$ is a constant, $`l`$ is the length of a loop, and the log-dependence of $`\mu `$ is neglected. Unlike the local string case, the dominant energy loss mechanism for global strings is radiation of the associated Nambu-Goldstone field . We define the radiation power $`P`$ as $`P=\kappa \mu `$, where $`\kappa `$ is a constant. An example of the decay of a closed loop is depicted in Fig. 6.
We now examine whether the loop distribution in the simulations coincides with the predicted loop distribution function above. The loop distributions in the case (1) at different times ($`t=45,55,65,75t_i`$) are shown in Fig. 7. Since long loops are rare, we bin the loop lengths with width $`5\delta x`$. Also, we divide the 300 realizations into 6 groups of 50 realizations each and sum the number of loops over the 50 realizations in each group. The dots represent the number of loops averaged over the 6 groups and the dashed lines represent the standard deviation. The data at all four times can be fitted simultaneously with the above formula if one takes $`\kappa \simeq 0.535`$ and $`\nu \simeq 0.0865`$. Fits for $`\kappa `$ and $`\nu `$ are also shown in Fig. 8. Thus the loop production function, as well as the large scale behavior of the strings, scales for the global string network.
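The simultaneous fit of $`\nu `$ and $`\kappa `$ can be sketched, for example, as a grid search over $`\kappa `$ with $`\nu `$ solved in closed form at each grid point (an illustrative least-squares scheme in $`\mathrm{log}n_l`$, not necessarily the paper's fitting procedure):

```python
import numpy as np

def fit_one_scale(ls, ts, ns, kappas=np.linspace(0.1, 1.5, 281)):
    """Fit nu, kappa of Eq. (21), n_l = nu / (t^{3/2} (l + kappa t)^{5/2}),
    to (loop length, time, number density) samples.  For each trial kappa
    the model is linear in log(nu), so log(nu) has a closed-form
    least-squares solution; the best (nu, kappa) pair is returned."""
    ls, ts, ns = map(np.asarray, (ls, ts, ns))
    logn = np.log(ns)
    best = (np.inf, None, None)
    for k in kappas:
        model = -1.5 * np.log(ts) - 2.5 * np.log(ls + k * ts)
        lognu = float(np.mean(logn - model))   # closed-form LSQ for log nu
        chi2 = float(np.sum((logn - (lognu + model))**2))
        if chi2 < best[0]:
            best = (chi2, float(np.exp(lognu)), float(k))
    return best[1], best[2]                    # (nu, kappa)
```

Fitting all four output times simultaneously, as in the text, just means passing the pooled samples to a single call.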
As further evidence for the scaling of the loop production, we consider the NG boson spectrum radiated from strings. If loops were formed at scales much smaller than the horizon distance, there should be significant power of radiated NG bosons in the modes corresponding to the scale at which loops are formed. For this purpose, we first represent the complex field $`\mathrm{\Phi }(t,𝒙)`$ in terms of the radial mode $`\chi (t,𝒙)`$ and the NG boson field $`\alpha (t,𝒙)`$ as
$$\mathrm{\Phi }(t,𝒙)=\left[\eta +\frac{\chi (t,𝒙)}{\sqrt{2}}\right]\mathrm{exp}\left(\frac{i\alpha (t,𝒙)}{\sqrt{2}\eta }\right).$$
(22)
Then the kinetic energy density of NG bosons is given by
$`{\displaystyle \frac{1}{2}}\dot{\alpha }(t,𝒙)^2`$ $`=`$ $`{\displaystyle \frac{\eta ^2}{|\mathrm{\Phi }(t,𝒙)|^4}}`$ (24)
$`\times \left[-\mathrm{Im}\mathrm{\Phi }(t,𝒙)\mathrm{Re}\dot{\mathrm{\Phi }}(t,𝒙)+\mathrm{Re}\mathrm{\Phi }(t,𝒙)\mathrm{Im}\dot{\mathrm{\Phi }}(t,𝒙)\right]^2.`$
One may wonder whether the power spectrum obtained by Fourier transforming the above kinetic energy density is what we want. In fact, it includes the contribution of NG bosons formed at the symmetry breaking as well as of those radiated from strings. Also, the decomposition of the field into radial and phase modes is ill-defined at lattice points near a string segment.
We therefore evaluate the average energy density of NG bosons radiated in the period between $`t_1`$ and $`t_2`$, $`\overline{\rho }[t_1,t_2]`$. For this purpose, we subtract the redshifted kinetic energy at $`t_1`$ from the kinetic energy at $`t_2`$, since emitted NG bosons redshift like radiation. Thus, $`\overline{\rho }[t_1,t_2]`$ is given by
$`\overline{\rho }[t_1,t_2]`$ $`=`$ $`{\displaystyle \frac{1}{V}}{\displaystyle \int d^3𝒙\rho [t_1,t_2;𝒙]}`$ (25)
$`=`$ $`{\displaystyle \frac{1}{V}}{\displaystyle \int d^3𝒙\left[\frac{1}{2}\dot{\alpha }(t_2,𝒙)^2-\frac{1}{2}\dot{\alpha }(t_1,𝒙)^2\left(\frac{t_1}{t_2}\right)^2\right]}`$ (26)
$`=`$ $`{\displaystyle \frac{1}{V}}{\displaystyle \int \frac{d^3𝒌}{(2\pi )^3}\left[\frac{1}{2}|\dot{\alpha }_𝒌(t_2)|^2-\frac{1}{2}|\dot{\alpha }_𝒌(t_1)|^2\left(\frac{t_1}{t_2}\right)^2\right]}`$ (27)
$`\equiv `$ $`{\displaystyle \int \frac{d^3𝒌}{(2\pi )^3}\stackrel{~}{\rho }_𝒌[t_1,t_2]}={\displaystyle \int _0^{\infty }}{\displaystyle \frac{dk}{2\pi ^2}}\rho _k[t_1,t_2],`$ (28)
where $`V`$ is the simulation volume and $`\dot{\alpha }(t,𝒙)={\displaystyle \int }\frac{d^3𝒌}{(2\pi )^3}\dot{\alpha }_𝒌(t)\mathrm{exp}(i𝒌\cdot 𝒙)`$.
Furthermore, to avoid contamination of the spectrum of emitted axions by string cores, we divide the simulation box into 8 cells and store the field data of a cell only if no string core passes through that cell between $`t_1`$ and $`t_2`$. We then average the power spectra of the kinetic energy of axions, obtained through Fourier transformation, over all such cells. We follow the above procedure between $`t_1=65t_i`$ and $`t_2=75t_i`$ for the case (3) under the periodic boundary condition, which is the highest resolution simulation, but with the zero temperature potential $`V_{\mathrm{eff}}[\mathrm{\Phi },T=0]`$ after $`t=20t_i`$, for which the decomposition of the field is well-defined. One may suspect that this spectrum is dominated by the kinetic energy of NG bosons associated with global strings (string NG bosons) rather than that of free NG bosons radiated from global strings. But this is incorrect for the following reason: the total energy of string NG bosons is almost as large as that of free NG bosons. However, the energy of string NG bosons is dominated by their gradient energy, because the kinetic energy of string NG bosons is smaller than their gradient energy by the factor $`v^2`$ ($`v`$ is the velocity of the string core, and $`v\ll 1`$ except just before the disappearance of a loop). On the other hand, the gradient and kinetic energies of free NG bosons are equal. Therefore the kinetic energy of free NG bosons is much larger than that of string NG bosons, that is, our spectrum is dominated by the former. Also, even if it contributed, the kinetic energy of string NG bosons would decay in proportion to $`t^{-2}`$, so that its contribution is removed from the spectrum by our method, as in Eq. (28).
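Eqs. (24) and (25)-(28) can be sketched as follows (a Python sketch; the shell binning and array conventions are illustrative, and the sign inside Eq. (24) follows $`\dot{\alpha }=\sqrt{2}\eta \,\mathrm{Im}(\dot{\mathrm{\Phi }}/\mathrm{\Phi })`$):

```python
import numpy as np

def ng_kinetic_density(phi, phidot, eta):
    """Eq. (24): kinetic energy density of the Nambu-Goldstone mode,
    (1/2) alpha_dot^2 = eta^2 (Re phi Im phidot - Im phi Re phidot)^2 / |phi|^4."""
    num = (phi.real * phidot.imag - phi.imag * phidot.real)**2
    return eta**2 * num / np.abs(phi)**4

def emitted_spectrum(adot1, adot2, t1, t2, dx, n_bins=15):
    """Eqs. (25)-(28): spectrum of NG bosons emitted between t1 and t2,
    subtracting the redshifted t1 power (radiation dilutes as t^-2 in
    these units) from the t2 power, then summing over |k| shells."""
    N = adot1.shape[0]
    p2 = 0.5 * np.abs(np.fft.fftn(adot2))**2
    p1 = 0.5 * np.abs(np.fft.fftn(adot1))**2
    rho_k = (p2 - p1 * (t1 / t2)**2) / N**6   # per-mode energy density
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    edges = np.linspace(0.0, kmag.max() + 1e-9, n_bins + 1)
    which = np.digitize(kmag.ravel(), edges)
    spec = np.array([rho_k.ravel()[which == b].sum()
                     for b in range(1, n_bins + 1)])
    return edges, spec
```

With this normalization the sum of the shell spectrum reproduces the volume-averaged emitted kinetic energy density, so the subtraction of the redshifted $`t_1`$ contribution can be checked mode by mode.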
The result is depicted in Fig. 9, which has already been given in in a different context. The spectrum of emitted NG bosons is strongly peaked at the horizon scale. Thus, loops are formed not at scales much smaller than the horizon distance but around the horizon scale.
## IV Discussions and conclusions
In this paper we gave a comprehensive investigation of the evolution of the global string network in the radiation dominated universe by means of numerical simulations based on the complex scalar field model which spontaneously breaks the U(1) symmetry.
In order to decide whether the large scale behavior of the global string network goes into the scaling regime, we followed the time development of the scaling parameter $`\xi `$, which characterizes the average energy density of global strings. We found that $`\xi `$ is almost constant irrespective of cosmic time under both the periodic and the reflective boundary conditions. All the results are consistent within a factor of two, but $`\xi `$ under the reflective boundary condition tends to be larger than under the periodic boundary condition. This can be understood as follows: under the periodic boundary condition there are no infinite strings, and strings with two boundary points on opposite planes always intercommute with their partners, so that $`\xi `$ tends to become small. On the other hand, under the reflective boundary condition strings are repelled by the boundary. Furthermore, a string near the boundary intercommutes less often than one near the center of the simulation box, because partners to intercommute with lie only on the inner side of the boundary. Thus $`\xi `$ tends to become large. Therefore, $`\xi `$ in the real universe should lie between that under the periodic boundary condition and that under the reflective boundary condition. Considering all the cases under the periodic boundary condition and the cases (4)-(6) of the large box simulations under the reflective boundary condition, it is safe to say that $`\xi \simeq (0.9-1.3)`$.
Note that $`\xi `$ is much smaller than that of a local string network, which is of the order of ten. This is mainly because global strings intercommute more often than local strings, since an attractive force proportional to the inverse separation acts between a global string and a global anti-string.
We have also investigated the loop distribution. It can be well fitted by the prediction of the one scale model if we take $`\nu \simeq 0.0865`$ and $`\kappa \simeq 0.535`$. Thus, the typical scale of loop production grows with the horizon distance, and we did not observe small scale structure. This is because the Nambu-Goldstone (NG) boson radiation from strings is so efficient that small scale structures on strings are damped out. The damping scale is typically $`\kappa t`$, which is near the horizon distance, in contrast with $`G\mu t\ll t`$ ($`G`$: the gravitational constant), if present at all, for the local string network. Bennett showed that unless loops produced with lengths of order the horizon distance frequently self-intersect and fragment into loops much smaller than the horizon distance, the reconnection rate is large enough to prevent scaling. In the global string network, parent loops do not fragment into much smaller loops; instead they rapidly shrink by radiating NG bosons, so that the reconnection rate of large parent loops becomes small and scaling is maintained.
### Acknowledgments
The author is grateful to Jun’ichi Yokoyama and Masahiro Kawasaki for useful discussions. This work was partially supported by the Japanese Grant-in-Aid for Scientific Research from the Monbusho, Nos. 10-04558.
# Running coupling for Wilson bermions

Talk given at the International Symposium on Lattice Field Theory, June 28–July 3, 1999, in Pisa, Italy.
## 1 Introduction and motivation
One of the main goals of the ALPHA collaboration is the non perturbative computation of the strong coupling $`\alpha _S`$ for energy scales ranging from hadronic scales to perturbative high energy scales. To this end a finite volume scale dependent renormalization scheme for QCD has been invented . Its central object is the step scaling function $`\mathrm{\Sigma }`$ which describes the running of the coupling under a discrete change of the scale. Other ingredients include Schrödinger functional boundary conditions, $`\mathrm{O}(a)`$ improvement and non perturbative renormalization. This method has been used successfully in the quenched approximation (see for a review). First results for full QCD with two flavours have been obtained recently .
In figure 1 the results for the step scaling function at the coupling $`\overline{g}^2=0.9793`$ are shown and compared with the quenched approximation and the perturbative continuum limit (these data were obtained in collaboration with A. Bode, R. Frezzotti, M. Guagnelli, K. Jansen, M. Hasenbusch, J. Heitger, R. Sommer and P. Weisz). Here an $`\mathrm{O}(a)`$ improved action was used. The observed cut-off effects are larger than in the quenched approximation. A naive extrapolation of the Monte Carlo data to the continuum limit yields a value which lies 2-3% above the perturbative estimate, although the coupling $`\overline{g}^2`$ is expected to be small enough to be in the perturbative regime. Furthermore, the cut-off effects in the quenched and in the full theory computed in lattice perturbation theory to two loops are of the same size . These observations can be interpreted as follows:
* statistical fluctuation
* true 3% deviation from renormalized perturbation theory
* lattice artifacts too large for continuum extrapolation
The investigation of these possibilities in full QCD would consume a substantial amount of computer time. Therefore we use Yang-Mills theory coupled to a bosonic Wilson spinor field, which corresponds to setting $`N_f=-2`$ in the QCD partition function, as a toy model to study the extrapolation to the continuum limit for a system different from pure gauge theory. This model has been called the bermion model in the literature .
## 2 The bermion model in the Schrödinger functional setup
Let the space time be a hypercubic Euclidean lattice with lattice spacing $`a`$ and volume $`L^3\times T`$. In the following we set $`T=L`$. The $`SU(3)`$ gauge field $`U(x,\mu )`$ is defined on the links while the bermion field $`\varphi (x)`$ which is a bosonic spinor field with color and Dirac indices is defined on the sites of the lattice. In the space directions we impose periodic boundary conditions while in the time direction we use Dirichlet boundary conditions. The boundary gauge fields can be chosen such that a constant color electric background field is enforced on the system which can be varied by a dimensionless parameter $`\eta `$ .
As explained above, our goal is to continue the exponent of the fermion determinant in the QCD partition function to the negative value $`N_f=-2`$. This can be achieved by integrating the bosonic field $`\varphi `$ with the Gaussian action
$`S_B`$ $`=`$ $`{\displaystyle \underset{x}{\sum }}|M\varphi (x)|^2,\text{where}`$ (1)
$`{\displaystyle \frac{1}{2\kappa }}M\varphi (x)`$ $`=`$ $`(𝒟+m_0)\varphi (x).`$ (2)
$`𝒟`$ is the Wilson Dirac operator with hopping parameter $`\kappa =(8+2m_0)^{-1}`$. For the gauge fields we employ the action
$$S_G=\frac{1}{g_0^2}\underset{p}{\sum }w(p)\text{tr}(1-U(p)).$$
(3)
The weights $`w(p)`$ are defined to be one for plaquettes $`p`$ in the interior and they equal $`c_t`$ for time like plaquettes attached to the boundary. The choice $`c_t=1`$ corresponds to the standard Wilson action. However, $`c_t`$ can be tuned in order to reduce lattice artifacts. For the bermion case we have in this work always chosen $`c_t=1`$.
Now the Schrödinger functional is defined as the partition function in the above setup:
$`Z={\displaystyle \int 𝒟U𝒟\varphi 𝒟\varphi ^+e^{-S_G-S_B}}`$ (4)
$`=`$ $`{\displaystyle \int 𝒟Ue^{-S_G(U)}det(M^+M)^{N_f/2}},`$ (5)
with $`N_f=-2`$. The effective action
$$\mathrm{\Gamma }=-\mathrm{log}Z$$
(6)
with the perturbative expansion
$$\mathrm{\Gamma }=g_0^{-2}\mathrm{\Gamma }_0+\mathrm{\Gamma }_1+g_0^2\mathrm{\Gamma }_2+\cdots $$
(7)
is renormalizable with no extra counterterms up to an additive divergent constant. That means that the derivative $`\mathrm{\Gamma }^{\prime }=\frac{\partial \mathrm{\Gamma }}{\partial \eta }`$ is a renormalized quantity and
$$\overline{g}^2(L)=\mathrm{\Gamma }_0^{}/\mathrm{\Gamma }^{}|_{\eta =0}$$
(8)
defines a renormalized coupling which depends only on $`L`$ and the bermion mass $`m`$. Note that this coupling can be computed efficiently from the expectation value $`\frac{\partial \mathrm{\Gamma }}{\partial \eta }=\left\langle \frac{\partial S}{\partial \eta }\right\rangle `$. The mass $`m`$ is defined via the PCAC relation . Here we use the fermionic boundary states of the Schrödinger functional (fermionic observables are constructed independently of the number of dynamical flavours $`N_f`$) to transform this operator relation into an identity (up to $`\mathrm{O}(a)`$) for fermionic correlation functions which can be computed on the lattice. This gives a time dependent mass $`m(x_0)`$. The mass $`m`$ is then defined by
$$m=\{\begin{array}{cc}m({\scriptscriptstyle \frac{T}{2}})\hfill & T\text{ even},\hfill \\ {\scriptscriptstyle \frac{1}{2}}\left(m({\scriptscriptstyle \frac{T1}{2}})+m({\scriptscriptstyle \frac{T+1}{2}})\right)\hfill & T\text{ odd.}\hfill \end{array}$$
(9)
To define the step scaling function $`\sigma (s,u)`$ let $`u=\overline{g}^2(L)`$ and $`m(L)=0`$. Then we change the length scale by a factor $`s`$ and compute the new coupling $`u^{}=\overline{g}^2(sL)`$. The lattice step scaling function $`\mathrm{\Sigma }`$ at the resolution $`L/a`$ is defined as
$$\mathrm{\Sigma }(s,u,a/L)=\overline{g}^2(sL)|_{\overline{g}^2(L)=u,m(L)=0}.$$
(10)
Note that the two conditions fix the two bare parameters $`g_0`$ and $`\kappa `$. The continuum limit $`\sigma (s,u)`$ can be found by an extrapolation in $`a/L`$. That means the computational strategy is as follows:
* Choose a lattice with $`L/a`$ points in each direction.
* Tune the bare parameters $`g_0`$ and $`\kappa `$ such that the renormalized coupling $`\overline{g}^2(L)`$ has the value $`u`$ and $`m(L)=0`$.
* At the same value of $`g_0`$ and $`\kappa `$ simulate a lattice with twice the linear size and compute $`u^{}=\overline{g}^2(2L)`$. This gives $`\mathrm{\Sigma }(2,u,\frac{a}{L})`$.
* Repeat steps 1.-3. with different resolutions $`L/a`$ and extrapolate $`\frac{a}{L}0`$, which yields $`\sigma (2,u)`$.
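The extrapolation of step 4 can be sketched as a weighted linear fit in $`a/L`$ (a Python sketch; the data in the usage test are synthetic, not the paper's):

```python
import numpy as np

def continuum_limit(L_over_a, Sigma, err):
    """Extrapolate the lattice step scaling function Sigma(2, u, a/L)
    to a/L -> 0 by a weighted least-squares fit
        Sigma = sigma + c * (a/L).
    Returns (sigma, c): the continuum value and the leading-artifact slope."""
    x = 1.0 / np.asarray(L_over_a, dtype=float)
    y = np.asarray(Sigma, dtype=float)
    w = 1.0 / np.asarray(err, dtype=float)**2
    A = np.stack([np.ones_like(x), x], axis=1)
    AtW = A.T * w                      # weighted normal equations
    sigma, c = np.linalg.solve(AtW @ A, AtW @ y)
    return float(sigma), float(c)
```

Constrained fits, such as the linear-plus-quadratic fit with the continuum value fixed to the perturbative estimate discussed below, would simply change the design matrix.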
## 3 Results
We have performed Monte Carlo simulations on APE100/Quadrics parallel computers with SIMD architecture and single precision arithmetic. The size of the machines ranged from 8 to 256 nodes. The gauge fields and the bermion fields have been generated by hybrid overrelaxation including microcanonical reflection steps. While we measured the gauge observables after each update of the fields, the fermionic correlation functions were determined only every 100th to 1000th iteration, since their measurement involves the inversion of the Dirac operator. The statistical errors have been determined by a direct computation of the autocorrelation matrix. The largest run has been performed for the lattice size $`L=24`$, which took about 12 days on the largest machine.
In figure 2 our results for the bermion lattice step scaling function
$`\mathrm{\Sigma }(2,0.9793,a/L)`$ are shown for the resolutions 4, 5, 6, 8 and 12 and compared with renormalized perturbation theory. The lattice artifacts are consistent with $`\mathrm{O}(a)`$ effects. Included is a linear fit to the data, which extrapolates to a continuum limit that lies above the perturbative estimate but is compatible with it. Furthermore, a linear plus quadratic fit is shown with the constraint that the continuum limit is given by the perturbative estimate. All the data points are compatible with this fit. Note that $`L/a=4`$ has been ignored in these fits.
In figure 3 we compare these data with results from the quenched approximation with the standard Wilson action and with a perturbatively $`\mathrm{O}(a)`$ improved action. Again, the most naive extrapolation of the unimproved data would lead to a value slightly above the perturbative estimate. However, if we use universality, i.e. the agreement of the continuum limits of the two data sets, as a constraint, their joint continuum limit is fully compatible with perturbation theory.
Furthermore we observe that in the bermion theory the cut off effects are larger than in the quenched approximation.
Since the extrapolation to the continuum limit in the quenched approximation is much easier in the improved case we plan to study the $`\mathrm{O}(a)`$ improved bermion model. This will also allow us to compare with the fermionic theory on equal footing.
This work is part of the ALPHA collaboration research program. We thank DESY for allocating computer time to this project and the DFG under GK 271 for financial support.
# Quantum fluctuations for drag free geodesic motion
## Abstract
The drag free technique is used to force a proof mass to follow a geodesic motion. The mass is protected from perturbations by a cage, and the motion of the latter is actively controlled to follow the motion of the proof mass. We present a theoretical analysis of the effects of quantum fluctuations for this technique. We show that a perfect drag free operation is in principle possible at the quantum level, in spite of the back action exerted on the mass by the position sensor.
PACS: 42.50 Lc; 04.80.Cc; 07.50-e
The Galilean principle of the universality of free fall has for a long time been tested with limited accuracy due to technical difficulties. The effect of air drag on falling bodies, the main one among these difficulties, was already discussed at length by Galileo and Newton . Nowadays this effect may be mastered either by an active control of the falling bodies or by letting the fall take place in a vacuum drop tower. Using an active control greatly reduces the vacuum requirement while allowing a test accuracy of $`5\times 10^{-10}`$ . This accuracy does not reach the level of torsion balance experiments but it may do so in the future with the additional use of a vacuum drop chamber . The accuracy of the measurement of the relative acceleration of freely falling bodies may thus reach $`10^{-12}`$ with a measurement time of $`4.7`$ s corresponding to a fall of $`109`$ m in the Bremen drop tower .
The use of the drag free technique has also been proposed for ultra high accuracy satellite tests of the equivalence principle . When such a technique is used, the freely falling proof mass is protected by the satellite from non-gravitational perturbations such as residual air drag, radiation pressure, et cetera, and its relative motion with respect to the satellite is monitored by a high sensitivity position sensor. The effect of these environmental perturbations on the satellite motion is then finely compensated through the action of thrusters. In the present paper, we evaluate the ultimate limits of the drag free technique. We study the residual motion of a proof mass protected from environmental perturbations by a cage, the motion of which is itself actively controlled to follow the geodesic motion of the proof mass. These questions have already been addressed from a classical point of view, which is sufficient for assessing the performance of real existing devices.
It is however very interesting to address the same questions in the context of quantum measurement theory, at least as questions of principle. In a drag free technique, the proof mass is continuously monitored by a position sensor and it is therefore submitted to the back action of the sensor . When the sensitivity of the sensor is improved, the back action noise is expected to increase. It seems therefore difficult to have at the same time a highly sensitive tracking and an unperturbed geodesic motion of the proof mass. In fact, this difficulty may be circumvented by the use of active techniques. We show in the present paper that drag free operation may in principle be performed at the quantum level, with the proof mass following a nearly ideal geodesic trajectory.
We consider in this paper a cold damped capacitive sensor developed at ONERA for ultrasensitive accelerometry . The present analysis relies heavily on previous studies of quantum and thermal fluctuations in such a device . In particular the analysis of the position sensor will simply be taken from these references. The new result of the present paper concerns the drag free system, with the measured acceleration between the proof mass and the cage used as an error signal to servo control the cage motion. The drag free system is sketched in figure $`1`$.
The mechanical response of the system in the absence of servo control can be written in terms of a mechanical impedance matrix
$$\left(\begin{array}{cc}\mathrm{\Xi }_\mathrm{p}+\mathrm{\Xi }_\mathrm{s}& -\mathrm{\Xi }_\mathrm{s}\\ -\mathrm{\Xi }_\mathrm{s}& \mathrm{\Xi }_\mathrm{c}+\mathrm{\Xi }_\mathrm{s}\end{array}\right)\left(\begin{array}{c}V_\mathrm{p}\\ V_\mathrm{c}\end{array}\right)=\left(\begin{array}{c}F_\mathrm{p}^\mathrm{t}\\ F_\mathrm{c}^\mathrm{t}\end{array}\right)$$
(1)
Throughout the paper the descriptions are given in the frequency domain and the quantum convention is used for the Fourier transform. The electronics convention may be recovered by substituting $`j`$ for $`-i`$. All physical observables are represented as non-commutative quantum operators.
The proof mass velocity $`V_\mathrm{p}`$ and the cage velocity $`V_\mathrm{c}`$ are related through this equation to the total forces $`F_\mathrm{p}^\mathrm{t}`$ and $`F_\mathrm{c}^\mathrm{t}`$ acting on the two objects
$`F_\mathrm{p}^\mathrm{t}`$ $`=`$ $`F_\mathrm{p}+F_\mathrm{s}+F_\mathrm{t}`$ (2)
$`F_\mathrm{c}^\mathrm{t}`$ $`=`$ $`F_\mathrm{c}-F_\mathrm{s}-F_\mathrm{t}`$ (3)
Here $`F_\mathrm{p}`$ is the external force acting on the proof mass despite the screening (for example the gravitational force) and $`F_\mathrm{c}`$ is the external force acting on the cage (including all the environmental perturbations). $`F_\mathrm{s}`$ is the force exerted on the proof mass and on the cage through the separating space between them (for example the Langevin force associated with residual gas). $`F_\mathrm{t}`$ is the back action force exerted by the electromechanical transducer used to measure the relative position between the proof mass and the cage. The action and reaction forces have been treated as equal, which amounts to neglecting the delays at the low mechanical frequencies considered here.
$`\mathrm{\Xi }_\mathrm{p}`$ is the mechanical impedance of the free proof mass, that is a mass which would be neither perturbed by the environmental forces screened by the cage nor disturbed by its coupling to a sensor. In particular, it takes into account the inertial mass $`M_\mathrm{p}`$ and the dissipative effects associated with the unscreened forces. In this sense, what we call in the following the free motion of the proof mass is described by the equation
$$\mathrm{\Xi }_\mathrm{p}V_\mathrm{p}^{\mathrm{fr}}=F_\mathrm{p}$$
(4)
When quantum fluctuations are taken into account (see for example ) this equation is known to include the fluctuations associated with the Schrödinger equation in the limiting case of vanishingly small dissipation . $`\mathrm{\Xi }_\mathrm{c}`$ is similarly the impedance of the cage in the absence of coupling to the proof mass and to the transducer. Finally $`\mathrm{\Xi }_\mathrm{s}`$ corresponds to the interaction between the proof mass and the cage through the restoring force, the damping associated with residual gas, the back action forces due to the presence of the position sensor, et cetera.
The mechanical impedances and force fluctuations are related through the fluctuation dissipation relations initially discovered by Einstein in his analysis of Brownian motion . These relations are known as Nyquist relations for electrical systems and they are written below with quantum fluctuations accounted for . The dissipative part of the susceptibility functions is directly related to the commutator of the quantum force fluctuations . The force fluctuations $`F`$ are then characterized by a noise spectrum $`\sigma _{FF}`$ with its well-known expression for thermal equilibrium at a temperature $`T`$
$`F\left[\omega \right]F\left[\omega ^{\prime }\right]=2\pi \delta \left(\omega +\omega ^{\prime }\right)\sigma _{FF}`$ (5)
$`\sigma _{FF}=2k_B\mathrm{\Theta }Re\mathrm{\Xi }\left[\omega \right]`$ (6)
$`k_B\mathrm{\Theta }={\displaystyle \frac{\mathrm{}\left|\omega \right|}{2}}\mathrm{coth}{\displaystyle \frac{\mathrm{}\left|\omega \right|}{2k_BT}}`$ (7)
The symbol ‘$``$’ denotes a symmetrized product for quantum operators (used to get rid of the ordering ambiguity), $`k_B`$ is the Boltzmann constant and $`T`$ the temperature. We have introduced an effective temperature $`\mathrm{\Theta }`$ with $`k_B\mathrm{\Theta }`$ representing exactly the energy per mode of the fluctuations. This energy reproduces the classical result $`k_BT`$ at the high temperature limit and the zero point energy $`\frac{\mathrm{}\left|\omega \right|}{2}`$ at zero temperature.
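The interpolation performed by the effective temperature of Eq. (7) can be checked numerically in both regimes; the following sketch uses SI physical constants and a detection frequency taken only as the 100 kHz order of magnitude mentioned later in the text:

```python
import math

# Limits of the effective temperature, Eq. (7): k_B*Theta equals the
# zero-point energy hbar*|omega|/2 at T = 0 and the classical energy
# k_B*T at high temperature.
hbar = 1.054571817e-34  # J s
kB = 1.380649e-23       # J/K

def kB_Theta(omega, T):
    """Energy per mode (hbar*|omega|/2) * coth(hbar*|omega| / (2*kB*T))."""
    x = hbar * abs(omega) / 2
    if T == 0:
        return x                        # zero-point limit
    return x / math.tanh(x / (kB * T))  # coth(y) = 1/tanh(y)

omega = 2 * math.pi * 1e5  # ~100 kHz detection circuit (order of magnitude)
print(kB_Theta(omega, 300) / (kB * 300))        # ~1.0: classical limit k_B*T
print(kB_Theta(omega, 0) / (hbar * omega / 2))  # 1.0: zero-point limit
```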
The motion sensor is a capacitive transducer designed to detect the relative acceleration between the cage and the proof mass. Its operation and performances have been described in detail elsewhere . We recall here the properties of this sensor when the various parameters are optimized .
The sensor measures the velocity difference $`\delta V`$ between the cage and the proof mass
$$\delta V=V_\mathrm{c}-V_\mathrm{p}$$
(8)
Its performance can be discussed by introducing an estimator $`\widehat{\delta V}`$ representing the velocity $`\delta V`$ as it may be inferred from the output of the sensor. After an appropriate scaling this estimator is the sum of the velocity signal $`\delta V`$ and of a sensing error $`V_{\mathrm{se}}`$
$$\widehat{\delta V}=\delta V+V_{\mathrm{se}}$$
(9)
The sensing error $`V_{\mathrm{se}}`$ and the back action force $`F_\mathrm{t}`$ exerted by the transducer onto the proof mass are the main parameters of interest for evaluating the performance of the sensor. Here we use their expressions taken from and discuss the drag free system.
In the drag free system the measured velocity $`\widehat{\delta V}`$ is used as an error signal to servo control the motion of the cage. It is worth recalling that this error signal has been obtained after a preamplification stage with a large gain . As shown in , we may greatly simplify the analysis of the whole system by considering the further amplification stages as noiseless. This is true in particular for the amplifications used in the feedback loops. Hence, we may write the servo control force $`F_{\mathrm{df}}`$ acting on the cage as merely proportional to the velocity estimator $`\widehat{\delta V}`$
$$F_{\mathrm{df}}=-G\widehat{\delta V}$$
(10)
The factor $`G`$ is the gain of the servo loop and it corresponds to an effective mechanical impedance. With the feedback in operation, the equations of motion are now written
$$\left(\begin{array}{cc}\mathrm{\Xi }_\mathrm{p}+\mathrm{\Xi }_\mathrm{s}& -\mathrm{\Xi }_\mathrm{s}\\ -\mathrm{\Xi }_\mathrm{s}-G& \mathrm{\Xi }_\mathrm{c}+\mathrm{\Xi }_\mathrm{s}+G\end{array}\right)\left(\begin{array}{c}V_\mathrm{p}\\ V_\mathrm{c}\end{array}\right)=\left(\begin{array}{c}F_\mathrm{p}^\mathrm{t}\hfill \\ F_\mathrm{c}^\mathrm{t}-GV_{\mathrm{se}}\hfill \end{array}\right)$$
(11)
In the limit of a large servo loop gain, the solution of these equations reads
$`V_\mathrm{p}`$ $`=`$ $`{\displaystyle \frac{1}{\mathrm{\Xi }_\mathrm{p}}}\left(F_\mathrm{p}+F_\mathrm{s}+F_\mathrm{t}-\mathrm{\Xi }_\mathrm{s}V_{\mathrm{se}}\right)`$ (12)
$`V_\mathrm{c}`$ $`=`$ $`{\displaystyle \frac{1}{\mathrm{\Xi }_\mathrm{p}}}\left(F_\mathrm{p}+F_\mathrm{s}+F_\mathrm{t}-\left(\mathrm{\Xi }_\mathrm{s}+\mathrm{\Xi }_\mathrm{p}\right)V_{\mathrm{se}}\right)`$ (13)
$`=`$ $`V_\mathrm{p}-V_{\mathrm{se}}`$ (14)
These quantum equations have exactly the same form as in a classical analysis of fluctuations. As a first result, this proves that the drag free technique is also effective when quantum fluctuations are taken into account. But it is more interesting to consider these equations as describing the ultimate performance of the system as it would be limited by quantum fluctuation processes and to address in this manner the questions asked in the introduction.
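The large-gain limit can be verified numerically. The sketch below uses arbitrary illustrative impedances and forces (not parameters of the actual sensor) and checks that, for a gain much larger than the impedances, the solution of the servo equations approaches Eqs. (12) and (14):

```python
import numpy as np

# Numeric sanity check of the large servo-loop gain limit. All values
# are arbitrary illustrative numbers.
Xp = 2.0 + 0.5j   # proof mass impedance
Xc = 5.0 + 1.0j   # cage impedance
Xs = 1.0 + 0.2j   # coupling impedance
Fp_t, Fc_t, Vse = 1.0, -0.3, 0.1   # total forces and sensing error
G = 1e8                            # large servo-loop gain

M = np.array([[Xp + Xs, -Xs],
              [-Xs - G, Xc + Xs + G]])
rhs = np.array([Fp_t, Fc_t - G * Vse])
Vp, Vc = np.linalg.solve(M, rhs)

Vp_limit = (Fp_t - Xs * Vse) / Xp  # Eq. (12)
print(abs(Vp - Vp_limit))          # ~0: proof mass decoupled from the cage
print(abs(Vc - (Vp - Vse)))        # ~0: cage tracks the proof mass, Eq. (14)
```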
As already explained, the free motion of the proof mass corresponds to the unperturbed equation (4). This free motion is recovered in (14), superimposed on perturbations having various origins. $`F_\mathrm{s}`$ represents the Langevin forces associated with the dissipative part of the impedance $`\mathrm{\Xi }_\mathrm{s}`$ characterizing the coupling between the proof mass and the cage. $`F_\mathrm{t}`$ is the back action exerted by the sensor on the proof mass and $`V_{\mathrm{se}}`$ is the sensing error, which appears multiplied by the mechanical impedance $`\mathrm{\Xi }_\mathrm{s}`$. If we concentrate on questions of principle, these terms can be arbitrarily reduced by tailoring the impedances and the associated fluctuations. Quantum mechanics does not prevent a perfect drag free operation.
In the following we focus our attention on a simple and realistic treatment of the amplifier as a phase independent device characterized by an equivalent noise temperature and noise impedance . In this situation, the back action noise and the sensing error can be written as
$`\sigma _{F_\mathrm{t}F_\mathrm{t}}`$ $`=`$ $`4k_B\mathrm{\Theta }_\mathrm{a}\rho \left|\mathrm{\Xi }_\mathrm{s}\right|`$ (15)
$`\sigma _{V_{\mathrm{se}}V_{\mathrm{se}}}`$ $`=`$ $`4k_B\mathrm{\Theta }_\mathrm{a}{\displaystyle \frac{1}{\rho \left|\mathrm{\Xi }_\mathrm{s}\right|}}`$ (16)
$`k_B\mathrm{\Theta }_\mathrm{a}`$ $`=`$ $`{\displaystyle \frac{\mathrm{}\omega _\mathrm{t}}{2}}\mathrm{coth}{\displaystyle \frac{\mathrm{}\omega _\mathrm{t}}{2k_BT_\mathrm{a}}}`$ (17)
They vary as conjugate noises as functions of the mechanical impedance $`\left|\mathrm{\Xi }_\mathrm{s}\right|`$ and of the dimensionless ratio $`\rho `$ which represents the impedance matching between the amplifier and the electromechanical transducer . $`\mathrm{\Theta }_\mathrm{a}`$ is the effective temperature characterizing the preamplifier noise. It depends on a temperature $`T_\mathrm{a}`$ and on the operating frequency $`\omega _\mathrm{t}`$ of the electrical detection circuit. The latter lies in the 100 kHz range, that is, at much higher frequencies than the mechanical frequencies $`\mathrm{\Omega }`$ of interest. The correlation function of the proof mass velocity (14) is thus
$`\left|\mathrm{\Xi }_\mathrm{p}\right|^2\sigma _{V_\mathrm{p}V_\mathrm{p}}`$ $`=`$ $`2k_B\mathrm{\Theta }_\mathrm{p}Re\mathrm{\Xi }_\mathrm{p}`$ (18)
$`+`$ $`2k_B\mathrm{\Theta }_\mathrm{s}Re\mathrm{\Xi }_\mathrm{s}+4k_B\mathrm{\Theta }_\mathrm{a}\left|\mathrm{\Xi }_\mathrm{s}\right|\left(\rho +{\displaystyle \frac{1}{\rho }}\right)`$ (19)
The first line represents the free motion of the proof mass and the second line the noises added respectively by the three additional terms in (14). The noise added to the free motion of the test mass by the drag free system has a minimum level when the impedances are matched so that $`\rho =1`$. The noise thus reaches its minimum value
$`\left|\mathrm{\Xi }_\mathrm{p}\right|^2\sigma _{V_\mathrm{p}V_\mathrm{p}}`$ $`=`$ $`2k_B\mathrm{\Theta }_\mathrm{p}Re\mathrm{\Xi }_\mathrm{p}`$ (20)
$`+`$ $`2k_B\mathrm{\Theta }_\mathrm{s}Re\mathrm{\Xi }_\mathrm{s}+8k_B\mathrm{\Theta }_\mathrm{a}\left|\mathrm{\Xi }_\mathrm{s}\right|`$ (21)
Equation (21) describes in a quantitative manner the ultimate performance of the drag free system as far as the residual motion of the proof mass is concerned. The added noise contains the fluctuations $`2k_B\mathrm{\Theta }_\mathrm{s}Re\mathrm{\Xi }_\mathrm{s}`$ corresponding to the Langevin force associated with the dissipation between the proof mass and the cage, and a second term of the same order of magnitude when the equivalent temperatures $`\mathrm{\Theta }_\mathrm{s}`$ and $`\mathrm{\Theta }_\mathrm{a}`$ are equal. In fact the latter may even be larger in this situation if the impedance $`\mathrm{\Xi }_\mathrm{s}`$ is mainly reactive. Note that this feature makes a difference with the previously studied case where the accelerometer was used for measuring the force acting on the proof mass . In the previous case, the last term was reduced by a factor of the order of the frequency transposition ratio $`\frac{\mathrm{\Omega }}{\omega _\mathrm{t}}`$. Here, in contrast, we are concerned with the real motion of the proof mass and not only with the accelerometry signal used as error signal. We do not benefit from this frequency transposition ratio for the motion control. To illustrate this point we rewrite the velocity noise (21) under the assumption of zero temperature, that is to say, a temperature small with respect to all frequencies of interest
$$\left|\mathrm{\Xi }_\mathrm{p}\right|^2\sigma _{V_\mathrm{p}V_\mathrm{p}}=\mathrm{}\mathrm{\Omega }Re\left(\mathrm{\Xi }_\mathrm{p}+\mathrm{\Xi }_\mathrm{s}\right)+4\mathrm{}\omega _\mathrm{t}\left|\mathrm{\Xi }_\mathrm{s}\right|$$
(22)
The frequency transposition gives a large weight to the noise added by the servo control. The technique of frequency transposition so well adapted to the measurement of a force with the accelerometer is of no help for a drag free operation.
Coming back to the general case, we may also write the velocity noise for the actively controlled cage motion. The formula equivalent to (19) reads
$`\left|\mathrm{\Xi }_\mathrm{p}\right|^2\sigma _{V_\mathrm{c}V_\mathrm{c}}`$ $`=`$ $`2k_B\mathrm{\Theta }_\mathrm{p}Re\mathrm{\Xi }_\mathrm{p}+2k_B\mathrm{\Theta }_\mathrm{s}Re\mathrm{\Xi }_\mathrm{s}`$ (23)
$`+`$ $`4k_B\mathrm{\Theta }_\mathrm{a}\left|\mathrm{\Xi }_\mathrm{s}\right|\left(\rho +{\displaystyle \frac{\left|\mathrm{\Xi }_\mathrm{p}+\mathrm{\Xi }_\mathrm{s}\right|^2}{\rho \left|\mathrm{\Xi }_\mathrm{s}\right|^2}}\right)`$ (24)
It can be used to discuss the ultimate performance of the drag free system when the emphasis is put on the geodesic motion of the cage. In this case a different value of the impedance matching parameter $`\rho `$ has to be chosen in order to optimize the performance of the system, $`\rho =\frac{\left|\mathrm{\Xi }_\mathrm{p}+\mathrm{\Xi }_\mathrm{s}\right|}{\left|\mathrm{\Xi }_\mathrm{s}\right|}`$, and this choice leads to a minimum velocity noise for the cage
$`\left|\mathrm{\Xi }_\mathrm{p}\right|^2\sigma _{V_\mathrm{c}V_\mathrm{c}}`$ $`=`$ $`2k_B\mathrm{\Theta }_\mathrm{p}Re\mathrm{\Xi }_\mathrm{p}`$ (25)
$`+`$ $`2k_B\mathrm{\Theta }_\mathrm{s}Re\mathrm{\Xi }_\mathrm{s}+8k_B\mathrm{\Theta }_\mathrm{a}\left|\mathrm{\Xi }_\mathrm{p}+\mathrm{\Xi }_\mathrm{s}\right|`$ (26)
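Both impedance-matching optima have the same structure: the added noise involves a factor $`f(\rho )=\rho +A^2/\rho `$, with $`A=1`$ for the proof mass (Eq. (21)) and $`A=\left|\mathrm{\Xi }_\mathrm{p}+\mathrm{\Xi }_\mathrm{s}\right|/\left|\mathrm{\Xi }_\mathrm{s}\right|`$ for the cage (Eq. (26)), minimized at $`\rho =A`$ with minimum $`2A`$. A brief numerical check, with an arbitrary illustrative value of $`A`$:

```python
import numpy as np

# The added-noise factor f(rho) = rho + A**2/rho has its minimum 2*A at
# rho = A. A is an arbitrary illustrative value here.
A = 3.7
rho = np.linspace(0.1, 20.0, 200001)
f = rho + A**2 / rho
i = np.argmin(f)
print(rho[i], f[i])   # close to (A, 2*A)
```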
Active techniques are now used in the operation of gravitational wave detection with interferometers , in particular for improving the isolation of the mirrors from the ground motion. The motion of the first stage of the isolator is measured with accelerometers and compensated with feedback action. In the last stage, servo control is also used to perform the final positioning of the mirror. The discussions presented in the present paper could be applied to analyse the limits of these techniques.
Acknowledgments
We wish to thank Francesca Grassia, Pierre Touboul and Philippe Tourrenc for stimulating discussions.
## 1 Introduction
There are two kinds of photon redshift known in the literature, a gravitational and a cosmological one. Though in the General Relativity framework the two shifts can be described very similarly, in the literature they are usually discussed separately. The cosmological redshift is that of light from distant galaxies which recede. It is generally referred to as the Hubble redshift. For the farthest observed galaxies it is quite large, $`\mathrm{\Delta }\lambda /\lambda \sim 5`$. The gravitational redshift arises when light moves away from a static massive object, e.g. the earth or the sun. Its observed magnitudes are generally small. This paper is devoted exclusively to this type of redshift.
The gravitational redshift is a classical effect of Einstein’s General Relativity (GR), one predicted by him well before that theory was created (for the historical background, see e.g., ). Phenomenologically one can simply affirm that the frequency of light emitted by two identical atoms is smaller for the atom which sits deeper in the gravitational potential. A number of ingenious experiments have been performed to measure various manifestations of this effect. They are discussed in a number of excellent reviews whose main goal is to contrast the predictions of GR with those of various non-standard theories of gravity. Explanations of the gravitational redshift per se within the standard framework are however not critically discussed in these reviews.
Most treatises on GR follow the definitive reasoning of Einstein, according to which the gravitational redshift is explained in terms of a universal property of standard clocks (atoms, nuclei). The proper time interval between the events of emission of two photons, as measured by a standard clock at the point of emission, is different from the proper time interval between the events of absorption of those photons, as measured by an identical standard clock at the point of absorption (in this way it was first formulated in ).
In a static gravitational potential the picture simplifies because there is a distinguished time – the one on which the metric does not depend. This time can be chosen as a universal (world) time. Under this choice the energy difference between two atomic levels increases with the distance of the atom from the earth, while the energy of the propagating photon does not change. (In what follows we speak of the earth, but it could be any other massive body.) Thus what is called the redshift of the photon is actually a blueshift of the atom. As for the proper times at different points, they are related to the universal time via a multiplier which depends on the gravitational potential and hence has different values at different points (see section 4).
Actually, most modern textbooks and monographs derive the redshift by using sophisticated general relativity calculations, e.g. using orthonormal bases (a sequence of proper reference frames) to define the photon energy and parallel transporting the photon’s 4-momentum along its world-line. Sometimes this description is loosely phrased as a degradation of the photon’s energy as it climbs out of a gravitational potential well. (Some other classical textbooks also use this loose phrasing .) However, the non-expert should be warned that the mathematics underlying this description is radically different from the heuristic (and wrong) arguments presented in many elementary texts, e.g. . These authors claim to deduce the “work against gravity” viewpoint by pretending that the photon is like a normal, low-velocity, massive particle and thus has a “photon mass” and a “photon potential energy”. Such derivations are incorrect and should be avoided. They are in fact avoided in exceptional popular texts, e.g. .
## 2 Experiments
The first laboratory measurement of the gravitational redshift was performed at Harvard in 1960 by Robert Pound and Glenn Rebka , (with 10% accuracy) and in 1964 by Pound and Snider (with 1% accuracy). The photons moved in a 22.5-meter tall tower. The source and the absorber of the photons ($`\gamma `$-rays of 14.4 keV energy) were <sup>57</sup>Fe nuclei. The experiment exploited the Mössbauer effect which makes the photon lines in a crystal extremely monochromatic. The redshift was compensated through the Doppler effect, i.e., by slowly moving the absorber and thereby restoring the resonant absorption. The shift measured in this way was $`\mathrm{\Delta }\omega /\omega \sim 10^{-15}`$.
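The measured order of magnitude follows from simple arithmetic on $`gh/c^2`$; the sketch below uses the tower height from the text and assumes standard values of $`g`$ and $`c`$:

```python
# Back-of-the-envelope check of the fractional shift g*h/c^2 for the
# Harvard tower experiment (h from the text; g and c are assumed
# standard values).
g = 9.81        # m/s^2
h = 22.5        # m, height of the tower
c = 2.998e8     # m/s, speed of light
shift = g * h / c**2
print(f"{shift:.2e}")   # ~2.5e-15, matching the quoted order of magnitude
```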
As to the interpretation of the result, there is some ambiguity in the papers by Pound et al. Although they mention the clock interpretation by referring to Einstein’s original papers, the absolute reddening of the photon is also implied as can be seen from the title of Pound’s talk in Moscow “On the weight of photons”. From the title of the paper by Pound and Snider , “Effect of gravity on nuclear resonance” one might infer that they did not want to commit themselves to any interpretation.
By contrast, the majority of the reviews of gravitational experiments quote the Harvard result as a test of the behaviour of clocks. In fact the result must be interpreted as a relative shift of the photon frequency with respect to the nuclear one since the experiment does not measure these frequencies independently.
An experiment measuring the relative shift of a photon (radio-wave) frequency with respect to an atomic one was also performed with a rocket flying up to 10,000 km above the earth and landing in the ocean .
Alongside these experiments, direct measurements of the dependence of the atomic clock rate on altitude were performed using airplanes , (see also reviews ). In these experiments a clock which had spent many hours at high altitude was brought back to the laboratory and compared with its “earthly twin”. The latter, once corrected for various background effects, lagged behind by $`\mathrm{\Delta }T=(gh/c^2)T`$, where $`T`$ is the duration of the flight at height $`h`$, $`g`$ the gravitational acceleration, and $`c`$ the speed of light.
One of these background effects is the famous “twin paradox” of Special Relativity, which stems from the fact that moving clocks run slower than clocks at rest. It is easy to derive a general formula which includes both the gravitational potential $`\varphi `$ and the velocity $`u`$ (see, e.g., the book by C. Möller ):
$$d\tau =dt\left[1+2\varphi /c^2-u^2/c^2\right]^{1/2},$$
(1)
where $`\tau `$ is the proper “physical” time of the clock, while $`t`$ is the so-called world time, which can be introduced in the case of a static potential and which is sometimes called laboratory time.
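To first order, Eq. (1) gives competing contributions $`gh/c^2`$ (gravitational) and $`u^2/2c^2`$ (velocity) to the clock rate. The following rough sketch illustrates their relative size for an airborne clock; the altitude, speed and flight duration are illustrative assumptions, not data from the cited experiments:

```python
# Competing first-order terms of Eq. (1) for an airborne clock:
# d(tau)/dt - 1 ~ g*h/c**2 - u**2/(2*c**2). All flight parameters
# below are illustrative assumptions.
c = 2.998e8      # m/s
g = 9.81         # m/s^2
h = 1.0e4        # m, cruise altitude (assumed)
u = 250.0        # m/s, cruise speed (assumed)
T = 40 * 3600.0  # s, time aloft (assumed)

gain = g * h / c**2        # gravitational speed-up of the flying clock
loss = u**2 / (2 * c**2)   # special-relativistic slowing ("twin paradox")
advance = (gain - loss) * T
print(f"gravitational gain per unit time: {gain:.2e}")
print(f"velocity loss per unit time    : {loss:.2e}")
print(f"net advance over the flight    : {advance:.2e} s")  # ~1e-7 s
```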
In his lectures on gravity , Richard Feynman stresses the differences between the effects due to $`u`$ and to $`\varphi `$. He concludes that the centre of the earth must be by a day or two younger than its surface.
## 3 Theory before General Relativity: “elevators”
Since most of the conclusive experiments on the gravitational redshift were earthbound, we shall throughout use that frame in which the earth is at rest (neglecting its rotation).
As is well known, a potential is defined up to a constant. When considering the gravitational potential $`\varphi (r)`$ at an arbitrary distance $`r`$ from the earth’s centre, it is convenient to set $`\varphi (\mathrm{})=0`$; then $`\varphi `$ is negative everywhere.
Near the earth’s surface (at $`h=r-R\ll R`$) it is legitimate to approximate $`\varphi `$ linearly:
$$\delta \varphi (h)=\varphi (R+h)-\varphi (R)=gh,$$
(2)
where $`g`$ is the usual gravitational acceleration. Note that $`\delta \varphi (h)`$ is positive for $`h>0`$. We shall discuss the redshift only to the first order in the parameter $`gh/c^2`$.
(The linear approximation Eq. (2) is valid for the description of experiments . It is obvious, however, that for the high-flying rockets ($`h\sim 10^4`$ km) it is not adequate and the newtonian potential should be used, but this is not essential for the dilemma “clocks versus photons” which is the subject of this paper.)
Einstein’s first papers on the gravitational redshift contain many of the basic ideas on the subject which were incorporated (sometimes without proper critical analysis) into numerous subsequent texts. He considered the Doppler effect in the freely falling frame and found the increase of the frequency of an atom (clock) with increasing height (potential). The cornerstone of his considerations was the local equivalence between the behaviour of a physical system in a gravitational field and in a properly accelerated reference frame.
For the potential (2) it is particularly convenient to appeal to Einstein’s freely falling reference frame (“elevator”). In such an elevator an observer cannot detect any manifestation of gravity by any strictly local experiment (equivalence principle). Operationally, “strictly local” means that the device used is sufficiently small not to be sensitive to curvature effects.
Consider, from such an elevator falling with acceleration $`g`$, a photon of frequency $`\omega `$ which is emitted upwards by an atom at rest on the surface of the earth and which is expected to be absorbed by an identical atom fixed at height $`h`$. The frequency of light is not affected by any gravitational field in a freely falling elevator: it keeps the frequency with which it was emitted. Assume that at the moment of emission $`(t=0)`$ the elevator had zero velocity. At the time $`t=h/c`$, when the photon reaches the absorbing atom, the latter will have velocity $`v=gh/c`$ directed upwards in the elevator frame. As a result the frequency of the photon, as seen by the absorbing atom, will be shifted by the linear Doppler effect by $`v/c=gh/c^2`$ towards the red, that is
$$\frac{\mathrm{\Delta }\omega }{\omega }=-\frac{gh}{c^2}.$$
(3)
(Minute corrections of higher order in $`gh/c^2`$ to the “elevator formulas” are lucidly discussed on the basis of a metric approach in ref. .) Consider now another situation, when the upper atom (absorber) moves in the laboratory frame with a velocity $`v=gh/c`$ downwards. Then in the elevator frame it will have zero velocity at the moment of absorption and hence it will be able to absorb the photon resonantly in complete agreement with experiments . Obviously, in the elevator frame there is no room for the interpretation of the redshift in terms of a photon losing its energy as it climbs out of the gravitational well.
## 4 Theory in the framework of General Relativity: metric
Up to now we used only Special Relativity and newtonian gravity. As is well known, a consistent relativistic description of classical gravity is given in the framework of GR with its curved space-time metric. One introduces a metric tensor, $`g_{ik}(x),i`$, $`k=0,1,2,3`$, which is, in general, coordinate dependent and transforms by definition under a change of coordinates in such a way that the interval $`ds`$ between two events with coordinates $`x^i`$ and $`x^i+dx^i`$,
$$ds^2=g_{ik}(x)dx^idx^k$$
(4)
is invariant. Setting $`dx^1=dx^2=dx^3=0`$, one obtains the relation between the proper time interval $`d\tau =ds/c`$ and the world time interval $`dt=dx^0/c`$ for an observer at rest
$$d\tau =\sqrt{g_{00}}dt.$$
(5)
For a static case, Eq. (5) integrates to
$$\tau =\sqrt{g_{00}}t,$$
(6)
where $`g_{00}`$ is a function of $`𝒙`$ in the general case while in the case of Eq.(2) it is a function of $`x^3=z=h`$.
The time $`\tau `$ is displayed by a standard clock and can also be viewed as a time coordinate in the so-called comoving locally inertial frame, i.e. the locally inertial frame which at a given instant has zero velocity with respect to the laboratory frame. If one has a set of standard clocks at different points, then their proper times $`\tau `$ are differently related to the world (laboratory) time $`t`$, due to the $`𝒙`$-dependence of $`g_{00}`$ (see Eq. (6)). This explains the airplane experiments , . Let us recall that sometimes the world time is called laboratory time. The former term reflects the fact that it is the same for the whole world; the latter signifies that it can be set with standard clocks in the laboratory. Many authors refer to $`t`$ as the coordinate time.
A weak gravitational field can be described by a gravitational potential $`\varphi `$, and $`g_{00}`$ is related to the gravitational potential:
$$g_{00}=1+2\varphi /c^2.$$
(7)
We shall explain the meaning of this relation a bit later (see Eqs. (8)–(10)). According to Eqs. (6), (7), the deeper a clock sits in the gravitational potential, the slower it runs in the laboratory.
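An order-of-magnitude sketch of Eq. (7) at the earth’s surface, using standard values for Newton’s constant and the earth’s mass and radius (these numbers are assumptions, not taken from the text):

```python
import math

# Order of magnitude of g00 = 1 + 2*phi/c**2 at the earth's surface,
# with phi = -G*M/R and phi(infinity) = 0 as in the text. Earth
# parameters are standard assumed values.
G_N = 6.674e-11   # m^3 kg^-1 s^-2, Newton's constant
M_E = 5.972e24    # kg, earth mass
R_E = 6.371e6     # m, earth radius
c = 2.998e8       # m/s

phi = -G_N * M_E / R_E
g00 = 1 + 2 * phi / c**2
print(f"2*phi/c^2   = {2 * phi / c**2:.2e}")      # ~ -1.4e-9: weak field
print(f"sqrt(g00)-1 = {math.sqrt(g00) - 1:.2e}")  # ~ -7e-10 clock-rate deficit, Eq. (6)
```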
Analogous to Eq. (5) is a relation between the rest energy $`E_0`$ of a body in the laboratory frame and in the comoving locally inertial frame,
$$E_0^{lab}=E_0^{loc}\sqrt{g_{00}}$$
(8)
(notice that $`E_0^{lab}dt=E_0^{loc}d\tau `$; this is because the energy $`E`$ is the zero-th component of a covariant 4-vector, while $`dt`$ is the zero-th component of a contravariant 4-vector).
The rest energy in the locally inertial frame is the same as in special relativity (see e.g. and the book by E. Taylor and J.A. Wheeler , p.246).
$$E_0^{loc}=mc^2,$$
(9)
while the rest energy in the laboratory system $`E_0^{lab}`$ also includes the potential energy of the body in the gravitational field. This is in accordance with the main principle of general relativity: all effects of gravity arise only via the metric tensor. Eq. (7) for the $`g_{00}`$ component of the metric tensor in a weak gravitational field can be considered as a consequence of Eqs. (8), (9) and of the relation
$$E_0^{lab}=mc^2+m\varphi ,$$
(10)
which generalizes the notion of the rest energy of a free particle to that of a particle of mass $`m`$ in a weak gravitational potential.
Now we are in a position to explain the redshift in the laboratory frame. According to Eq. (8) or Eq. (10) the energy difference $`\epsilon _{lab}`$ of atomic or nuclear levels in that frame depends on the location of the atom. The deeper the atom sits in the gravitational potential, the smaller $`\epsilon _{lab}`$ is. For an absorber atom which is located at height $`h`$ above an identical atom which emits the photon, the relative change in the energy difference is $`gh/c^2`$,
$$\frac{\mathrm{\Delta }\epsilon ^{lab}}{\epsilon ^{lab}}=\frac{gh}{c^2}.$$
(11)
(We use in Eq. (11) the linear approximation of Eq. (2).) One can say that the energy levels of the absorber atoms are shifted towards the blue in the laboratory frame. Eq. (11) is, of course, nothing but a way to describe the difference in the rates of atomic clocks located at a height $`h`$ one above the other. On the other hand, the energy (frequency) of the photon is conserved as it propagates in a static gravitational field. This can, for example, be seen from the wave equation of electromagnetic field in the presence of a static gravitational potential or from the equations of motion of a massless (or massive) particle in a static metric. Clearly, in the laboratory system there is no room for the interpretation in which the photon loses its energy when working against the gravitational field.
Finally, one can discuss the experiment using a sequence of locally inertial frames which are comoving with the laboratory clocks (atoms) at the instants when the photon passes them. As we explained above, in such systems the standard clocks run with the same rates, the rest energy of the atom is equal to its mass times $`c^2`$, Eq. (9), and the energy levels of the atom are the same as at infinity. On the other hand, the energy of the photon in the laboratory frame $`E_\gamma ^{lab}=\mathrm{}\omega ^{lab}`$ and in the comoving locally inertial frame $`E_\gamma ^{loc}`$ are related as
$$E_\gamma ^{lab}=E_\gamma ^{loc}\sqrt{g_{00}}.$$
(12)
Eq. (12) follows from Eq. (8) by noticing that the photon can be absorbed by a massive body and by considering the increase of the rest energy of that body. Thus, since $`E_\gamma ^{lab}`$ is conserved, $`E_\gamma ^{loc}`$ decreases with height:
$$\frac{\omega ^{loc}(h)-\omega ^{loc}(0)}{\omega ^{loc}(0)}=\frac{E_\gamma ^{loc}(h)-E_\gamma ^{loc}(0)}{E_\gamma ^{loc}(0)}=-\frac{gh}{c^2}$$
(13)
and this is the observed redshift of the photon. But $`E_\gamma ^{loc}`$ decreases not because the photon works against the gravitational field; the gravitational field is absent in any locally inertial frame. $`E_\gamma ^{loc}`$ changes because one passes from one locally inertial frame to another: the one comoving with the laboratory at the moment of emission, the other at the moment of absorption.
## 5 Pseudoderivation and misinterpretation of gravitational redshift
The simplest (albeit wrong) explanation of the redshift is based on ascribing to the photon both an inertial and a gravitational “mass” $`m_\gamma =E_\gamma /c^2`$. Thereby a photon is attracted to the earth with a force $`gm_\gamma `$, while the fractional decrease of its energy (frequency) at height $`h`$ is
$$\frac{\mathrm{\Delta }E_\gamma }{E_\gamma }=\frac{\mathrm{\Delta }\omega }{\omega }=gm_\gamma h/m_\gamma c^2=gh/c^2.$$
(14)
Note that (up to a sign) this is exactly the formula for the blueshift of an atomic level. That should not be surprising: the atom and the photon are treated here on the same footing, i.e. both non-relativistically! This is of course inappropriate for the photon. If the explanation in terms of a gravitational attraction of the photon to the earth were also correct, then one would be forced to expect a doubling of the redshift (the sum of the effects on the clock and on the photon) in the Pound-type experiments.
Some readers may invoke Einstein’s authority to contradict what was said above. In his 1911 paper , he advanced the idea that energy is not only a source of inertia, but also a source of gravity. Loosely speaking, he used the heuristic argument “whenever there is mass, there is also energy and vice versa”. As he realized later, this “vice versa” was not as correct as the direct statement (a photon has energy while its mass is zero). By applying this energy-mass argument, he calculated the energy loss of a photon moving upwards in the potential of the earth as discussed above. With the same heuristic principle he also derived an expression for the deflection of light by the sun which however underestimated the deflection angle by a factor of 2. Subsequently, in the framework of GR, Einstein recovered this factor . The correct formula was confirmed by observation.
## 6 The wavelength measurement
Up to now we have discussed the gravitational redshift in terms of the photon frequency and clocks. Let us now consider the same phenomenon in terms of the photon wavelength and gratings. To do this we consider two identical gratings at different heights, inclined with respect to the $`z`$-axis along which the light propagates between them. We do not go into the details of this gedanken experiment; the $`z`$-projection of the grating spacing is used only as a standard of length. The lower grating serves as a monochromator, i.e. as the light source. The wavelength of the photon $`\lambda ^{lab}(z)`$ corresponds to its frequency, while the grating spacing in the vertical ($`z`$) direction, $`l^{lab}(z)`$, corresponds to the rate of the clocks.
For the sake of simplicity, one may consider a very small incidence angle on the gratings, i.e. the grazing incidence of the light. In that case, the vertical projection of the spacing is practically the spacing itself. (Recall that, for the grazing incidence, the spacing $`l^{lab}`$ must be of the same order as the wavelength $`\lambda ^{lab}`$.)
While the photon energy $`E^{lab}`$ is conserved in a static gravitational field, the photon momentum $`p^{lab}`$ is not. The relation between these quantities is given by the condition that the photon remains massless, which in a gravitational field reads
$$g^{ij}p_ip_j=0$$
(15)
where $`g^{ij}`$, $`i,j=0,\mathrm{\ldots },3`$, are the contravariant components of the metric tensor and $`p_j`$ are the components of the 4-momentum, with $`p_0=E^{lab}`$ and $`p_3=p^{lab}=2\pi \mathrm{}/\lambda ^{lab}(z)`$ (for a photon moving along the $`z`$-axis). For the cases we are discussing, the metric $`g^{ij}`$ can be taken in diagonal form, in particular $`g^{33}=1/g_{33}`$.
From Eq. (15) one readily finds how $`\lambda ^{lab}(z)`$ changes with height:
$$\lambda ^{lab}(z)=\sqrt{g^{33}(z)/g^{33}(0)}\sqrt{g^{00}(0)/g^{00}(z)}\lambda ^{lab}(0)$$
(16)
On the other hand, the grating spacing in the $`z`$-direction, $`l^{lab}(z)`$, also changes with height. This is just the standard change of scale in the gravitational field, explained e.g. in the book by L. Landau and E. Lifshitz, § 84, :
$$l^{lab}(z)=\sqrt{g^{33}(z)}l^0$$
(17)
where $`l^0`$ is the “proper spacing” in $`z`$-direction, counterpart of the proper period of the standard clock. Thus the spacing $`l^{lab}(z)`$ depends on $`z`$ as follows
$$l^{lab}(z)=\sqrt{g^{33}(z)/g^{33}(0)}l^{lab}(0)$$
(18)
Finally, in the wavelength analogue of the Pound et al. experiment with $`z=h`$ one would measure the double ratio $`(\lambda (h)/l(h))/(\lambda (0)/l(0))`$. The result can be presented in the form
$$\mathrm{\Delta }\lambda ^{lab}/\lambda ^{lab}-\mathrm{\Delta }l^{lab}/l^{lab}=\sqrt{g^{00}(0)/g^{00}(h)}-1\approx gh/c^2$$
(19)
where $`\mathrm{\Delta }\lambda ^{lab}/\lambda ^{lab}=[\lambda ^{lab}(h)-\lambda ^{lab}(0)]/\lambda ^{lab}(0)`$ and analogously for $`\mathrm{\Delta }l^{lab}/l^{lab}`$. Notice that $`g^{33}`$ drops out from the result. This should be so because there is freedom in the choice of the $`z`$-scale, and observed quantities cannot depend on this choice. Eq. (19) is analogous to the one describing the Pound et al. experiments:
$$\mathrm{\Delta }\omega /\omega -\mathrm{\Delta }ϵ/ϵ=\sqrt{g_{00}(0)/g_{00}(h)}-1\approx -gh/c^2$$
(20)
where $`\omega `$ is the frequency of the photon and $`ϵ/\mathrm{}`$ is the frequency of the clock (see Eq. (11)). A word of explanation should be added about Eq. (20). In the laboratory frame the first term on the left-hand side is equal to zero,
$$\mathrm{\Delta }\omega ^{lab}/\omega ^{lab}=0$$
(21)
as discussed in Section 3, so only the second term, given by Eq. (11), contributes.
We would, however, like to stress an important difference as compared to the case of frequency. There one can independently measure the difference in rates of the upper and lower clocks, $`\mathrm{\Delta }ϵ^{lab}/ϵ^{lab}`$ (the analogue of the second term on the left-hand side of Eq. (19)), and that was done in the airplane experiments. Here the change of the scale, $`\mathrm{\Delta }l^{lab}/l^{lab}`$, cannot be measured independently. This important difference comes from the fact that the metric is static in time while it depends on $`z`$.
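The cancellation of $`g^{33}`$ in Eq. (19) can be checked numerically. In the sketch below, a weak-field $`g_{00}`$ is combined with a deliberately arbitrary $`z`$-dependent $`g_{33}`$, chosen only to exhibit the cancellation; it is not a physical metric:

```python
import math

g, c, h = 9.81, 2.998e8, 1.0e3   # a 1 km height, chosen only for illustration

def g_00(z):                      # weak-field covariant g_00, cf. Eq. (7)
    return 1.0 + 2.0 * g * z / c**2

def g_33(z):                      # arbitrary z-dependent covariant g_33
    return -(1.0 + 0.3 * z / (z + 1.0))

# For a diagonal metric g^{00} = 1/g_00 and g^{33} = 1/g_33, so the ratios
# in Eqs. (16) and (18) can be written with the covariant components inverted.
lam_ratio = math.sqrt(g_33(0) / g_33(h)) * math.sqrt(g_00(h) / g_00(0))  # Eq. (16)
l_ratio = math.sqrt(g_33(0) / g_33(h))                                   # Eq. (18)

double_ratio = lam_ratio / l_ratio   # g^{33} drops out, as stated after Eq. (19)
print(double_ratio - 1.0, g * h / c**2)
```

However the arbitrary $`g_{33}`$ is chosen, the double ratio depends only on $`g_{00}`$, and its deviation from unity reproduces $`gh/c^2`$.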
One has to realize that such a laboratory experiment with gratings cannot be performed at the present state of the art in experimental physics (recall the importance of the Mössbauer effect in the experiments of Pound et al.). However, for the measurement of a large redshift, e.g. that of a sodium spectral line from the sun, it is feasible. Such a grating experiment was performed by J.W. Brault in 1962 and is described in §38.5 of the monograph by C. Misner, K. Thorne and J.A. Wheeler . In this experiment the wavelength of the emitted light was fixed not by the lower grating, but by the atom on the sun’s surface.
## 7 Conclusions
The present article contains little original material; it is primarily pedagogical. The gravitational redshift being, both theoretically and experimentally, one of the cornerstones of General Relativity, it is very important that it always be taught in a simple but nevertheless correct way. That way centers on the universal modification of the rate of a clock exposed to a gravitational potential. An alternative explanation in terms of a (presumed) gravitational mass of a light pulse and its (presumed) potential energy is incorrect and misleading. We exhibit its fallacy and schematically discuss redshift experiments in the framework of the correct approach. We want to stress those experiments in which an atomic clock was flown to, and kept at, high altitude and subsequently compared with its twin that never left the ground. The travelling clock was found to run ahead of its earthbound twin. The blueshift of clocks with height has thus been exhibited as an absolute phenomenon. One sees once again that the explanation of the gravitational redshift in terms of a naive “attraction of the photon by the earth” is wrong.
## Acknowledgements
We would like to thank V.V. Okorokov, who asked the question on the compatibility, in the framework of General Relativity, of the experiments by Pound et al. and the airplane experiments. We also thank S.I. Blinnikov, A.D. Dolgov, A.Yu. Morozov, N. Straumann, K. Thorne and G. Veneziano for very interesting discussions. We would like to express our gratitude especially to E.L. Schucking for his help in substantially improving our bibliography and for his insistence on a unified invariant approach to both gravitational and cosmological redshifts based on Killing vectors. We did not, however, follow his advice, wanting to focus the general reader’s attention on the fallacy of the widespread naive interpretation of the gravitational redshift. Last but not least, we want to thank J.A. Wheeler for his encouragement. One of us (L.O.) would like to thank the Theoretical Physics Division of CERN, where part of this work was done, for its hospitality.
# Measurement of the Solar Neutrino Capture Rate by SAGE and Implications for Neutrino Oscillations in Vacuum
\[
astro-ph/9907131
## Abstract
The Russian-American solar neutrino experiment has measured the capture rate of neutrinos on metallic gallium in a radiochemical experiment at the Baksan Neutrino Observatory. Eight years of measurement give the result $`67.2_{-7.0-3.0}^{+7.2+3.5}`$ solar neutrino units, where the uncertainties are statistical and systematic, respectively. The restrictions these results impose on vacuum neutrino oscillation parameters are given.
\]
Although standard solar models (SSM) based on nuclear fusion have had great success in explaining many observed properties of the Sun, their prediction of the solar neutrino flux is not consistent with experimental measurements. The Homestake chlorine experiment , the water Cherenkov detectors Kamiokande and Super-Kamiokande , and the Ga experiments SAGE , and GALLEX have all measured a neutrino detection rate considerably below SSM predictions. In view of the recent very strong evidence for oscillations of atmospheric neutrinos , it seems reasonable to suppose that the deficit of solar neutrinos may also be the result of neutrino oscillations.
In this Letter we present results of the ongoing SAGE experiment and consider its implications on the widely discussed hypothesis of vacuum oscillations.
Ga experiments detect neutrinos by the reaction <sup>71</sup>Ga$`(\nu _e,e^{-})`$<sup>71</sup>Ge. They are the only presently operating experiments with a sufficiently low threshold (233 keV) to be able to measure the low-energy neutrinos from proton-proton ($`pp`$) fusion – the major energy-producing reaction in the Sun. SSM calculations predict that the total expected capture rate in <sup>71</sup>Ga is 129 solar neutrino units (SNU), of which 69.6 SNU arise from the $`pp`$ neutrinos, with significant contributions from the <sup>7</sup>Be and <sup>8</sup>B neutrinos (34.4 SNU and 12.4 SNU, respectively), and lesser contributions from the CNO and $`pep`$ neutrinos (9.8 SNU and 2.8 SNU, respectively). [1 SNU = 10<sup>-36</sup> interactions per second per target atom.]
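As a trivial arithmetic check, the individual SSM contributions quoted above do add up to the quoted total:

```python
# SSM contributions to the 71Ga capture rate, in SNU, as quoted in the text.
contributions = {"pp": 69.6, "7Be": 34.4, "8B": 12.4, "CNO": 9.8, "pep": 2.8}
total = sum(contributions.values())
print(f"total = {total:.1f} SNU")  # 129.0 SNU, the quoted SSM prediction
```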
A detailed discussion of the SAGE experimental procedures, including the chemical extraction, low-background counting of <sup>71</sup>Ge, data analysis methods, and systematic effects, is given in . The combined result from 88 separate counting data sets is $`67.2_{-7.0-3.0}^{+7.2+3.5}`$ SNU. The dominant contributions to the systematic uncertainty come from the Ge extraction efficiency and the <sup>71</sup>Ge counting efficiency. The individual measurement results are plotted in Fig. 1.
The SAGE result of 67.2 SNU is approximately $`7\sigma `$ lower than SSM predictions. It is almost impossible to reconcile this discrepancy by an alteration of the astrophysical components of the SSM. If one artificially sets the rate of the <sup>3</sup>He$`(\alpha ,\gamma )`$<sup>7</sup>Be reaction to zero, so that the <sup>7</sup>Be and <sup>8</sup>B neutrinos are eliminated, then solar models predict that the Ga experiment should measure $`88.1_{-2.4}^{+3.2}`$ SNU, more than $`2\sigma `$ greater than our result. If, in addition, all the cross sections for the CNO reactions are set to zero, so that the Sun produces only $`pp`$ and $`pep`$ neutrinos, then the Ga experiment should measure $`79.5_{-2.0}^{+2.3}`$ SNU, about $`1.5\sigma `$ above our result. Since the $`pp`$ rate is well determined by the solar luminosity, the deficit of solar neutrinos observed in the Ga experiment implies that new physics beyond the standard model of the electroweak interaction is required to understand the solar neutrino spectrum.
A credible explanation of the solar neutrino problem that does not contradict any other known phenomena is to assume that the neutrinos produced in the Sun have changed flavor by the time they reach the Earth. There are several ways in which such neutrino oscillations may occur. In one type, Mikheyev-Smirnov-Wolfenstein (MSW) oscillations, the solar $`\nu _e`$ transforms into other flavor neutrinos or a sterile neutrino as it passes through a thin resonance region near the solar core. In the second type, vacuum oscillations, the neutrino changes flavor in the vacuum between the Sun and the Earth. In another type, resonant spin flavor conversion, the electron neutrino, provided it has a suitably large magnetic moment, transforms into other species undetectable in Ga as it passes through the solar magnetic field.
Oscillations between two neutrino species are characterized by two parameters: $`\mathrm{\Delta }m^2`$, the difference of the eigenstate masses, and $`\theta `$, the mixing angle between the mass eigenstates. The Ga experiments, sensitive to the low-energy $`pp`$ and <sup>7</sup>Be neutrinos, combined with the high-energy response of the Cl and Super-Kamiokande experiments, substantially restrict the allowed range of $`\mathrm{\Delta }m^2`$ and $`\theta `$ for all oscillation scenarios. The regions of parameter space that are consistent with all solar neutrino experiments have been well discussed in the literature – see for a comprehensive review and references to original papers. At the present time there is no evidence that favors any one of the various oscillation solutions over the others.
As an example of neutrino oscillations, we consider in the following the case of vacuum oscillations (VO). Under the VO assumption, a reasonably good fit to the results of all solar neutrino experiments is obtained for $`\mathrm{\Delta }m^2\approx 6.5\times 10^{-11}`$ eV<sup>2</sup> and $`\mathrm{sin}^22\theta \approx 0.75`$ . One predicted consequence of neutrino oscillations for parameters in this range is a seasonal variation in the solar neutrino flux. If such a seasonal variation were observed, it would distinguish clearly between the MSW and VO solutions, as parameters in most of the MSW range give no detectable time variation in the Ga experiment beyond that expected from the eccentricity of the Earth’s orbit.
To explore this possibility, we give in Table I the results of the combined analysis of subsets of SAGE data that are grouped by the time of year in which the exposure occurred. The bimonthly grouping that combines February and March is shown in Fig. 2. This choice is arbitrary and the qualitative conclusions we draw below are insensitive to it. Approximating the asymmetric statistical uncertainty by a symmetric error, an expedient analysis technique that makes details of the fit easy to elucidate and extends readily to the analysis discussed below, these results fit quite well ($`\chi ^2=4.9`$ with 5 degrees of freedom) to a constant value of 67.2 SNU, the global best fit to the SAGE data.
Since the fit to a constant rate is quite good, there is no need to invoke VO to explain the time dependence of the data. Nonetheless, to see how the neutrino parameter space is constrained by the SAGE time-of-year results, we will fit them to the VO hypothesis. The survival probability $`P_{\nu _e\nu _e}`$ of an electron neutrino of energy $`E`$ which undergoes vacuum oscillations can be written
$$P_{\nu _e\nu _e}=1-\mathrm{sin}^22\theta \mathrm{sin}^2(\pi R/L),$$
where $`R`$ is the distance between the neutrino emission point and the detector and $`L`$ is the neutrino oscillation length, given by $`L=2.47E/\mathrm{\Delta }m^2`$, with $`E`$ in MeV, $`\mathrm{\Delta }m^2`$ in eV<sup>2</sup>, and $`L`$ in m. Since perihelion of the Earth’s orbit occurs during the first week of January, the Earth-Sun distance $`R`$ can be approximated by
$$R=1.496\times 10^{11}\left[1.0-0.0167\mathrm{cos}\frac{2\pi (t-3.5)}{365}\right]\text{ m},$$
where $`t`$ is the day of year. Combining these equations leads to
$`P_{\nu _e\nu _e}(\mathrm{\Delta }m^2,\theta ,E,t)=1-\mathrm{sin}^22\theta `$ (1)
$`\times \mathrm{sin}^2\left[1.90\times 10^{11}{\displaystyle \frac{\mathrm{\Delta }m^2}{E}}\left(1.0-0.0167\mathrm{cos}{\displaystyle \frac{2\pi (t-3.5)}{365}}\right)\right].`$ (2)
For $`\mathrm{\Delta }m^2\sim 10^{-10}`$ eV<sup>2</sup> and $`E\sim 1`$ MeV, the 3% change in the Earth-Sun distance during the year can change the phase of the term in square brackets by $`\pi `$. For the <sup>7</sup>Be and $`pep`$ neutrino lines, this can lead to a dramatic variation in the survival probability as $`P_{\nu _e\nu _e}`$ varies from 1 to $`1-\mathrm{sin}^22\theta \approx 0`$.
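A sketch of Eq. (2) and its seasonal swing for the monochromatic <sup>7</sup>Be line; the 0.862 MeV line energy and the oscillation parameters below are illustrative assumptions, not fit results from this Letter:

```python
import math

def survival(dm2, sin2_2theta, E, t):
    """P(nu_e -> nu_e) of Eq. (2); dm2 in eV^2, E in MeV, t in day of year."""
    r_factor = 1.0 - 0.0167 * math.cos(2.0 * math.pi * (t - 3.5) / 365.0)
    phase = 1.90e11 * dm2 / E * r_factor
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Illustrative vacuum-oscillation point and the 0.862 MeV 7Be line energy.
dm2, s2t, e_be7 = 6.5e-11, 0.75, 0.862

p_perihelion = survival(dm2, s2t, e_be7, 3.5)    # early January
p_aphelion = survival(dm2, s2t, e_be7, 186.0)    # near aphelion, early July
print(p_perihelion, p_aphelion)
```

By construction the probability stays within the band $`[1-\mathrm{sin}^22\theta ,1]`$; for line sources, the phase sweep over the year can carry it across most of that band.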
Using this survival probability, the cross section $`\sigma (E)`$ for inverse beta decay on <sup>71</sup>Ga , the flux , and the spectral shape $`F(E)`$ , the capture rate $`C`$ observed in the Ga detector is given by
$$C(\mathrm{\Delta }m^2,\theta ,t)=\int P_{\nu _e\nu _e}(\mathrm{\Delta }m^2,\theta ,E,t)\sigma (E)F(E)\,dE.$$
Since the $`pp`$, <sup>8</sup>B, and CNO neutrino sources are not lines, the integration of their survival probability over energy gives a nearly constant contribution that is reduced from the no-oscillation value by the factor $`1-\frac{1}{2}\mathrm{sin}^22\theta `$. Thus, VO can cause an overall decrease in $`C`$ with respect to the SSM. Further, because of the large contribution of <sup>7</sup>Be neutrinos to the response of the Ga detector, the rate can depend strongly on the time of year for certain values of the oscillation parameters.
To constrain the range of allowed neutrino oscillation parameters, we average $`C`$ over the two-month measurement period and calculate the sum of $`\chi ^2`$ for the 6 data points in Fig. 2. The systematic uncertainty of $`5\%`$ is neglected as it is negligible compared to the $`33\%`$ statistical uncertainty of each bimonthly measurement. The fluxes predicted by the SSM are uncertain by $`5\%`$, and we ignore that uncertainty also. A plot of contours of $`\mathrm{\Delta }\chi ^2=7.8`$, which defines the region of 90% confidence for 4 degrees of freedom, is shown in Fig. 3. The overall minimum is at $`\mathrm{\Delta }m^2=1.2\times 10^{-9}`$ eV<sup>2</sup> and $`\mathrm{sin}^22\theta =0.94`$ and has $`\chi ^2=0.5`$. The time dependence predicted with these parameters is shown in Fig. 2. Since the $`1/R^2`$ dependence of the flux has been removed in the reported rates, the variation here is solely due to vacuum oscillations. No particular significance should be attached to this best-fit point, however, nor should our results be interpreted as favoring any particular region in the VO allowed space. This is because the location of the best-fit point changes depending on the way in which runs are grouped in the time average, and there are many other points in the parameter space where the fit quality is nearly as good as at the best-fit point. Further, for $`\mathrm{\Delta }m^2\gtrsim 5\times 10^{-10}`$ eV<sup>2</sup>, because the oscillations are so rapid, the allowed region shown in Fig. 3 is determined mainly by the total observed capture rate, with minor changes to the boundary from the time dependence. We also note that since the neutrino fluxes predicted by the SSM were used in this analysis, these results are not model independent.
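The statement that $`\mathrm{\Delta }\chi ^2=7.8`$ corresponds to 90% confidence for 4 degrees of freedom can be verified from the closed-form $`\chi ^2`$ CDF for even degrees of freedom (a standard identity, not something taken from this Letter):

```python
import math

def chi2_cdf_4dof(x):
    """CDF of a chi-square distribution with 4 degrees of freedom."""
    # for k = 4 (even), P(X <= x) = 1 - exp(-x/2) * (1 + x/2)
    return 1.0 - math.exp(-x / 2.0) * (1.0 + x / 2.0)

print(f"{chi2_cdf_4dof(7.8):.4f}")  # ~0.90, i.e. 90% confidence
```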
The best fit to the neutrino energy spectrum measured at Super-Kamiokande, assuming the reduction in flux compared to the SSM is due to VO, is at $`\mathrm{\Delta }m^2\approx 4.3\times 10^{-10}`$ eV<sup>2</sup> and $`\mathrm{sin}^22\theta \approx 0.87`$ . As is evident from Fig. 3, this region of neutrino parameters is compatible with the SAGE measurements. Further running of SAGE will reduce the uncertainties in a two-month bin to about $`\pm 15`$ SNU, thus restricting the total region of allowed VO parameter space to approximately 70% of current limits. A further improvement of the limits will come from combining the measurements of both Ga experiments, and additional restriction is to be expected from much higher rate experiments such as Super-Kamiokande, SNO, and Borexino.
In summary, the combined analysis of all experiments strongly indicates that the solar neutrino deficit has a particle physics explanation and is a consequence of neutrino mass. The present experiments are, however, not yet able to establish definitively the oscillation scenario. Reduction of the uncertainties of the existing experiments, and new experiments, particularly those with sensitivity to low-energy neutrinos or to neutrino flavor, are urgently needed. SAGE is currently making regular solar neutrino extractions every six weeks with $`50`$ t of Ga and plans to continue these measurements until 2006. This will further reduce the statistical and systematic uncertainties, thus providing greater sensitivity to the model-independent astrophysical limit of 79.5 SNU in the Ga experiment and further limiting possible oscillation solutions to the solar neutrino problem.
We thank J. N. Bahcall, V. S. Berezinsky, M. Baldo-Ceolin, P. Barnes, G. T. Garvey, W. Haxton, V. A. Kuzmin, V. A. Matveev, L. B. Okun, V. A. Rubakov, R. G. H. Robertson, and A. N. Tavkhelidze for their continued interest and for fruitful and stimulating discussions. We acknowledge the support of the Russian Academy of Sciences, the Institute for Nuclear Research of the Russian Academy of Sciences, the Ministry of Science and Technology of the Russian Federation, the Russian Foundation of Fundamental Research under Grant No. 96-02-18399, the Division of Nuclear Physics of the U.S. Department of Energy, the U.S. National Science Foundation, and the U.S. Civilian Research and Development Foundation under Award No. RP2-159. This research was made possible in part by Grant No. M7F000 from the International Science Foundation and Grant No. M7F300 from the International Science Foundation and the Russian Government.
# A Multi-Phase Transport model for nuclear collisions at RHIC
## Abstract
To study heavy ion collisions at energies available from the Relativistic Heavy Ion Collider, we have developed a multi-phase transport model that includes both initial partonic and final hadronic interactions. Specifically, the parton cascade model ZPC, which uses as input the parton distribution from the HIJING model, is extended to include the quark-gluon to hadronic matter transition and also final-state hadronic interactions based on the ART model. Predictions of the model for central Au on Au collisions at RHIC are reported.
The beginning of experiments at the Relativistic Heavy Ion Collider (RHIC) this year will start an exciting new era in nuclear and particle physics. The estimated high energy density in central heavy ion collisions at RHIC is expected to lead to the formation of a large region of deconfined matter of quarks and gluons, the Quark Gluon Plasma (QGP). This gives us an opportunity to study the properties of the QGP and its transition to hadronic matter, which would then shed light on the underlying fundamental theory of strong interactions, Quantum Chromodynamics (QCD).
Because of the complexity of heavy ion collision dynamics, Monte Carlo event generators are needed to relate the experimental observations to the underlying theory. This has already been shown to be the case in heavy ion collisions at existing accelerators such as the SIS, AGS, and SPS . As minijet production is expected to play an important role at RHIC energies , models for partonic transport have already been developed . Furthermore, transport models that include both partonic and hadronic degrees of freedom are being developed . We have recently also developed such a multi-phase transport (AMPT) model. It starts from initial conditions that are motivated by the perturbative QCD and incorporates the subsequent partonic and hadronic space-time evolution. In particular, we have used the HIJING model to generate the initial phase space distribution of partons and the ZPC model to follow their rescatterings. A modified HIJING fragmentation scheme is then introduced for treating the hadronization of the partonic matter. Evolution of the resulting hadron system is treated in the framework of the ART transport model . In this paper, we shall describe this new multi-phase transport model and show its predictions for central Au on Au collisions at RHIC.
In the AMPT model, the initial parton momentum distribution is generated from the HIJING model, which is a Monte-Carlo event generator for hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions. The HIJING model treats a nucleus-nucleus collision as a superposition of many binary nucleon-nucleon collisions. For each pair of nucleons, the impact parameter is determined using nucleon transverse positions generated from a Wood-Saxon nuclear density distribution. The eikonal formalism is then used to determine the probability for a collision to occur. For a given collision, one further determines if it is an elastic or inelastic collision, a soft or hard inelastic interaction, and the number of jets produced in a hard interaction. To take into account nuclear effects in hard interactions, an impact parameter-dependent parton distribution function based on the Mueller-Qiu parameterization of the nuclear shadowing is used. Afterwards, PYTHIA routines are called to describe hard interactions, while soft interactions are treated according to the Lund model .
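The first step described above, drawing nucleon positions from a Wood-Saxon density, can be sketched with rejection sampling. The radius and diffuseness values below are generic textbook numbers for a Au nucleus, not parameters read from the HIJING code:

```python
import math
import random

def sample_radius(r0=6.38, a=0.535, r_max=12.0):
    """Draw a nucleon radius r (fm) from rho(r) ~ r^2 / (1 + exp((r - r0)/a))."""
    f_bound = r_max**2   # crude upper bound on r^2 * rho(r) for accept/reject
    while True:
        r = random.uniform(0.0, r_max)
        f = r**2 / (1.0 + math.exp((r - r0) / a))
        if random.uniform(0.0, f_bound) < f:
            return r

random.seed(1)
radii = [sample_radius() for _ in range(5000)]
mean_r = sum(radii) / len(radii)
print(f"mean radius = {mean_r:.2f} fm")  # close to 3R/4 of a sharp sphere
```

Each accepted radius would then be combined with isotropic angles to place a nucleon, and the transverse projections of these positions fix the impact parameters of the binary nucleon-nucleon collisions.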
In the HIJING model, minijets from produced partons are quenched by losing energy to the wounded nucleons close to their straight-line trajectories. In the AMPT model, we replace the parton quenching by parton rescatterings. To generate the initial phase space distribution for the parton cascade, the formation time for each parton is determined according to a Lorentzian distribution with a half width $`t_f=E/m_T^2`$ , where $`E`$ and $`m_T`$ are the parton energy and transverse mass, respectively. Positions of formed partons are calculated from those of their parent nucleons using straight-line trajectories. Since partons are considered to be part of the coherent cloud of their parent nucleons during the formation time, they do not suffer rescatterings before they are formed.
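One plausible reading of the formation-time prescription, sampling from a Lorentzian (Cauchy) distribution of half-width $`t_f=E/m_T^2`$ truncated to positive times, is sketched below; the truncation to positive times is our assumption, not a detail stated in the text:

```python
import math
import random

def formation_time(energy, m_t):
    """Sample a formation time with Lorentzian half-width t_f = E/m_T^2."""
    t_f = energy / m_t**2   # natural units: E and m_T in GeV, t_f in GeV^-1
    while True:
        # inverse-CDF draw from a Cauchy distribution of half-width t_f
        t = t_f * math.tan(math.pi * (random.random() - 0.5))
        if t > 0.0:          # keep only positive (physical) formation times
            return t

random.seed(7)
# an illustrative minijet gluon with E = 3 GeV and m_T = 1 GeV, so t_f = 3
times = [formation_time(3.0, 1.0) for _ in range(2000)]
median = sorted(times)[len(times) // 2]
print(median)  # the median of the positive half-Cauchy sits at t_f
```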
The parton cascade in the AMPT model is carried out using the ZPC model . At present, this model includes only gluon-gluon elastic scatterings, with cross sections taken to be the leading divergent cross section regulated by a medium-generated screening mass. The latter is related to the phase space density of produced partons . In the present study, a constant screening mass of $`\mu =3\ \mathrm{fm}^{-1}`$ is used.
Once partons stop interacting, they are converted into hadrons using the HIJING fragmentation scheme after an additional proper time of approximately 1.2 fm/$`c`$. In the default HIJING model, a diquark is treated as a single entity, and this leads to an average rapidity shift of about one unit in the net baryon distribution. We modify this fragmentation scheme to allow the formation of diquark-antidiquark pairs. In addition, the $`BM\overline{B}`$ formation probability is taken to be $`80\%`$ for the produced diquark-antidiquark pairs, while the rest are $`B\overline{B}`$’s. This gives a reasonable description of the measured net baryon rapidity distribution in Pb+Pb collisions at 158 GeV/nucleon .
For the evolution of hadrons, we use the ART model, which is a successful hadronic transport model for heavy ion collisions at AGS energies. To extend the model to heavy ion collisions at RHIC, we have further included nucleon-antinucleon annihilation channels, the inelastic interactions of kaons and antikaons, and neutral kaon production. In the ART model, multiparticle production is modeled through the formation of resonances. Since the inverse double-resonance channels have smaller cross sections than those calculated directly from detailed balance, we adjust the double-resonance absorption cross sections to fit the NA49 data .
In Fig. 1, we show the rapidity distribution of transverse energy in Au+Au central ($`b=0`$) collisions at RHIC. In the default HIJING model, the $`dE_T/dy`$ at central rapidity is about 900 GeV. Using the modified HIJING fragmentation scheme leads to a decrease of about 100 GeV in $`dE_T/dy`$. After the parton cascade, $`dE_T/dy`$ is reduced by about 15 GeV, as shown by the difference between the initial and final gluon $`dE_T/dy`$ distributions. We note that the perturbatively produced gluons account for a significant fraction (about 1/3) of the produced $`dE_T/dy`$. Including hadronic evolution further reduces $`dE_T/dy`$ by about 50 GeV. As the transverse energy rapidity distribution is a sensitive probe of longitudinal expansion and is related to the $`pdV`$ work, these results indicate that both the partonic and hadronic evolution contribute appreciably to longitudinal collective flow.
Fig. 2 shows the baryon rapidity distributions. It is seen that the net baryon distribution from the AMPT model has a peak value of 80 at $`y\approx 3.9`$ while that from the default HIJING model has a peak value of 85 at $`y\approx 4.5`$. The larger rapidity shift in AMPT is due to the modified fragmentation of diquarks. At central rapidity, AMPT predicts a net baryon number of about 12, which is similar to that from the default HIJING model. Many antiprotons (about $`50\%`$) survive absorption in the hadronic matter, leading to a value of about 10 at central rapidities. The $`\overline{p}/p`$ ratio at central rapidity is about $`60\%`$, which is much larger than the $`10\%`$ seen in Pb+Pb collisions at 158 GeV/nucleon at the SPS .
The final meson rapidity distribution is shown in Fig.3. The prediction from the AMPT model has a distinctive plateau structure around central rapidities. Results using the default HIJING model show instead a peak at central rapidity with a higher rapidity density. Also shown in the figure is the distribution of kaons produced from both string fragmentation and hadronic interaction. The latter is seen to enhance significantly the kaon yield.
In Fig. 4, the AMPT results are compared with those from other models without final-state rescatterings, such as Fritiof 1.7 and Venus 4.02 . Although both AMPT and Fritiof show similar peaks at $`y\approx 3.9`$ in the net baryon rapidity distributions, the height of the peak is about 80 in AMPT but about 100 in Fritiof. On the other hand, the net baryon distribution peaks at a smaller rapidity of $`y\approx 2.8`$ in Venus. At central rapidities, the net baryon number from AMPT is about 15 and is similar to that from Venus, but is much larger than that from Fritiof, which is almost zero at central rapidity. For mesons, both AMPT and Fritiof have final $`\pi ^+`$ rapidity distributions that peak at the central rapidity with a height of about 350, while Venus gives a much larger height of about 1300 at the central rapidity. The forthcoming RHIC data will allow us to test the different predictions from these models and thus to obtain a better understanding of the collision dynamics.
In conclusion, we have developed for heavy ion collisions at the Relativistic Heavy Ion Collider a multi-phase transport model that includes both partonic and hadronic evolution. The model shows that both partons and hadrons contribute to the longitudinal collective ($`pdV`$) work. Because of the production of diquark-antidiquark pairs, there is a relatively large rapidity shift of net baryons compared to the default HIJING fragmentation scheme. Many antiprotons survive final-state interactions and are expected to be observed at RHIC. Also, our model gives a wider meson rapidity plateau at central rapidities than the prediction from the default HIJING model. Furthermore, kaon production is appreciably enhanced due to production from hadronic interactions. For future studies, we shall compare these predictions with the experimental data soon to be available from RHIC. We shall also study whether including parton inelastic scatterings or using different hadronization schemes would affect the results obtained here.
This work is supported by the National Science Foundation under Grant No. PHY-9870038, the Welch Foundation Grant A-1358, and the Texas Advanced Research Program FY97-010366-068.
# Curvature-induced symmetry breaking in nonlinear Schrödinger models
## Acknowledgments
We thank A.C. Scott for the helpful discussions. Yu.B.G. and S.F.M. would like to express their thanks to the Department of Mathematical Modelling, Technical University of Denmark where the major part of this work was done.
# Untitled Document
hep-th/9907062 BROWN-HET-1189
Comments on a Covariant Entropy Conjecture
David A. Lowe
Department of Physics
Brown University
Providence, RI 02912, USA
lowe@het.brown.edu
Abstract
Recently Bousso conjectured the entropy crossing a certain light-like hypersurface is bounded by the surface area. We point out a number of difficulties with this conjecture.
July, 1999
1. Covariant Entropy Conjecture
Recently Bousso made the interesting conjecture that the entropy $`S`$ passing through a certain hypersurface bounded by a two-dimensional spatial surface $`B`$ with area $`A`$ must satisfy the bound
$$S\le A/4.$$
The hypersurface $`L`$ in question is to be generated by one of the four null congruences orthogonal to $`B`$, with non-positive expansion in the direction away from $`B`$. The matter fields in the theory are required to satisfy the dominant energy condition. This covariant entropy conjecture is motivated by the proposed holographic principle of ’t Hooft and Susskind \[2,3\], and recent work on attempts to generalize this principle to cosmological backgrounds \[4-9\].
In this note, we point out a number of difficulties with this proposal. We begin by noting the choice of units is such that the fundamental constants satisfy $`\mathrm{\hbar }=c=G=k=1`$. With appropriate factors of $`G`$ and $`\mathrm{\hbar }`$ restored the bound becomes $`S\le A/(4\mathrm{\hbar }G)`$, and we see the bound becomes trivial in the classical limit, for fixed gravitational interactions. If the bound is to hold at all, it must hold in the full theory of quantum gravity coupled to matter.
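For orientation, the size of the restored bound can be evaluated for a concrete case. The sketch below is our own illustration, not part of the original note: it uses approximate SI values of the constants and a solar-mass Schwarzschild horizon to evaluate $`A/(4\mathrm{\hbar }G/c^3)`$, i.e. the horizon area in units of four Planck areas.

```python
import math

# Physical constants (approximate SI values -- our own inputs, not from the note)
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m s^-1
hbar = 1.055e-34     # J s

def entropy_bound(mass_kg):
    """Dimensionless bound S <= A/(4 hbar G / c^3) = A/(4 l_p^2),
    evaluated for the horizon area of a Schwarzschild black hole."""
    r_s = 2 * G * mass_kg / c**2       # Schwarzschild radius
    area = 4 * math.pi * r_s**2        # horizon area
    planck_area = hbar * G / c**3      # l_p^2, about 2.6e-70 m^2
    return area / (4 * planck_area)    # entropy in units of k_B

s_sun = entropy_bound(1.989e30)        # one solar mass
print(f"S/k_B bound for a solar-mass black hole ~ {s_sun:.2e}")
```

The result, around $`10^{77}`$, illustrates why the bound is only interesting once $`\mathrm{\hbar }`$ is kept finite.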
However, as is well-known, the dominant energy condition (and also the weak energy condition) fails even for free quantum fields. One consequence of the dominant energy condition is that the local energy density appears positive definite. If one computes the expectation value of the normal-ordered energy momentum tensor $`\langle \psi |T_{\mu \nu }|\psi \rangle `$ in free scalar field theory, for a state $`\psi `$ that is an admixture of the ground state and a two-particle state, interference terms in the expectation value can lead to negative values for the energy density. As it stands therefore, the conjecture is inconsistent. If one insists on taking the classical limit, the bound becomes trivial. As soon as one goes to the quantum theory, the conditions for the bound to hold are violated, except possibly for a theory with no matter content.
One reason for imposing the dominant energy condition was to rule out the possibility of superluminal entropy flow, which would allow for easy violations of the bound, as we will see in a moment. In addition, one wants to rule out creating large amounts of entropy with little energy, by simultaneously creating matter with positive and negative energy density. Of course one could try to replace the dominant energy condition with a weaker constraint. However the following example illustrates difficulties with the bound that are independent of this condition.
Fig. 1: Penrose diagram for a black hole. $`B`$ is a surface on the event horizon. $`L`$ is a light-like hypersurface with zero expansion, bounded by $`B`$.
Consider a black hole in equilibrium with thermal radiation at the same temperature as the Hawking temperature. As discussed in , one can take $`B`$ to be the event horizon at some time, and construct the hypersurface $`L`$ using future-directed outgoing null generators, as shown in fig. 1. Since the black hole’s evaporation is supported by the ingoing thermal radiation, the geometry near the horizon is static. One therefore has an infinite amount of time to send a constant flux of entropy across the hypersurface $`L`$, in violation of the bound. To be precise, we define the entropy crossing $`L`$ to be the proper entropy flux, integrated over $`L`$. This problem is independent of the matter content of the theory (and hence any energy conditions one might choose to impose), since even for pure gravity, a quantum black hole will Hawking radiate gravitons. Note also that for a large black hole the geometry is such that caustics need not force the hypersurface $`L`$ to approach the singularity.
It is interesting to consider how this example is consistent with the generalized second law of black hole thermodynamics. While it is true the black hole absorbs an infinite amount of entropy as time goes to infinity, it emits an equal (or larger) amount of entropy in the form of Hawking radiation, in accord with the second law. However the entropy emitted cannot cross the hypersurface $`L`$ in a causal way. The wavelength of Hawking particles is of order the size of the black hole. They are best thought of as originating from outside the black hole, of order the Schwarzschild radius from the event horizon. The Hawking particles themselves do not contribute to the proper entropy passing through $`L`$, since they are undetectable to a freely falling observer as she crosses the horizon.
If an arbitrarily large amount of entropy can cross the hypersurface $`L`$, how can one regard the $`\mathrm{log}`$ of the number of internal states of the black hole as $`A/4`$? We will analyze this question in the scenario for the resolution of the black hole information problem discussed in . Roughly speaking, in this picture information crosses the horizon in a completely causal manner, but is effectively transferred to the Hawking radiation in a non-local way as it hits the singularity. Thus we do indeed have super-luminal propagation of entropy in this picture. This component of the entropy does not contribute to the local entropy flux passing through $`L`$. Although an arbitrarily large amount of entropy does cross $`L`$, a low-energy observer inside the black hole could never detect more entropy than $`A/4`$. In order for an observer inside to live long enough to detect more entropy than $`A/4`$ she would have to undergo a trans-Planckian acceleration of order $`e^{M^2}`$. One sees this via gedanken experiments similar to ones considered in \[10,11\]. Likewise an observer entering the horizon at late times will see that most of the entropy has already hit the singularity, preserving the bound on the number of observable internal states.
Fig. 2: Penrose diagram for a collapsing spherical dust cloud. $`L`$ is a light-like hypersurface that intersects the whole dust cloud. In order to avoid caustics, $`L`$ will be deformed to lie along the dotted line.
Having presented perhaps the clearest counterexample to the bound (1.1), we now consider a number of other objections to the arguments Bousso presents as evidence for the bound. Consider a collapsing dust cloud, and construct the hypersurface $`L`$ as indicated in fig. 2, so that it intersects the whole dust cloud. Bousso’s claim is that caustics will force $`L`$ to take a more circuitous route to the singularity, in such a way that the surface $`L`$ does not intersect the whole dust cloud, preserving the bound (1.1). This relies on the fact that a highly entropic system can never be spherically symmetric so many caustics will be present. Here we simply point out that in the semiclassical approximation, one requires only that the expectation value of the energy momentum tensor be spherically symmetric to avoid caustics. This does not provide a significant constraint on the entropy of such configurations. Beyond the semiclassical approximation, the null convergence condition does not hold, so such caustics need not form in the first place. Furthermore, since the construction of the surface $`L`$ is formulated in terms of classical geometric quantities, it is not clear how Bousso’s construction carries over to the full quantum theory.
Much of the other evidence Bousso presents for the bound involves showing consistency with Bekenstein’s conjectured bound
$$S\le 2\pi ER,$$
in a number of different examples. Here $`E`$ is the total energy of the system, and $`R`$ is the circumferential radius, defined as $`R=\sqrt{A/4\pi }`$ with $`A`$ the area of the smallest sphere surrounding the system. This bound has been much discussed since its original proposal . The bound appears to hold for large systems, provided the number of matter species is small \[13,14\]. It can easily be violated for sufficiently small systems (for example the free scalar field case already mentioned), for a large number of matter species , or for systems at sufficiently low temperature. Likewise, one can take a large number of copies of a small system to make a large system that violates the Bekenstein bound. In general, the generalized second law does not imply this bound \[16,17\]. Some special systems for which the Bekenstein bound is violated can be used to construct counterexamples to the covariant entropy bound.
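To see how comfortably an ordinary laboratory system satisfies the Bekenstein bound, the following sketch (our own example, not from the note) compares $`S/k_B`$ for a mole of helium gas, from the Sackur-Tetrode formula, with $`2\pi ER/\mathrm{\hbar }c`$, where $`E`$ is taken to include the rest energy; all numerical inputs are our own illustrative choices.

```python
import math

# Approximate SI constants (our values, not from the note)
hbar, c, k_B, h = 1.055e-34, 2.998e8, 1.381e-23, 6.626e-34
N_A = 6.022e23

# One mole of helium in a 1-litre box at 300 K (illustrative numbers)
N, V, T, m = N_A, 1.0e-3, 300.0, 6.646e-27

# Sackur-Tetrode entropy of an ideal monatomic gas, in units of k_B
lam = h / math.sqrt(2 * math.pi * m * k_B * T)      # thermal de Broglie wavelength
S_over_k = N * (math.log(V / (N * lam**3)) + 2.5)

# Bekenstein bound 2*pi*E*R/(hbar*c), with E dominated by the rest energy
E = N * m * c**2
R = (3 * V / (4 * math.pi)) ** (1.0 / 3.0)          # radius of a sphere of volume V
bound = 2 * math.pi * E * R / (hbar * c)

print(f"S/k_B ~ {S_over_k:.2e}, bound ~ {bound:.2e}")
```

For such a system the bound exceeds the actual entropy by many orders of magnitude; the violations discussed in the text require small systems, many species, or low temperatures.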
For instance consider a normal region (i.e. not trapped or anti-trapped) in a Friedmann-Robertson-Walker cosmology, as discussed in . The covariant entropy bound implies that the entropy on a spatial hypersurface inside the apparent horizon with radius $`r_{AH}`$ should satisfy $`S\le \pi r_{AH}^2`$. This follows from the Bekenstein bound if this region is treated as a Bekenstein system . Thus the covariant bound (1.1) can potentially be violated for systems which violate the Bekenstein bound. The simplest example of such a system is a gas of $`N`$ species of free particles in a box of size $`R`$ with energy $`E`$. The entropy of this gas will diverge as $`\mathrm{log}N`$. One cannot use this fact to constrain the value of $`N`$; it simply means the Bekenstein bound is not a universal bound for any system of matter coupled to gravity.
2. Conclusions
We have noted a number of difficulties with the current formulation of the covariant entropy conjecture. We propose that in general backgrounds, the only unexpected entropy bounds arise from demanding validity of the generalized second law, as suggested in . This law has passed a number of highly nontrivial consistency checks . However whether the second law gives rise to holographic style bounds is system dependent. It does not constrain the entropy density in the early universe, nor in the final phase of a recollapsing universe. However if an isolated system can collapse to a black hole, the second law implies the entropy satisfies $`S\le A/4`$.
Acknowledgments
I thank Richard Easther, Sanjaye Ramgoolam and Don Marolf for helpful discussions. The research of D.L. is supported in part by DOE grant DE-FE0291ER40688-Task A.
References
\[1\] R. Bousso, “A Covariant Entropy Conjecture,” hep-th/9905177.
\[2\] G. ’t Hooft, “Dimensional reduction in quantum gravity,” gr-qc/9310026.
\[3\] L. Susskind, “The world as a hologram,” J. Math. Phys. 36 (1995) 6377, hep-th/9409089.
\[4\] W. Fischler and L. Susskind, “Holography and Cosmology,” hep-th/9806039.
\[5\] R. Easther and D.A. Lowe, “Holography, cosmology and the second law of thermodynamics,” hep-th/9902088.
\[6\] G. Veneziano, “Pre-bangian origin of our entropy and time arrow,” hep-th/9902126.
\[7\] D. Bak and S.-J. Rey, “Cosmic holography,” hep-th/9902173.
\[8\] N. Kaloper and A. Linde, “Cosmology vs. holography,” hep-th/9904120.
\[9\] R. Brustein, “The generalized second law of thermodynamics in cosmology,” gr-qc/9904061.
\[10\] D.A. Lowe and L. Thorlacius, “AdS/CFT and the Information Paradox,” hep-th/9903237.
\[11\] L. Susskind and L. Thorlacius, “Gedanken Experiments Involving Black Holes,” Phys. Rev. D49 (1994) 966, hep-th/9308100.
\[12\] J.D. Bekenstein, Phys. Rev. D23 (1981) 287.
\[13\] D.N. Page, “Comment on a universal upper bound on the entropy-to-energy ratio for bounded systems,” Phys. Rev. D26 (1982) 947.
\[14\] J.D. Bekenstein, “Non-Archimedian character of quantum buoyancy and the generalized second law,” gr-qc/9906058.
\[15\] J.D. Bekenstein, Phys. Rev. D7 (1973) 2333; Phys. Rev. D9 (1974) 3292.
\[16\] W.G. Unruh and R.M. Wald, Phys. Rev. D25 (1982) 942; Phys. Rev. D27 (1983) 2271.
\[17\] M.A. Pelath and R.M. Wald, “Comment on entropy bounds and the generalized second law,” gr-qc/9901032.
# Fermionic Mapping For Eigenvalue Correlation Functions Of Weakly Non-Hermitian Symplectic Ensemble
## I Introduction
Several ensembles of non-Hermitian matrices were given by Ginibre. These are the ensembles of matrices with arbitrary real, complex, or quaternionic entries. Ginibre gave joint probability distributions for the eigenvalues for the complex and quaternionic cases, and succeeded in obtaining correlation functions in the complex case, while correlation functions for the quaternionic case were found later. The purpose of this paper is to extend the correlation functions for the quaternionic problem to the weakly non-Hermitian case, as well as to introduce a fermionic mapping to simplify the computation of these correlation functions. Further, the mapping also permits us to derive the four-point and higher correlation functions, which were only conjectured before.
Although Ginibre’s ensembles are interesting in themselves, they are also closely connected with the chiral random matrix ensembles that appear in QCD and some condensed matter systems. Knowing the eigenvalue correlation functions in the non-Hermitian ensemble, one can easily determine correlation functions in the corresponding chiral ensemble. Further, the weakly non-Hermitian versions of these ensembles are of interest in open quantum systems; there exists a study using supersymmetric techniques of the eigenvalue distribution in the weakly non-Hermitian version of the ensemble considered in this paper.
One interesting feature to observe in the two-level correlation function is the crossover from a non-monotonic correlation function with algebraic tails in the limit of very weak non-Hermiticity to a monotonically decaying correlation function with Gaussian tails in the limit of strong non-Hermiticity.
Another interesting property that the symplectic non-Hermitian ensemble exhibits is a depletion of the eigenvalue density near the real axis; this could be guessed at by looking at the joint probability distribution (j.p.d.) derived originally by Ginibre (given in equation (2) below). The depletion was also found numerically.
Consider an arbitrary $`N`$-by-$`N`$ matrix of quaternions. This is equivalent to a $`2N`$-by-$`2N`$ matrix $`M`$ with complex entries. Let $`M`$ be chosen from an ensemble of such matrices with Gaussian weight
$$P(M)=e^{-\frac{1}{2}\mathrm{Tr}(M^{\dagger }M)}dM$$
(1)
This defines the strongly non-Hermitian ensemble. The eigenvalues of $`M`$ come in complex conjugate pairs; for every eigenvalue $`z=x+iy`$, there is an eigenvalue $`\overline{z}=x-iy`$. Let the matrix $`M`$ have eigenvalues $`z_i,\overline{z}_i`$, $`i=1,\mathrm{\ldots },N`$. Then the j.p.d. of the eigenvalues is given, up to a constant factor, by
$$\frac{1}{N!}\prod _ie^{-\overline{z}_iz_i}|z_i-\overline{z}_i|^2\prod _{i<j}(z_i-z_j)(z_i-\overline{z}_j)(\overline{z}_i-z_j)(\overline{z}_i-\overline{z}_j)\prod _i\mathrm{d}\overline{z}_i\mathrm{d}z_i$$
(2)
where $`\mathrm{d}\overline{z}\mathrm{d}z=2\mathrm{d}x\mathrm{d}y`$.
In section II, a fermionic mapping is introduced to write equation (2) as a correlation function in a fermionic field theory. The mapping is then used to calculate the eigenvalue density. In section III, we introduce the weakly non-Hermitian ensemble and calculate eigenvalue density for that ensemble. In section IV, multi-eigenvalue correlation functions are calculated for both strongly and weakly non-Hermitian cases. The calculations in section III and IV are simple extensions of the calculation given in section II. For this reason, the calculation in section II is given in the most detail, while the other calculations are sketched.
For the strongly non-Hermitian case, the main results are equations (20,26) for the eigenvalue density, and equation (44) for the two-level correlation function. For the weakly non-Hermitian case, the main results are equation (42) for the eigenvalue density and equations (48,49) for the two-level correlation function.
## II Fermionic Mapping and Green’s Function
In this section we develop the fermionic mapping for the j.p.d. of the strongly non-Hermitian ensemble. First, we write equation (2) as a correlation function in a fermionic field theory. Then, for convenience we shift to radial coordinates, making a conformal transformation. Finally, we integrate over all but one of the $`z_i`$ to obtain the eigenvalue density. The integral over the $`z_i`$ is done inside the correlation function; only after doing the integral is the correlation function evaluated. This amounts to commuting the order of doing the integral and evaluating the correlation function, and is the essential trick used in this section. In section IV we will demonstrate how to obtain multi-level correlation functions by a simple extension of the procedure of this section.
### A Fermionic Mapping
First, let us show that equation (2) can be written as a correlation function in a two-dimensional fermionic field theory. A similar fermionic mapping was demonstrated previously for the Hermitian orthogonal and symplectic ensembles. Let the field $`\psi (z)`$ have the action
$$S=\frac{1}{2}\int \mathrm{d}\overline{z}\,\mathrm{d}z\,\psi ^{\dagger }(z)\overline{\partial }\psi (z)$$
(3)
Note that we are using only one chirality of fermionic field. Consider a correlation function of this field, such as
$$\left\langle \prod _{i=1}^{2N}\psi (a_i)\psi ^{\dagger }(b_i)\right\rangle $$
(4)
This correlation function is equal to
$$\frac{\left(\prod _{i<j}^{2N}(a_j-a_i)\right)\left(\prod _{i<j}^{2N}(b_j-b_i)\right)}{\prod _{i,j}^{2N}(a_j-b_i)}$$
(5)
Let us consider a specific choice of $`b_j`$, with $`b_j=Le^{2\pi i\frac{j}{2N}}`$. In the limit $`L\to \mathrm{\infty }`$, we find that equation (5) reduces to
$$L^{-2N^2-N}\prod _{i<j}^{2N}(a_j-a_i)$$
(6)
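The $`L\to \mathrm{\infty }`$ reduction can be checked numerically for the smallest case, $`2N=2`$. The sketch below is our own cross-check, not part of the paper: it evaluates equation (5) at $`b_j=Le^{2\pi i\frac{j}{2N}}`$ and confirms that, after dividing out $`L^{-2N^2-N}`$ and the Vandermonde factor $`\prod _{i<j}(a_j-a_i)`$, an $`a`$-independent constant remains (for $`2N=2`$ this constant works out to 2).

```python
import cmath
import math

def corr(avals, bvals):
    """Free-fermion correlator of Eq. (5):
    prod_{i<j}(a_j - a_i) * prod_{i<j}(b_j - b_i) / prod_{i,j}(a_j - b_i)."""
    n = len(avals)
    num = 1.0 + 0j
    for i in range(n):
        for j in range(i + 1, n):
            num *= (avals[j] - avals[i]) * (bvals[j] - bvals[i])
    den = 1.0 + 0j
    for i in range(n):
        for j in range(n):
            den *= (avals[j] - bvals[i])
    return num / den

# 2N = 2 (N = 1): b_j = L e^{2 pi i j / 2N}, claimed limit L^{-(2N^2+N)} * Vandermonde(a)
N = 1
L = 1.0e6
b = [L * cmath.exp(2 * math.pi * 1j * j / (2 * N)) for j in range(1, 2 * N + 1)]
a = [0.3 + 0.1j, -0.2 + 0.4j]
ratio = corr(a, b) / (L ** (-(2 * N**2 + N)) * (a[1] - a[0]))
print(ratio)   # an a-independent constant (2 for this case), up to O(|a|/L)
```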
Comparing this to equation (2) we realize that equation (2) can be written as
$$\frac{1}{N!}\underset{L\to \mathrm{\infty }}{lim}L^{2N^2+N}\left\langle \left(\prod _{j=1}^{2N}\psi ^{\dagger }(b_j)\right)\left(\prod _{j=1}^NU(\overline{z}_j,z_j)\psi (z_j)\psi (\overline{z}_j)\mathrm{d}\overline{z}_j\mathrm{d}z_j\right)\right\rangle $$
(7)
where $`b_j=Le^{2\pi i\frac{j}{2N}}`$ and $`U(\overline{z},z)=e^{-\overline{z}z}(z-\overline{z})`$. When integrating over $`z_i`$, the limit $`L\to \mathrm{\infty }`$ must be taken before doing the integral over $`z_i`$.
Now we will make a conformal transformation to radial coordinates. Write $`z=e^w`$ and $`\overline{z}=e^{\overline{w}}`$. Let $`w=t+i\theta `$. The action for the fermionic field is unchanged under this transformation, but we must change equation (7) as the field $`\psi `$ has non-vanishing scaling dimension and conformal spin. Equation (7) gets replaced by
$$\frac{1}{N!}\underset{L\to \mathrm{\infty }}{lim}L^{2N^2}\left\langle \left(\prod _{j=1}^{2N}\psi ^{\dagger }(v_j)\right)\left(\prod _{j=1}^Ne^{t_j}U(e^{\overline{w}_j},e^{w_j})\psi (w_j)\psi (\overline{w}_j)\mathrm{d}\overline{w}_j\mathrm{d}w_j\right)\right\rangle $$
(8)
where $`v_j=\mathrm{log}b_j=\mathrm{log}L+2\pi i\frac{j}{2N}`$.
Now, we will introduce Fourier transforms for the creation and annihilation operators. We will write $`\psi (w_i)=\sum _ke^{kw_i}a(k)`$ and $`\psi ^{\dagger }(w_i)=\sum _ke^{-kw_i}a^{\dagger }(k)`$. In the limit $`L\to \mathrm{\infty }`$, the only states involved in equation (8) are those with $`k=1/2,3/2,5/2,\mathrm{\ldots },2N-1/2`$. If there are excitations in states with higher $`k`$, they will vanish in the large $`L`$ limit.
Then, we can rewrite equation (8), up to factors of order unity, as
$$\frac{1}{N!}\left\langle \left(\prod _{k=1/2}^{2N-1/2}a^{\dagger }(k)\right)\prod _{j=1}^N\left(\sum _{k,k^{\prime }}a(k)a(k^{\prime })e^{w_jk}e^{\overline{w}_jk^{\prime }}e^{t_j}U(e^{\overline{w}_j},e^{w_j})\mathrm{d}\overline{w}_j\mathrm{d}w_j\right)\right\rangle $$
(9)
In equation (9), consider integrating over $`w_j,\overline{w}_j`$ for some given set of $`j=1,\mathrm{\ldots },M`$. We will do the integral inside the correlation function. The integral
$$\prod _{j=1}^M\left(\int \sum _{k,k^{\prime }}a(k)a(k^{\prime })e^{w_jk}e^{\overline{w}_jk^{\prime }}e^{t_j}U(e^{\overline{w}_j},e^{w_j})\mathrm{d}\overline{w}_j\mathrm{d}w_j\right)$$
(10)
is equal to
$$O^M$$
(11)
where the operator $`O`$ is defined by
$$O=\sum _ka(k)a(k+1)\,4\pi (k+1/2)!$$
(12)
Therefore, if we integrate over all eigenvalues in equation (9), we obtain
$$Z=\frac{1}{N!}\left\langle \left(\prod _{k=1/2}^{2N-1/2}a^{\dagger }(k)\right)O^N\right\rangle =\prod _k\left(4\pi (k+1/2)!\right)$$
(13)
where the product extends over $`k=1/2,5/2,9/2,\mathrm{\ldots },2N-3/2`$.
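For $`N=1`$ the claimed value of $`Z`$ can be checked directly: equation (13) gives $`Z=4\pi (1)!=4\pi `$, while the integral of the j.p.d. (2) over a single conjugate pair factorizes into one-dimensional Gaussians. The sketch below is our own numerical check, assuming the overall constant left unspecified in equation (2) is unity.

```python
import math

# N = 1 check of Eq. (13): Z should equal 4*pi*(1)! = 4*pi.
# For one pair z = x + iy, zbar = x - iy, Eq. (2) gives
#   Z = ∫ e^{-(x^2+y^2)} |2iy|^2 (2 dx dy) = 2 * ∫ e^{-x^2} dx * ∫ 4 y^2 e^{-y^2} dy,
# so two 1-d trapezoid quadratures suffice.

def trapezoid(f, lo, hi, n=4000):
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for k in range(1, n):
        s += f(lo + k * h)
    return s * h

ix = trapezoid(lambda x: math.exp(-x * x), -8.0, 8.0)
iy = trapezoid(lambda y: 4 * y * y * math.exp(-y * y), -8.0, 8.0)
Z = 2 * ix * iy
print(Z, 4 * math.pi)   # both ~ 12.566
```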
### B Eigenvalue Density
To calculate the density of eigenvalues, we must integrate over all, except one, of the coordinate pairs $`\overline{w}_i,w_i`$ in equation (9). Using equation (11), and normalizing with equation (13), we wish to compute
$$\frac{U(e^{\overline{w}},e^w)e^t\sum _{m,m^{\prime }}e^{mw}e^{m^{\prime }\overline{w}}\left\langle \left(\prod _{k=1/2}^{2N-1/2}a^{\dagger }(k)\right)O^{N-1}a(m)a(m^{\prime })\right\rangle \mathrm{d}\overline{w}\mathrm{d}w}{(N-1)!Z}$$
(14)
It may be verified that the correlation function appearing in the sum of equation (14) is nonvanishing only if either $`m-1/2`$ is even, $`m^{\prime }-1/2`$ is odd, and $`m<m^{\prime }`$, or if $`m^{\prime }-1/2`$ is even, $`m-1/2`$ is odd, and $`m^{\prime }<m`$. In the first case, with $`m<m^{\prime }`$, the contribution to equation (14) is
$$\frac{1}{4\pi }\prod _k\frac{1}{(k+1/2)!}\prod _l(l+1/2)!\,e^{-\overline{z}z}\sqrt{\overline{z}z}(z-\overline{z})z^m\overline{z}^{m^{\prime }}\mathrm{d}\overline{w}\mathrm{d}w$$
(15)
where the product over $`k`$ extends over
$$k=1/2,5/2,9/2,\mathrm{\ldots }$$
(16)
and the product over $`l`$ extends over
$$l=1/2,5/2,9/2,\mathrm{\ldots },m-2,m+1,m+3,\mathrm{\ldots },m^{\prime }-2,m^{\prime }+1,m^{\prime }+3,\mathrm{\ldots },2N-1/2$$
(17)
This is equal to
$$\frac{1}{4\pi }\frac{1}{(m-1/2)!!(m^{\prime }-1/2)!!}e^{-\overline{z}z}\sqrt{\overline{z}z}(z-\overline{z})z^m\overline{z}^{m^{\prime }}\mathrm{d}\overline{w}\mathrm{d}w$$
(18)
In the second case, with $`m^{\prime }<m`$, the result is
$$-\frac{1}{4\pi }\frac{1}{(m-1/2)!!(m^{\prime }-1/2)!!}e^{-\overline{z}z}\sqrt{\overline{z}z}(z-\overline{z})z^m\overline{z}^{m^{\prime }}\mathrm{d}\overline{w}\mathrm{d}w$$
(19)
We can obtain the eigenvalue density $`\rho (\overline{z},z)`$ by adding equations (18,19) and summing over $`m,m^{\prime }`$. Shifting $`m`$ and $`m^{\prime }`$ by one-half, and changing from $`\mathrm{d}\overline{w}\mathrm{d}w`$ to $`\mathrm{d}\overline{z}\mathrm{d}z`$, we find that the final result for the eigenvalue density $`\rho (\overline{z},z)\mathrm{d}\overline{z}\mathrm{d}z`$ is
$$\rho (\overline{z},z)\mathrm{d}\overline{z}\mathrm{d}z=\frac{1}{4\pi }e^{-\overline{z}z}(z-\overline{z})G(\overline{z},z)\mathrm{d}\overline{z}\mathrm{d}z$$
(20)
where the Green’s function $`G(\overline{z},z)`$ is given by
$$G(\overline{z},z)=\sum _{m<m^{\prime };\,m=0,2,4,\mathrm{\ldots };\,m^{\prime }=1,3,5,\mathrm{\ldots }}\frac{1}{m!!m^{\prime }!!}(z^m\overline{z}^{m^{\prime }}-\overline{z}^mz^{m^{\prime }})$$
(21)
### C Discussion
Let us now look at the properties of equation (20). We will discuss in turn the normalization of the density; the way the density depends on $`x`$ and $`y`$ separately, where $`z=x+iy`$; an integral representation for the density; the circular law; and the depletion of density near the real axis.
First, consider the normalization of the single particle density. It is automatic from the above derivation that the eigenvalue density is properly normalized, although one must be careful about defining the normalization depending on whether one is counting the total number of eigenvalues or the total number of pairs of eigenvalues. The normalization is defined such that $`\int \rho (\overline{z},z)\mathrm{d}\overline{z}\mathrm{d}z=N`$.
Next, writing $`z=x+iy`$ and $`\overline{z}=x-iy`$, one can show by differentiating the power series in equation (21) that $`\partial _xG(x+iy,x-iy)=\frac{\partial +\overline{\partial }}{2}G(x+iy,x-iy)=2xG(x+iy,x-iy)`$, for large $`N`$. This implies that
$$G(x+iy,x-iy)=e^{x^2}f(y)$$
(22)
and therefore $`\rho =\frac{1}{4\pi }2yie^{-y^2}f(y)`$, for some function $`f`$, so the interesting properties of the eigenvalue density are contained in $`f(y)`$. Later we will discuss the properties of $`f(y)`$ for small $`y`$, and show that there is a depletion of the density of eigenvalues near the real axis.
Using equations (20,21,22), we can derive an integral representation for $`\rho `$. We can use equation (22) to write
$$G(\overline{z},z)=e^{\overline{z}z}G(\overline{z}-z,0)$$
(23)
Then equation (21) implies that
$$G(\overline{z}-z,0)=\sum _{m=1,3,5,\mathrm{\ldots }}\frac{1}{m!!}(\overline{z}-z)^m$$
(24)
This is equal to
$$\int _0^{\mathrm{\infty }}i\,\mathrm{sin}\left(\frac{\overline{z}-z}{i}t\right)e^{-t^2/2}dt$$
(25)
Using this integral representation in equation (20) we find that
$$\rho (\overline{z},z)=\frac{1}{4\pi }2y\int _0^{\mathrm{\infty }}\mathrm{sin}(2yt)e^{-t^2/2}dt$$
(26)
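The equality of the power series (24) and the integral representation (25) can be verified numerically. The sketch below is our own cross-check, not part of the paper; it evaluates both expressions at $`u=\overline{z}-z=-2iy`$ on the imaginary axis, summing the series directly and computing the integral with a simple trapezoid rule.

```python
import math

def G_series(u, mmax=199):
    """Eq. (24): G(u, 0) = sum over odd m of u^m / m!!  (u = zbar - z)."""
    total, dfact = 0j, 1.0
    for m in range(1, mmax + 1, 2):
        dfact *= m              # running odd double factorial m!!
        total += u**m / dfact
    return total

def G_integral(u, tmax=12.0, n=120000):
    """Eq. (25): G(u, 0) = i * ∫_0^∞ sin((u/i) t) e^{-t^2/2} dt (trapezoid rule)."""
    a = (u / 1j).real           # real for u on the imaginary axis
    h = tmax / n
    s = 0.5 * math.sin(a * tmax) * math.exp(-tmax**2 / 2)   # t = 0 term vanishes
    for k in range(1, n):
        t = k * h
        s += math.sin(a * t) * math.exp(-t * t / 2)
    return 1j * s * h

y = 0.8
u = -2j * y                     # zbar - z = -2iy
print(G_series(u), G_integral(u))   # the two representations agree
```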
Let us now consider the circular law. For large $`y`$, equation (26) reduces to
$$\rho (\overline{z},z)\mathrm{d}\overline{z}\mathrm{d}z\to \frac{1}{4\pi }\mathrm{d}\overline{z}\mathrm{d}z$$
(27)
So, the density tends to a constant for large $`y`$. However, this integral representation is valid only for $`N`$ infinite; for finite $`N`$, the density tends to a constant only within a disc of radius $`\sqrt{2N}`$, and vanishes outside the disc. This is the well-known circular law. The vanishing of the density outside the disc is easy to see from the power series representation. For finite $`N`$ the highest power of $`(\overline{z}z)`$ appearing in equation (21) is roughly $`2N`$ and so $`\rho `$ will be exponentially small for $`(\overline{z}z)>2N`$.
Note that the total density in the disc is correct. The area of a disc of radius R is $`2\pi R^2`$, where we are using the measure $`\mathrm{d}\overline{z}\mathrm{d}z=2\mathrm{d}x\mathrm{d}y`$. The density is $`\frac{1}{4\pi }`$. So, the number of particles in a disc of radius $`\sqrt{2N}`$ is indeed $`N`$, as desired.
For small $`y`$, we find that $`\rho `$ is reduced below the expected result. Such a reduction was found numerically before. In the figure, we graph the eigenvalue density as a function of $`y`$, for $`x=0`$, for a system of 100 particles.
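The finite-$`N`$ behavior described above (the plateau at $`1/4\pi `$ inside the disc of radius $`\sqrt{2N}`$, the depletion near the real axis, and the $`x`$-independence of the density) can be reproduced directly from the truncated series of equation (21). The sketch below is our own numerical illustration, with the modes truncated at $`2N`$ (an assumption consistent with equation (17)).

```python
import cmath
import math

def dfact(m):
    """Double factorial, with 0!! = 1."""
    r = 1
    while m > 1:
        r *= m
        m -= 2
    return r

def rho(z, N):
    """Finite-N density from Eqs. (20)-(21): m even, m' odd, m < m', both < 2N."""
    zb = z.conjugate()
    g = 0j
    for m in range(0, 2 * N, 2):
        for mp in range(m + 1, 2 * N, 2):
            g += (z**m * zb**mp - zb**m * z**mp) / (dfact(m) * dfact(mp))
    # g is purely imaginary and (z - zb) = 2iy, so the product is real
    return (cmath.exp(-zb * z) * (z - zb) * g / (4 * math.pi)).real

N = 30                                  # disc radius sqrt(2N) ~ 7.7
print(rho(3j, N) * 4 * math.pi)         # ~ 1 inside the disc
print(rho(0.2j, N) * 4 * math.pi)       # depleted near the real axis
print(rho(10j, N) * 4 * math.pi)        # ~ 0 outside the disc
```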
## III Weakly Non-Hermitian Case
Now we will consider the weakly non-Hermitian version of the ensemble given above. In the weakly non-Hermitian random matrix ensemble, we again consider an arbitrary $`N`$-by-$`N`$ matrix of quaternions, $`H`$, but use a different Gaussian weight. Let $`H=H_h+H_a`$, where $`H_h`$ is Hermitian and $`H_a`$ is anti-Hermitian. Then, we choose the matrix $`H`$ with Gaussian weight
$$e^{-\frac{N}{2}\mathrm{Tr}(H_h^{\dagger }H_h)-\frac{N^2a}{2}\mathrm{Tr}(H_a^{\dagger }H_a)}$$
(28)
where $`a`$ is some constant. In the large $`a`$ limit, this reduces to the Gaussian Symplectic Ensemble. For finite $`a`$, the weight in equation (28) is chosen to make sure that the imaginary part of the eigenvalues scales with $`N`$ in the same way as the level spacing.
If the matrix $`H`$ is chosen with weight given by equation (28), then the j.p.d. of equation (2) gets replaced by
$$\frac{1}{N!}\prod _ie^{-Nx_i^2-N^2ay_i^2}|z_i-\overline{z}_i|^2\prod _{i<j}(z_i-z_j)(z_i-\overline{z}_j)(\overline{z}_i-z_j)(\overline{z}_i-\overline{z}_j)\prod _i\mathrm{d}\overline{z}_i\mathrm{d}z_i$$
(29)
The only difference in the weakly non-Hermitian case is that the Gaussian function of eigenvalue position $`e^{-\overline{z}_iz_i}`$ is replaced by $`e^{-Nx_i^2-N^2ay_i^2}`$.
We have not found equation (29) previously in the literature. This equation can be derived most easily as follows: write an $`N`$-by-$`N`$ matrix of quaternions, $`H`$, as
$$H=X^{-1}TX$$
(30)
where $`X`$ is a quaternion matrix such that $`X^{-1}=X^{\dagger }`$, and $`T`$ is an upper triangular matrix of quaternions. This procedure is a Schur decomposition, and is possible since the field of quaternions, like the field of complex numbers, is algebraically closed.
The eigenvalues, $`z_i`$, can be obtained from the diagonal elements of $`T`$; each diagonal element of $`T`$ is a quaternion, which is associated with a pair of complex conjugate eigenvalues $`z_i,\overline{z}_i`$. If a given diagonal element of $`T`$ is $`T_i=A+Bi+Cj+Dk`$, then $`z_i=A\pm i\sqrt{B^2+C^2+D^2}`$.
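The eigenvalue formula for a single quaternionic diagonal element can be checked through the standard 2-by-2 complex representation of a quaternion. The sketch below is our own illustration, assuming the usual representation with diagonal entries $`A\pm iB`$ and off-diagonal entries $`C+iD`$ and $`-C+iD`$; it solves the characteristic polynomial and recovers $`z=A\pm i\sqrt{B^2+C^2+D^2}`$.

```python
import cmath

def quat_eigs(A, B, C, D):
    """Eigenvalues of the 2x2 complex matrix representing q = A + Bi + Cj + Dk:
        [[A + iB,  C + iD],
         [-C + iD, A - iB]]
    found from the characteristic polynomial l^2 - (tr) l + det = 0."""
    tr = 2 * A                                  # (A+iB) + (A-iB)
    det = A * A + B * B + C * C + D * D         # (A+iB)(A-iB) - (C+iD)(-C+iD)
    disc = cmath.sqrt(tr * tr - 4 * det)        # = 2i * sqrt(B^2 + C^2 + D^2)
    return (tr + disc) / 2, (tr - disc) / 2

A, B, C, D = 0.7, 0.3, -0.5, 0.2
lam1, lam2 = quat_eigs(A, B, C, D)
r = (B * B + C * C + D * D) ** 0.5
print(lam1, lam2)   # A + i*r and A - i*r: a complex-conjugate pair
```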
The Jacobian associated with this change of variables is $`\prod _{i<j}(z_i-z_j)(z_i-\overline{z}_j)`$. Further,
$$e^{-\frac{N}{2}\mathrm{Tr}(H_h^{\dagger }H_h)-\frac{N^2a}{2}\mathrm{Tr}(H_a^{\dagger }H_a)}=e^{-\frac{N}{2}\mathrm{Tr}(T_h^{\dagger }T_h)-\frac{N^2a}{2}\mathrm{Tr}(T_a^{\dagger }T_a)}$$
(31)
where $`T_h,T_a`$ are Hermitian and anti-Hermitian parts of $`T`$. The integral over the elements of $`T`$ above the diagonal can be done trivially as this integral is Gaussian. The integral over the diagonal elements of $`T`$ includes a Gaussian factor and a factor from the Jacobian. This integral is exactly the integral over the j.p.d. of equation (29).
Given equation (29), we could follow the procedure of the previous section. However, we would run into some difficulties which are purely technical. The problem is that, while in the strongly non-Hermitian case the eigenvalue density is independent of $`x`$, it is not independent of $`x`$ in the weakly non-Hermitian case. This makes the power series expansion very awkward. We will find it convenient to change to a different geometry, given in equation (32) below, for the weakly non-Hermitian case. Let me again stress that the reason for choosing a different geometry is purely technical, to simplify the math.
An analogous simplification is often used in the Hermitian ensembles. For example, consider the Gaussian Symplectic Ensemble in the large $`N`$ limit. The eigenvalue density is a function of energy, but if one appropriately scales all energies by the local level spacing, it is simpler to obtain correlation functions from the Circular Symplectic Ensemble.
Let us introduce new coordinates, $`z=\varphi +ir`$ and $`\overline{z}=\varphi -ir`$, where $`\varphi `$ is periodic with period $`2\pi `$. Let us replace equation (29) by
$$\frac{1}{N!}\prod _ie^{-aN^2r_i^2}\frac{(e^{iz_i}-e^{i\overline{z}_i})^2}{e^{2i\varphi _i}}\prod _{i<j}\frac{(e^{iz_i}-e^{iz_j})(e^{iz_i}-e^{i\overline{z}_j})(e^{i\overline{z}_i}-e^{iz_j})(e^{i\overline{z}_i}-e^{i\overline{z}_j})}{e^{2i\varphi _i+2i\varphi _j}}\prod _i\mathrm{d}\overline{z}_i\mathrm{d}z_i$$
(32)
This describes a system of $`N`$ pairs of levels, with average level spacing $`2\pi /N`$. The imaginary part of the level is of order $`1/N`$, so it is of order the level spacing. For large $`a`$ this reduces to the Circular Symplectic Ensemble. For finite $`a`$ and large $`N`$, we expect that the ensemble of equation (32) reproduces the behavior of the ensemble of equation (29) within a small neighborhood of some given energy, just as the Circular Symplectic Ensemble reproduces the results of the Gaussian Symplectic Ensemble within a neighborhood of a given energy.
The next step is to write equation (32) as a correlation function in a fermionic field theory. We will introduce $`N`$ creation operators at $`r=+\mathrm{\infty }`$ and $`N`$ creation operators at $`r=-\mathrm{\infty }`$. We find that the desired correlation function is
$$\frac{1}{N!}\underset{L\to \mathrm{\infty }}{lim}e^{N^2L}\left\langle \left(\prod _{j=1}^{N}\psi ^{\dagger }(b_j)\right)\left(\prod _{j=1}^{N}\psi ^{\dagger }(c_j)\right)\left(\prod _jU(r_j)\psi (\overline{z}_j)\psi (z_j)\mathrm{d}\overline{z}_j\mathrm{d}z_j\right)\right\rangle $$
(33)
where $`b_j=\frac{2\pi j}{N}+iL`$ and $`c_j=\frac{2\pi j}{N}-iL`$ and $`U(r_j)=e^{-aN^2r_j^2}(e^{r_j}-e^{-r_j})`$. For large $`N`$, we can write $`U(r_j)=e^{-aN^2r_j^2}\,2r_j`$.
Now, we will introduce Fourier modes for the creation and annihilation operators, writing $`\psi (z)=\sum _ke^{ikz}a(k)`$ and $`\psi ^{\dagger }(z)=\sum _ke^{-ikz}a^{\dagger }(k)`$. In the limit $`L\to \mathrm{\infty }`$, the only states involved in equation (33) are those with $`k=-N+1/2,-N+3/2,\mathrm{\ldots },N-1/2`$. If there are excitations in states with higher $`k`$, they will vanish in the large $`L`$ limit. Then, equation (33) can be written as
$$\frac{1}{N!}\underset{L\to \mathrm{\infty }}{lim}\left\langle \left(\prod _{k=-N+1/2}^{N-1/2}a^{\dagger }(k)\right)\prod _j\left(\sum _{k,k^{\prime }}a(k)a(k^{\prime })e^{ik\overline{z}_j}e^{ik^{\prime }z_j}U(r_j)\mathrm{d}\overline{z}_j\mathrm{d}z_j\right)\right\rangle $$
(34)
As in the previous section, we will integrate over some set of $`z_j`$, for $`j=1\mathrm{}M`$, inside the correlation function. The integral
$$\prod _{j=1}^{M}\int \left(\sum _{k,k^{\prime }}a(k)a(k^{\prime })e^{ik\overline{z}_j}e^{ik^{\prime }z_j}U(r_j)\right)\mathrm{d}\overline{z}_j\mathrm{d}z_j$$
(35)
is equal to
$$O_w^M$$
(36)
where the operator $`O_w`$ is defined by
$$O_w=8\left(\frac{\pi }{aN^2}\right)^{3/2}\sum _k\left(ke^{\frac{k^2}{aN^2}}a(k)a(-k)\right)$$
(37)
So, if we integrate over all coordinates $`\overline{z},z`$ in equation (34), we obtain
$$Z=\frac{1}{N!}\left\langle \left(\prod _{k=-N+1/2}^{N-1/2}a^{\dagger }(k)\right)(O_w)^N\right\rangle =\prod _{k=1/2}^{N-1/2}\left(16(\frac{\pi }{aN^2})^{3/2}ke^{\frac{k^2}{aN^2}}\right)$$
(38)
To obtain the eigenvalue density, we must integrate over all but one of the coordinates in equation (34). Using equation (36), and normalizing with equation (38), we obtain
$$\frac{U(r)\sum _{m,m^{\prime }}e^{im\overline{z}}e^{im^{\prime }z}\left\langle \left(\prod _{k=-N+1/2}^{N-1/2}a^{\dagger }(k)\right)(O_w)^{N-1}a(m)a(m^{\prime })\right\rangle \mathrm{d}\overline{z}\mathrm{d}z}{(N-1)!Z}$$
(39)
The correlation function in equation (39) is non-vanishing only if $`m=-m^{\prime }`$. Equation (39) is equal to
$$U(r)\sum _{m=-N+1/2}^{N-1/2}e^{im(\overline{z}-z)}G_w(m)\mathrm{d}\overline{z}\mathrm{d}z$$
(40)
where
$$G_w(m)=\frac{1}{16m}(\frac{\pi }{aN^2})^{-3/2}e^{-\frac{m^2}{aN^2}}$$
(41)
In the large $`N`$ limit, we can simplify equation (40) by introducing scaled coordinates. Let us introduce $`k=m/N`$ and let us also scale $`z`$ by a factor of $`N`$ so that $`\varphi `$ now runs from $`0`$ to $`2\pi N`$. Then we can replace the sum by an integral and obtain
$$\rho (\varphi ,r)\mathrm{d}\varphi \mathrm{d}r=\frac{1}{4}(\frac{\pi }{a})^{-3/2}re^{-ar^2}G_w(\overline{z},z)\mathrm{d}\varphi \mathrm{d}r$$
(42)
where
$$G_w(\overline{z},z)=\int _{-1}^{1}e^{ik(\overline{z}-z)}e^{-k^2/a}\frac{1}{k}dk$$
(43)
As in the previous section, the proper normalization of the above result is automatic from the derivation. It is possible to show that equation (42) is equivalent to equation (26) in the limit of very small $`a`$. The qualitative feature of a depletion of eigenvalues near the real axis is the same for weak and strong non-Hermiticity. Equation (42) may be compared to the results of the SUSY calculation, and found to agree, with some differences in notation between the two calculations.
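A short numerical sketch (ours; it assumes the density formula as reconstructed here, with $`\overline{z}-z=-2ir`$ so that the odd $`1/k`$ part of the integrand combines into a finite $`\mathrm{sinh}`$ term) makes the depletion of eigenvalues near the real axis explicit:

```python
import numpy as np

a = 1.0
ks = np.linspace(1e-6, 1.0, 2000)
dk = ks[1] - ks[0]

def rho(r):
    # G_w(zbar, z) with zbar - z = -2*i*r: the 1/k pole cancels in the odd
    # combination of +k and -k, leaving 2*sinh(2*k*r)/k, finite as k -> 0.
    integrand = 2.0 * np.sinh(2.0 * ks * r) / ks * np.exp(-(ks ** 2) / a)
    return r * np.exp(-a * r ** 2) * np.sum(integrand) * dk  # constants dropped

rs = np.linspace(0.0, 3.0, 301)
vals = np.array([rho(r) for r in rs])
print("rho(0) =", vals[0])                 # vanishes on the real axis
print("rho peaks near r =", rs[np.argmax(vals)])
```

The density vanishes at $`r=0`$, rises to a maximum at $`r`$ of order one in the scaled units, and is cut off by the Gaussian factor at large $`r`$, which is the qualitative behavior described in the text.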
## IV Multi-Point Correlation Functions
The calculation of multi-level correlation functions is quite easy. In equations (9,34), we must integrate over all except for two, three, or more, of the coordinate pairs $`\overline{w}_i,w_i`$. Since the system is a non-interacting fermion system, the multi-point Green’s functions can be expressed very simply in terms of the Green’s function (21), using Wick’s theorem. This permits the two-point correlation function to be easily generalized to a multi-point correlation function, as conjectured previously. We will not show this in detail, but simply sketch the results, first for the strongly non-Hermitian case and then for the weakly non-Hermitian case.
### A Strongly Non-Hermitian Case
First let us examine the strongly non-Hermitian case, generalizing the results of section II. Consider the two-level correlation function, the probability to find a pair of levels at position $`\overline{z},z`$ given that there is another pair at position $`\overline{z}^{\prime },z^{\prime }`$. Then, the two-level correlation function is
$$\left(\frac{1}{4\pi }\right)^2e^{-\overline{z}z-\overline{z}^{\prime }z^{\prime }}(z-\overline{z})(z^{\prime }-\overline{z}^{\prime })\left(G(\overline{z},z)G(\overline{z}^{\prime },z^{\prime })-G(\overline{z},z^{\prime })G(\overline{z}^{\prime },z)+G(\overline{z},\overline{z}^{\prime })G(z^{\prime },z)\right)\mathrm{d}\overline{z}\mathrm{d}z\mathrm{d}\overline{z}^{\prime }\mathrm{d}z^{\prime }$$
(44)
This is just an application of Wick’s theorem.
Let us consider the behavior of equation (44) in the limit when both $`z`$ and $`z^{\prime }`$ are far from the real axis. Without loss of generality, assume that $`\mathrm{Re}(z^{\prime })>\mathrm{Re}(z)`$. Then, use the integral representation of the Green’s function to rewrite $`e^{-\overline{z}z-\overline{z}^{\prime }z^{\prime }}G(\overline{z},z^{\prime })G(\overline{z}^{\prime },z)`$ as
$$e^{-|z-z^{\prime }|^2}\left(\int _0^{\mathrm{\infty }}e^{(\overline{z}-z^{\prime })t}e^{-t^2/2}dt-\frac{1}{2}e^{(\overline{z}-z^{\prime })^2/2}\right)\left(\frac{1}{2}e^{(\overline{z}^{\prime }-z)^2/2}-\int _{-\mathrm{\infty }}^{0}e^{(\overline{z}^{\prime }-z)t}e^{-t^2/2}dt\right)$$
(45)
We can find a similar representation for $`G(\overline{z},\overline{z}^{\prime })G(z^{\prime },z)`$. Now, in the limit with $`z`$ and $`z^{\prime }`$ both far from the real axis, either $`\mathrm{Im}(z-z^{\prime })`$ is large or $`\mathrm{Im}(z-\overline{z}^{\prime })`$ is large. In the first case, equation (45) is exponentially small because of the factor of $`e^{-|z-z^{\prime }|^2}`$. In the second case, the integrals over $`t`$ can be performed in this limit, while $`e^{(\overline{z}-z^{\prime })^2/2}`$ and $`e^{(\overline{z}^{\prime }-z)^2/2}`$ are small. The integral $`\int _0^{\mathrm{\infty }}e^{(\overline{z}-z^{\prime })t}e^{-t^2/2}dt`$ is equal to $`\frac{1}{z^{\prime }-\overline{z}}`$ for large $`\mathrm{Im}(z^{\prime }-\overline{z})`$; here we rely on the fact that $`\mathrm{Re}(z^{\prime }-\overline{z})>0`$.
So, up to exponentially small terms, equation (45) is equal to
$$e^{-|z-z^{\prime }|^2}\frac{1}{\overline{z}-z^{\prime }}\frac{1}{\overline{z}^{\prime }-z}$$
(46)
Also, in this limit, if equation (46) is not exponentially small, then $`\frac{1}{\overline{z}-z^{\prime }}\frac{1}{\overline{z}^{\prime }-z}=\frac{1}{\overline{z}-z}\frac{1}{\overline{z}^{\prime }-z^{\prime }}`$. Inserting this result, and similar results for $`G(\overline{z},\overline{z}^{\prime })G(z^{\prime },z)`$, back into equation (44) we find that, for both $`z`$ and $`z^{\prime }`$ far from the real axis, the two-level correlation function is equal to
$$\left(\frac{1}{4\pi }\right)^2\left(1-e^{-|z-z^{\prime }|^2}-e^{-|z-\overline{z}^{\prime }|^2}\right)\mathrm{d}\overline{z}\mathrm{d}z\mathrm{d}\overline{z}^{\prime }\mathrm{d}z^{\prime }$$
(47)
This is essentially the same as the two-level correlation function found in the complex non-Hermitian case.
For $`z`$ and $`z^{\prime }`$ near the real axis, I have examined the behavior of equation (44) numerically. If $`\mathrm{Im}(z)=\mathrm{Im}(z^{\prime })`$, then the correlation function is a monotonically decaying function of $`\mathrm{Re}(z-z^{\prime })`$, with no signs of any oscillation. The correlation function is exponentially small if both $`z`$ and $`z^{\prime }`$ are near the real axis.
### B Weakly Non-Hermitian Case
Now let us consider the two-level correlation function in the weakly non-Hermitian case, the probability to find one pair of levels $`\overline{z},z`$ given that there is another pair at $`\overline{z}^{\prime },z^{\prime }`$. As before, we must integrate over all except for two of the eigenvalue coordinates. Using the scaled coordinates, we find that the two-level correlation function is given by
$$\frac{1}{16}(\frac{\pi }{a})^{-3}e^{-ar^2-ar^{\prime 2}}rr^{\prime }\left(G_w(\overline{z},z)G_w(\overline{z}^{\prime },z^{\prime })-G_w(\overline{z},z^{\prime })G_w(\overline{z}^{\prime },z)+G_w(\overline{z},\overline{z}^{\prime })G_w(z^{\prime },z)\right)\mathrm{d}\varphi \mathrm{d}r\mathrm{d}\varphi ^{\prime }\mathrm{d}r^{\prime }$$
(48)
As in the previous subsection, this is just an application of Wick’s theorem.
To examine the behavior of the two-level correlation function, let us integrate over $`r,r^{\prime }`$ in equation (48), to be left with a function of $`\varphi -\varphi ^{\prime }`$. The result is
$$\frac{1}{4\pi ^2}-\frac{1}{32\pi ^2}\int _{-1}^{1}\int _{-1}^{1}\frac{(k+k^{\prime })^2}{kk^{\prime }}e^{-(k-k^{\prime })^2/(2a)}e^{ik(\varphi -\varphi ^{\prime })}e^{ik^{\prime }(\varphi ^{\prime }-\varphi )}dkdk^{\prime }$$
(49)
In the limit $`a\to \mathrm{\infty }`$, the integral over $`k,k^{\prime }`$ in the above expression can be performed to yield
$$\frac{1}{4\pi ^2}-\frac{1}{4\pi ^2}\int _{-2}^{2}\left(1-\frac{|k|}{2}+\frac{|k|}{4}\mathrm{log}\left||k|-1\right|\right)e^{ik(\varphi -\varphi ^{\prime })}dk$$
(50)
Equation (50) is the known result for the correlation function in the Circular Symplectic Ensemble. It is a non-monotonic function, algebraically decaying for large $`\varphi -\varphi ^{\prime }`$. For sufficiently small $`a`$, equation (49) will describe a monotonically decaying function of $`\varphi ^{\prime }-\varphi `$, but for fixed, non-vanishing $`a`$, the function will always decay algebraically for large $`\varphi -\varphi ^{\prime }`$.
## V Conclusion
In conclusion, we have given a simple fermionic mapping for determining the correlation functions of the non-Hermitian symplectic ensemble. Although the eigenvalue density was found previously using SUSY, the present derivation is simpler and can be more easily extended to the two-level correlation function. The two-level correlation function in the strongly non-Hermitian case was found to be similar to that for the ensemble of arbitrary complex matrices. In the weakly non-Hermitian case, the two-level correlation function exhibits an interesting crossover as a function of $`a`$.
|
no-problem/9907/cond-mat9907355.html
|
ar5iv
|
text
|
# Simulation Studies on the Stability of the Vortex-Glass Order
## Abstract
The stability of the three-dimensional vortex-glass order in random type-II superconductors with point disorder is investigated by equilibrium Monte Carlo simulations based on a lattice XY model with a uniform field threading the system. It is found that the vortex-glass order, which stably exists in the absence of screening, is destroyed by the screening effect, corroborating the previous finding based on the spatially isotropic gauge-glass model. Estimated critical exponents, however, deviate considerably from the values reported for the gauge-glass model.
Due to the enhanced effect of fluctuations, the problem of the phase diagram of high-$`T_c`$ superconductors in applied magnetic fields is highly nontrivial and has attracted much interest recently. For random type-II superconductors with point disorder, the possible existence of an equilibrium thermodynamic phase called the vortex-glass (VG) phase, where the vortex is pinned on a long length scale by randomly distributed point-pinning centers, was proposed. In such a VG state, the phase of the condensate wavefunction is frozen in time but randomly in space, with a vanishing linear resistivity $`\rho _L`$. It is a truly superconducting state separated from the vortex-liquid phase with a nonzero $`\rho _L`$ via a continuous VG transition.
This proposal was supported by subsequent experiments. In particular, transport measurements on films and twinned single crystals gave evidence for the occurrence of a continuous transition into the glassy superconducting state. It should be noticed, however, that these samples often contain extended defects with correlated disorder, such as grain boundaries, twins and dislocations. Since these extended defects could pin the vortex more efficiently than point defects, the possibility still remains that extended defects play a crucial role in the experimentally observed “VG transitions”, and that a sample with only point defects behaves differently \[4-7\].
The stability of the hypothetical VG state was also studied by numerical simulations. Here, it is essential to obtain the data of appropriate thermodynamic quantities in true equilibrium and to carefully analyze their size dependence. So far, such calculations have been limited almost exclusively to a highly simplified model called the gauge-glass model. Previous simulations on the three-dimensional (3D) gauge-glass model have indicated that, while the stable VG phase exists in the absence of screening, the finite screening effect inherent to real superconductors eventually destabilizes it.
Meanwhile, it has been recognized that the gauge-glass model has some obvious drawbacks. First, it is a spatially isotropic model without a net field threading the system, in contrast to reality. Second, the source of quenched randomness is artificial. The gauge-glass model is a random flux model where the quenched randomness appears in the phase factor associated with the flux. In reality, the quenched component of the flux is uniform, nothing but the external field, and the quenched randomness occurs in the superconducting coupling. It remains unclear whether these simplifications underlying the gauge-glass model really leave the basic physics of the VG ordering in 3D unaffected.
The purpose of the present letter is to introduce a model in which the above limitations of the gauge-glass model are cured, and to examine by extensive Monte Carlo (MC) simulations the nature of the 3D VG ordering with and without screening beyond the gauge-glass model. It is found that, as in the gauge-glass model, the VG phase is stable in the absence of screening but the finite screening effect destabilizes it.
We consider the dimensionless Hamiltonian,
$`\mathcal{H}/J={\displaystyle -\sum _{<ij>}}J_{ij}\mathrm{cos}(\theta _i-\theta _j-A_{ij})`$ (1)
$`+{\displaystyle \frac{\lambda _0^2}{2}}{\displaystyle \sum _p}(\vec{\mathrm{\nabla }}\times \vec{A}-\mathrm{\Phi }_{ext})^2,`$ (2)
where $`J`$ is the typical coupling strength, $`\theta _i`$ is the phase of the condensate at the $`i`$-th site of a simple cubic lattice, $`\vec{A}`$ is the fluctuating gauge potential at each link of the lattice, the lattice curl $`\vec{\mathrm{\nabla }}\times \vec{A}`$ is the directed sum of $`A_{ij}`$’s around a plaquette with $`A_{ji}=-A_{ij}`$, and $`\lambda _0`$ is the bare penetration depth in units of the lattice spacing. $`\mathrm{\Phi }_{ext}`$ is an external flux threading the elementary plaquette $`p`$, which is equal to $`h`$ if the plaquette is on the $`xy`$-plane and zero otherwise, i.e., a uniform field is applied along the $`z`$-direction. The first sum in (1) is taken over all nearest-neighbor pairs, while the second sum runs over all elementary plaquettes. Fluctuating variables to be summed over are the phase variables, $`\theta _i`$, at each site and the gauge variables, $`A_{ij}`$, at each link. In order to allow for flux penetration into the system, we impose free boundary conditions in all directions for both $`\theta _i`$ and $`A_{ij}`$ . Quenched randomness occurs only in the superconducting coupling $`J_{ij}`$, which is assumed to be an independent random variable uniformly distributed between . We stress that the aforementioned drawbacks of the gauge-glass model have now been cured: the present model has a uniform field threading the system, and the quenched randomness occurs in the superconducting coupling, not in the gauge field.
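To make the definitions in Eq. (1) concrete, a minimal sketch of the energy function on an $`L^3`$ lattice is given below. The array layout, the free-boundary bookkeeping, and the coupling range are our own illustrative choices (in particular, the distribution range of $`J_{ij}`$ is not recoverable from this copy, so the uniform range used here is purely for illustration):

```python
import numpy as np

L, lam0, h = 4, 2.0, 1.0
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, (L, L, L))
A = rng.normal(0.0, 0.1, (3, L, L, L))   # A[d, x, y, z]: link from a site to site + e_d
J = rng.uniform(0.0, 2.0, (3, L, L, L))  # random couplings (range is illustrative)

def energy(theta, A):
    E = 0.0
    # bond term: free boundaries, so wrap-around links are excluded
    for d in range(3):
        cos = np.cos(np.roll(theta, -1, axis=d) - theta - A[d])
        sl = [slice(None)] * 3
        sl[d] = slice(0, L - 1)
        E -= np.sum((J[d] * cos)[tuple(sl)])
    # plaquette term: lattice curl minus the external flux
    for d1, d2 in [(0, 1), (1, 2), (2, 0)]:   # xy, yz, zx plaquettes
        curl = (A[d1] + np.roll(A[d2], -1, axis=d1)
                - np.roll(A[d1], -1, axis=d2) - A[d2])
        flux = h if (d1, d2) == (0, 1) else 0.0  # the field along z threads xy-plaquettes
        sl = [slice(None)] * 3
        sl[d1] = slice(0, L - 1)
        sl[d2] = slice(0, L - 1)
        E += 0.5 * lam0 ** 2 * np.sum((curl - flux)[tuple(sl)] ** 2)
    return E

print("H/J =", energy(theta, A))
```

With $`\theta =0`$ and $`A=0`$ this reduces to minus the sum of the interior couplings plus the pure flux penalty $`\frac{1}{2}\lambda _0^2h^2`$ per interior $`xy`$-plaquette, which is a convenient sanity check.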
In addition to the global U(1) gauge symmetry, the Hamiltonian (1) has a local gauge symmetry, i.e., the invariance under the local transformation $`\theta _i\to \theta _i+\mathrm{\Delta }`$ and $`A_{i,i+\delta }\to A_{i,i+\delta }+\mathrm{\Delta }`$ for an arbitrary site $`i`$ ($`A_{i,i+\delta }`$’s are all link variables emanating from the site $`i`$). We adopt the Coulomb gauge, imposing the condition, $`\mathrm{div}𝐀\equiv \sum _\delta A_{i,i+\delta }=0`$, at every site $`i`$. In the limit of vanishing screening $`\lambda _0\to \mathrm{\infty }`$, the link variable $`A_{ij}`$ is quenched to the external-field value, and the fluctuating variable becomes the phase variable $`\theta _i`$ only.
Simulation is performed based on the exchange MC method where the systems at neighboring temperatures are occasionally exchanged . We run two independent sequences of systems (replica 1 and 2) in parallel, and compute a complex overlap $`q`$ between the local superconducting order parameters of the two replicas $`\psi _i^{(1,2)}\equiv \mathrm{exp}(i\theta _i^{(1,2)})`$,
$$q=\frac{1}{N}\sum _i\psi _i^{(1)}\psi _i^{(2)*},$$
(3)
where the summation is taken over all $`N=L^3`$ sites. In terms of the overlap $`q`$, the Binder ratio is calculated by
$$g(L)=2-\frac{[\langle |q|^4\rangle ]}{[\langle |q|^2\rangle ]^2},$$
(4)
where $`\langle \cdots \rangle `$ represents the thermal average and $`[\cdots ]`$ represents the average over bond disorder. Note that, thanks to the Coulomb-gauge condition, the superconducting order parameter, which is originally not local-gauge invariant, becomes a nontrivial quantity.
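The overlap and Binder-ratio estimators can be sketched as follows; the phase configurations below are synthetic stand-ins for MC samples (not simulation data), and we use the convention that the overlap takes the complex conjugate of the second replica, so identical replicas give $`q=1`$:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_samples = 6 ** 3, 1000   # hypothetical lattice size and sample count

def binder(theta1, theta2):
    # theta1, theta2: (n_samples, N) phase snapshots of the two replicas;
    # q is the site-averaged overlap, averaged thermally over samples.
    q = np.mean(np.exp(1j * (theta1 - theta2)), axis=1)
    q2, q4 = np.mean(np.abs(q) ** 2), np.mean(np.abs(q) ** 4)
    return 2.0 - q4 / q2 ** 2

t1 = rng.uniform(0.0, 2.0 * np.pi, (n_samples, N))
t2 = rng.uniform(0.0, 2.0 * np.pi, (n_samples, N))
print("g, decorrelated replicas:", binder(t1, t2))  # near 0
print("g, identical replicas:", binder(t1, t1))     # exactly 1
```

For fully decorrelated replicas $`|q|^2\sim 1/N`$ and the ratio approaches the Gaussian value, so $`g\to 0`$; perfect order gives $`g=1`$, which is the behavior the finite-size crossing analysis exploits.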
We deal mainly with two cases: \[I\] no screening, corresponding to $`\lambda _0=\mathrm{\infty }`$, and \[II\] finite screening, corresponding to $`\lambda _0=2`$, whereas some data are taken for the case of stronger screening corresponding to $`\lambda _0=1`$. In either case, we fix the field intensity to $`h=1`$, which corresponds to $`f=1/(2\pi )\simeq 0.159`$ flux quanta per plaquette. We have chosen a fractional value of $`f`$ to avoid the commensurability effect associated with vortex-lattice formation. The lattice sizes studied are $`L=6,8,10,12`$ and 16 ($`\lambda _0=\mathrm{\infty }`$), and $`L=6,8,10,12`$ ($`\lambda _0=2`$). Equilibration is checked by monitoring the stability of the results against at least three-times longer runs for a subset of samples. The sample average is taken over 300 ($`L=6`$), 200-300 ($`L=8`$), 120 ($`L=10`$), 75-150 ($`L=12`$) and 136 ($`L=16`$) independent bond realizations.
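The replica-exchange (exchange MC) move mentioned above is simple to state; the sketch below uses the standard pair-swap acceptance rule, with made-up temperatures and instantaneous energies standing in for the real simulation state:

```python
import numpy as np

rng = np.random.default_rng(3)

def try_exchange(betas, energies, i):
    # Swap the configurations at temperature slots i and i+1 with
    # probability min(1, exp(dbeta * dE)); this preserves detailed balance
    # for the joint distribution of all replicas.
    dbeta = betas[i] - betas[i + 1]
    dE = energies[i] - energies[i + 1]
    if rng.random() < np.exp(min(0.0, dbeta * dE)):
        energies[i], energies[i + 1] = energies[i + 1], energies[i]
        return True
    return False

betas = [1.0, 0.8, 0.6, 0.4]
energies = [1.0, 1.5, 2.2, 3.0]   # mock instantaneous energies
acc = sum(try_exchange(betas, list(energies), 0) for _ in range(1000)) / 1000
print("acceptance for the coldest pair:", acc)   # close to exp(-0.1)
```

In practice the swap is attempted periodically between all adjacent temperature pairs, which lets low-temperature replicas escape local minima by diffusing up and down the temperature ladder.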
We begin with the case of no screening ($`\lambda _0=\mathrm{\infty }`$). The size and temperature dependence of the calculated Binder ratio is shown in Fig.1(a). As can be seen from Fig.1(a), $`g(L)`$ for different $`L\ge 8`$ tends to merge, or weakly cross, at $`T/J=0.68\pm 0.02`$, indicating that the VG transition occurs at a finite temperature in the absence of screening. The observed near-marginal behavior suggests that $`D=3`$ is close to the lower critical dimension.
In the present model, as well as in reality, the nature of fluctuations along the field (longitudinal direction) and perpendicular to the field (transverse direction) could differ. An extreme possibility here may be that the VG order occurs only in some spatial component, say, in the transverse component, keeping the other (longitudinal) component disordered. Indeed, the possibility of such “two-dimensional” or purely “transverse” vortex order has been discussed in the literature as a “decoupling” transition . In order to probe such an exotic possibility, we introduce a transverse Binder ratio in terms of the layer-overlap $`q_k^{\perp }`$ defined for the $`k`$-th $`xy`$-layer of the lattice by,
$$q_k^{\perp }=\frac{1}{L^2}\sum _{i\in k}\psi _i^{(1)}\psi _i^{(2)*},$$
(5)
$$g_{trans}(L)=2-\frac{(1/L)\sum _k[\langle |q_k^{\perp }|^4\rangle ]}{(1/L)\sum _k[\langle |q_k^{\perp }|^2\rangle ]^2}.$$
(6)
When the VG order occurs in each layer, $`g_{trans}(L\to \mathrm{\infty })`$ should be nonzero. In particular, if the purely transverse VG order is to occur as a consequence of the layer-decoupling, $`g_{trans}(L\to \mathrm{\infty })`$ should stay finite while $`g(L\to \mathrm{\infty })`$ vanishes.
The calculated $`g_{trans}(L)`$ is shown in Fig.1(b). As can be seen from Fig.1(b), $`g_{trans}(L)`$ exhibits behavior quite similar to $`g(L)`$, revealing a merging or a weak crossing at $`T/J=0.67\pm 0.02`$. This indicates that the present model exhibits only a single bulk VG transition where both the transverse and longitudinal components order simultaneously.
In the case of finite screening $`\lambda _0=2`$, as shown in Figs.2(a) and 2(b), $`g(L)`$ and $`g_{trans}(L)`$ constantly decrease with increasing $`L`$ at all temperatures studied, suggesting that a finite-temperature VG transition is absent in the presence of screening. Similar behavior is observed also for the case of stronger screening, $`\lambda _0=1`$, and it can be concluded that the screening effect destabilizes the VG order at finite temperature. Thus, concerning the presence or absence of the VG order, the present model yields the same answer as the spatially isotropic gauge-glass model.
Next, we turn to the critical properties of the model. For the finite-temperature VG transition at $`\lambda _0=\mathrm{\infty }`$, we estimate the correlation-length exponent $`\nu =2.2\pm 0.4`$ via the finite-size-scaling analysis of $`g(L)`$ (see Fig.3). Then, from the order parameter $`q^{(2)}=[\langle |q|^2\rangle ]`$, the critical-point-decay exponent is determined to be $`\eta =-0.5\pm 0.2`$. Likewise, from the decay of the autocorrelation function of the superconducting order parameter at $`T=T_g`$ , the dynamical exponent $`z`$ is estimated to be $`z=3.3\pm 0.5`$.
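The finite-size-scaling estimate of $`\nu `$ amounts to finding the exponent that best collapses $`g(L,T)`$ onto a single curve of $`L^{1/\nu }(T-T_g)`$. A minimal sketch on synthetic data (our own crossover function standing in for the simulation data) is:

```python
import numpy as np

# Synthetic Binder data obeying g(L, T) = f(L^{1/nu}(T - Tg)); f is an
# arbitrary smooth crossover function standing in for the measured g(L).
nu_true, Tg = 2.2, 0.68
f = lambda x: 1.0 / (1.0 + np.exp(3.0 * x))
Ls = [6, 8, 10, 12, 16]
Ts = np.linspace(0.60, 0.76, 9)

def collapse_cost(nu):
    # Scale T with the trial exponent and measure how far each curve sits
    # from the largest-L curve, interpolated at matching scaling variables.
    xs = [L ** (1.0 / nu) * (Ts - Tg) for L in Ls]
    gs = [f(L ** (1.0 / nu_true) * (Ts - Tg)) for L in Ls]
    ref_x, ref_g = xs[-1], gs[-1]
    cost = 0.0
    for x, g in zip(xs[:-1], gs[:-1]):
        m = (x > ref_x.min()) & (x < ref_x.max())
        cost += np.sum((np.interp(x[m], ref_x, ref_g) - g[m]) ** 2)
    return cost

trial = np.linspace(1.5, 3.5, 81)
best = trial[np.argmin([collapse_cost(n) for n in trial])]
print("best-collapse nu:", best)   # recovers a value close to nu_true
```

The same construction, applied to $`g(L)`$ and $`g_{trans}(L)`$ separately, is what allows the bulk and transverse exponents quoted in the text to be compared.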
In order to examine the possibility of anisotropic scaling, where the longitudinal and transverse correlation lengths have different $`\nu `$ values, we perform the scaling analysis of $`g_{trans}`$ and $`q_{trans}^{(2)}`$ as compared with $`g`$ and $`q^{(2)}`$: if the scaling were really anisotropic, there would be no reason to expect that the exponents determined from these transverse quantities coincide with those determined from the bulk quantities. We get $`\nu \simeq 1.8`$ and $`\eta \simeq -0.5`$ from $`g_{trans}`$ and $`q_{trans}^{(2)}`$, which agree within the errors with $`\nu `$ and $`\eta `$ determined above from $`g`$ and $`q^{(2)}`$. We found no evidence of anisotropic scaling, although the occurrence of a small anisotropy cannot be ruled out from the present data.
Concerning the $`T_g=0`$ transition for $`\lambda _0=2`$, we get $`\nu \simeq 4.0`$ and $`\nu \simeq 3.5`$ from $`g`$ and $`q^{(2)}`$, and $`\nu \simeq 4.0`$ and $`\nu \simeq 3.0`$ from $`g_{trans}`$ and $`q_{trans}^{(2)}`$, which again agree within the errors. (Note that $`\eta `$ should be equal to $`-1`$ for a $`T=0`$ transition with a nondegenerate ground state, as expected in the present model.) Hence, the scaling also appears to be isotropic, with $`\nu =3.5\pm 1.0`$.
The obtained exponents are summarized in Table 1 and are compared with the values reported for the spatially isotropic gauge-glass model with and without screening. Apparently, there exists a significant deviation between the two results. In particular, $`\nu `$ of the present model appears to be significantly larger than $`\nu `$ of the gauge-glass model , which might suggest that the present model lies in a universality class different from that of the gauge-glass model.
Finally, we wish to discuss the experimental implications of our results. The present study suggests, in accord
TABLE 1 Critical exponents of the present model compared with the values reported for the gauge-glass model for both cases of $`\lambda _0=\mathrm{\infty }`$ (no screening) and $`\lambda _0<\mathrm{\infty }`$ (finite screening).
| | model | $`\nu `$ | $`\eta `$ | $`z`$ |
| --- | --- | --- | --- | --- |
| $`\lambda _0=\mathrm{\infty }`$ | present | $`2.2(4)`$ | $`-0.5(2)`$ | $`3.3(5)`$ |
| ($`T_g>0`$) | gauge glass | $`1.3(4)`$ $`1.3(3)`$ | | $`4.7(7)`$ |
| $`\lambda _0<\mathrm{\infty }`$ | present | $`3.5(10)`$ | $`-1`$ | |
| ($`T_g=0`$) | gauge glass | $`1.05(10)`$ $`1.05(3)`$ | | |
with the previous studies based on the gauge-glass model, that in random type-II superconductors with point disorder there should be no VG phase at finite temperature in the strict sense. As discussed, experimental data on films and twinned crystals supporting the occurrence of a thermodynamic transition into the truly superconducting glassy state might well reflect the properties associated with extended defects. In this connection, the properties of a sample exclusively containing point defects are of great interest \[4-6\]. Recently, Petrean et al measured the transport properties of such a sample, untwinned, proton-irradiated YBCO . These authors observed an Ohmic behavior at all temperatures studied, but found that the linear resistivity appeared to vanish at a finite $`T_g^{}`$ as $`\rho _L\propto (T-T_g^{})^s`$ with a universal exponent $`s\simeq 5.3\pm 0.7`$. Since the non-Ohmic region could not directly be reached in these measurements, it is not clear at the present stage whether the non-Ohmic behavior really sets in below the apparent $`T_g^{}`$ deduced from the high-temperature Ohmic regime. Another possibility, which is entirely consistent with the present result, may be that the power-law decrease of $`\rho _L`$ eventually breaks down at some temperature close to $`T_g^{}`$, yielding a small but nonzero $`\rho _L`$ even at $`T<T_g^{}`$. If this is the case, the universal critical behavior of $`\rho _L`$ observed above $`T_g^{}`$ should be governed by the $`\lambda _0=\mathrm{\infty }`$ VG fixed point which, however, should eventually be unstable against the screening effect. Indeed, our present estimate of $`s=(z-1)\nu \simeq 5.1`$ for the $`\lambda _0=\mathrm{\infty }`$ transition is close to the experimental value of Ref.. It might be interesting to experimentally determine the exponents $`\nu `$, $`z`$ and $`\eta `$ separately, as well as to go further down to lower temperatures in order to examine whether the non-Ohmic behavior really sets in below $`T_g^{}`$.
In summary, we have introduced a VG model which possesses a uniform field and cures the limitations of the gauge-glass model. Extensive simulations show that, while the stable vortex-glass phase occurs in the absence of screening, it is eventually destroyed by the screening effect. Critical exponents associated with the VG transitions appear to differ from those reported for the gauge-glass model, suggesting that real VG transitions may lie in a different universality class.
The numerical calculation has been performed on the FACOM VPP500 at the supercomputer center, ISSP, University of Tokyo. The author is thankful to R. Ikeda and S. Okuma for useful discussion.
|
no-problem/9907/cond-mat9907104.html
|
ar5iv
|
text
|
# Universality in metallic nanocohesion: a quantum chaos approach
## Abstract
Convergent semiclassical trace formulae for the density of states and cohesive force of a narrow constriction in an electron gas, whose classical motion is either chaotic or integrable, are derived. It is shown that mode quantization in a metallic point contact or nanowire leads to universal oscillations in its cohesive force: the amplitude of the oscillations depends only on a dimensionless quantum parameter describing the crossover from chaotic to integrable motion, and is of order 1 nano-Newton, in agreement with recent experiments. Interestingly, quantum tunneling is shown to be described quantitatively in terms of the instability of the classical periodic orbits.
An intriguing question posed by Kac is, “Can one hear the shape of a drum?” That is, given the spectrum of the wave equation or Schrödinger’s equation for free particles on a domain, can one infer the domain’s shape? This question was answered in the negative ; nevertheless there is an intimate relation between the two. In the context of metallic nanocohesion , a related question has recently emerged: “Can one feel the shape of a metallic nanocontact?” It was shown experimentally that the cohesive force of Au nanocontacts exhibits mesoscopic oscillations on the nano-Newton scale, which are synchronized with steps of order $`2e^2/h`$ in the contact conductance. In a previous article , it was argued that these mesoscopic force oscillations, like the corresponding conductance steps , can be understood by considering the nanocontact as a waveguide for the conduction electrons (which are responsible for both conduction and cohesion in simple metals). Each quantized mode transmitted through the contact contributes $`2e^2/h`$ to the conductance and a force of order $`\epsilon _F/\lambda _F`$ to the cohesion, where $`\lambda _F`$ is the de Broglie wavelength at the Fermi energy $`\epsilon _F`$. It was shown by comparing various geometries that the force oscillations were determined by the area and symmetry of the narrowest cross-section of the contact, and depended only weakly on other aspects of the geometry. Subsequent studies confirmed this observation, both for generic geometries , whose classical dynamics is chaotic, and for special geometries , whose classical dynamics is integrable. The insensitivity of the force oscillations to the details of the geometry, along with the approximate independence of their r.m.s. size on the contact area, was termed universality in Ref. . A fundamental explanation of the universality observed in both the model calculations and the experiments has so far been lacking.
In this Letter, we derive semiclassical trace formulae for the force and charge oscillations of a metallic nanocontact, modeled as a constriction in an electron gas with hard-wall boundary conditions (see Fig. 1 inset), by adapting methods from quantum chaos to describe the quantum mechanics of such an open system. It is found that Gutzwiller-type trace formulae , which typically do not converge for closed systems, not only converge, but give quantitatively accurate results for open quantum mechanical systems, which are typically more difficult to treat than closed systems by other methods. Using these techniques, we demonstrate analytically that the force oscillations $`\delta F`$ of a narrow constriction in a three-dimensional (3D) electron gas (i) depend only on the diameter $`D^{}`$ and radius of curvature $`R`$ of the neck, (ii) have an r.m.s. value which is independent of the conductance $`G`$ of the contact and depends only on a scaling parameter $`\alpha `$ which describes the crossover from chaotic to integrable motion, and (iii) are proportional to the charge oscillations induced on the contact by the quantum confinement. Furthermore, we show (iv) that quantum tunneling through the constriction is determined by the instability of the classical periodic orbits within the constriction, and that the force and charge oscillations are suppressed only weakly (algebraically) by tunneling, unlike conductance quantization, which is suppressed exponentially . Conclusion (ii) is specific to 3D contacts, and breaks down for, e.g., two-dimensional (2D) nanowires, where $`\text{rms}\,\delta F\propto G^{1/2}`$. Conclusions (i), (ii), and (iv) are unchanged when electron-electron interactions are included within the Hartree approximation.
The properties of simple metals are determined largely by the conduction electrons, the simplest model of which is a free-electron gas confined within the surface of the metal. Here we take the confinement potential to be a hard wall; the effects of interest to us are virtually unchanged when one considers a more realistic confinement potential . The grand canonical potential $`\mathrm{\Omega }`$ is the appropriate thermodynamic potential describing the energetics of the electron gas in the nanocontact , and is
$$\mathrm{\Omega }=-\frac{1}{\beta }\int dE\,g(E)\,\mathrm{ln}\left(1+e^{-\beta (E-\mu )}\right),$$
(1)
where $`g(E)`$ is the electronic density of states (DOS) and $`\beta `$ is the inverse temperature . The total number of electrons in the system is
$$N_{}=\int dE\,f(E)\,g(E),$$
(2)
where $`f(E)`$ is the Fermi-Dirac distribution function. The DOS of an open quantum system, such as that shown in Fig. 1 (inset), is given in terms of the electronic scattering matrix $`S(E)`$ by $`g(E)=(2\pi i)^{-1}\text{Tr}\{S^{\dagger }(E)\partial S/\partial E-\text{H.c.}\}`$, where a factor of 2 for spin has been included.
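Equations (1) and (2) are straightforward to evaluate numerically once $`g(E)`$ is known. A minimal sketch in plain Python (units $`\hbar ^2/2m=1`$, $`𝒱=1`$; only the bulk term of the Weyl DOS is used, and the function names are mine), which recovers the zero-temperature free-electron values $`\mathrm{\Omega }=-(2/15\pi ^2)k_F^3𝒱\epsilon _F`$ and $`N=k_F^3𝒱/3\pi ^2`$ at low temperature:

```python
import math

def fermi(E, mu, beta):
    # Fermi-Dirac distribution f(E)
    x = beta * (E - mu)
    return 0.0 if x > 500.0 else 1.0 / (1.0 + math.exp(x))

def omega_and_N(g, mu, beta, Emax, n=20000):
    """Trapezoidal evaluation of Eqs. (1)-(2):
    Omega = -(1/beta) int dE g(E) ln(1 + e^{-beta(E-mu)}),  N = int dE f(E) g(E)."""
    dE = Emax / n
    Om = 0.0
    N = 0.0
    for i in range(n + 1):
        E = i * dE
        w = 0.5 if i in (0, n) else 1.0
        x = -beta * (E - mu)
        # log(1 + e^x), evaluated without overflow
        ln_term = x + math.log1p(math.exp(-x)) if x > 0 else math.log1p(math.exp(x))
        Om -= w * dE * g(E) * ln_term / beta
        N += w * dE * g(E) * fermi(E, mu, beta)
    return Om, N

# bulk (volume) term of the Weyl DOS with hbar^2/2m = 1 and V = 1,
# so that k_E = sqrt(E):  g_V(E) = k_E^3 V/(2 pi^2 E) = sqrt(E)/(2 pi^2)
g_V = lambda E: math.sqrt(E) / (2.0 * math.pi ** 2) if E > 0 else 0.0

eps_F = 1.0
Om, N = omega_and_N(g_V, eps_F, beta=200.0, Emax=3.0)

# zero-temperature free-electron values these should approach
Om_T0 = -2.0 * eps_F ** 2.5 / (15.0 * math.pi ** 2)
N_T0 = eps_F ** 1.5 / (3.0 * math.pi ** 2)
```

At $`\beta \epsilon _F=200`$ the finite-temperature corrections are of order $`(k_BT/\epsilon _F)^2\sim 10^{-5}`$, so the quadrature reproduces the zero-temperature values to better than a part in $`10^3`$.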
The DOS can be decomposed in terms of a smooth Weyl contribution $`\overline{g}(E)`$ and a fluctuating term $`\delta g(E)`$,
$$g(E)=\frac{k_E^3𝒱}{2\pi ^2E}-\frac{k_E^2𝒮}{8\pi E}+\frac{k_E𝒞}{6\pi ^2E}+\delta g(E),$$
(3)
where $`k_E=(2mE/\hbar ^2)^{1/2}`$, $`𝒱`$ is the volume of the electron gas, $`𝒮`$ is its surface area, and $`𝒞=\frac{1}{2}\int d\sigma \left(1/R_1+1/R_2\right)`$ is the mean curvature of its surface, $`R_{1,2}`$ being the principal radii of curvature. The first three terms in Eq. (3) are macroscopic, while $`\delta g`$ determines the mesoscopic fluctuations of the equilibrium properties of the system. Inserting Eq. (3) into Eqs. (1) and (2), and taking the limit of zero temperature, one finds
$$\frac{\mathrm{\Omega }}{\epsilon _F}=-\frac{2k_F^3𝒱}{15\pi ^2}+\frac{k_F^2𝒮}{16\pi }-\frac{2k_F𝒞}{9\pi ^2}+\frac{\delta \mathrm{\Omega }}{\epsilon _F},$$
(4)
$$N_{}=\frac{k_F^3𝒱}{3\pi ^2}-\frac{k_F^2𝒮}{8\pi }+\frac{k_F𝒞}{3\pi ^2}+\delta N_{},$$
(5)
where $`k_F=2\pi /\lambda _F`$ is the Fermi wavevector. The corrections to Eqs. (4) and (5) at finite temperature may be evaluated straightforwardly , and are quite small at room temperature, since $`\epsilon _F/k_B>10^4K`$.
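The surface and curvature coefficients of Eqs. (4) and (5) follow from integrating the corresponding terms of the Weyl expansion (3) term by term. A quick quadrature check (plain Python, units $`\hbar ^2/2m=1`$ and $`𝒮=𝒞=1`$; the substitution $`E=u^2`$ removes the integrable endpoint singularity of the curvature term):

```python
import math

def trap(f, a, b, n=20000):
    """Plain trapezoidal rule."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

eps_F = 1.0
kF = math.sqrt(eps_F)   # k_E = sqrt(E) in these units

gS = lambda E: -1.0 / (8.0 * math.pi)                     # -k_E^2 S/(8 pi E) = -1/(8 pi)
gC = lambda E: 1.0 / (6.0 * math.pi ** 2 * math.sqrt(E))  # k_E C/(6 pi^2 E)

# particle-number contributions, integrated with the substitution E = u^2
N_S = trap(lambda u: gS(u * u) * 2 * u, 1e-8, math.sqrt(eps_F))
N_C = trap(lambda u: gC(u * u) * 2 * u, 1e-8, math.sqrt(eps_F))
# grand-potential contributions at T = 0: Omega = int dE g(E) (E - eps_F)
Om_S = trap(lambda u: gS(u * u) * (u * u - eps_F) * 2 * u, 1e-8, math.sqrt(eps_F))
Om_C = trap(lambda u: gC(u * u) * (u * u - eps_F) * 2 * u, 1e-8, math.sqrt(eps_F))
```

The four numbers land on $`-k_F^2/8\pi `$, $`+k_F/3\pi ^2`$, $`+\epsilon _Fk_F^2/16\pi `$ and $`-2\epsilon _Fk_F/9\pi ^2`$, i.e. exactly the surface and curvature terms of Eqs. (5) and (4).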
The cohesive force of the nanocontact is given by the derivative of the grand canonical potential with respect to the elongation, $`F=-\partial \mathrm{\Omega }/\partial L`$. Under elongation, the contact narrows and its surface area $`𝒮`$ increases. The increase of $`𝒮`$ under elongation would lead to a macroscopic surface charge by Eq. (5). This is due to the hard-wall boundary condition, which leads to a depletion of negative charge in a layer of thickness $`\lambda _F`$ at the boundary . The macroscopic incompressibility of the electron gas can be included by imposing the constraint $`\overline{N}_{}=\text{const.}`$ , where $`\overline{N}_{}`$ is given by the first three terms in Eq. (5). The macroscopic electronic charge $`e\overline{N}_{}`$ is neutralized by the equal and opposite positive charge of the jellium background. The net charge imbalance on the nanocontact (neglecting screening) is thus $`\delta Q_0=e\delta N_{}`$, which we will show to be quite small—on the order of a single electron charge. Differentiating Eq. (4) with respect to $`L`$ with the constraint $`\overline{N}_{}=\text{const.}`$, one finds
$$F=-\frac{\partial \mathrm{\Omega }}{\partial L}|_{\overline{N}_{}}=-\frac{\sigma _𝒱}{5}\frac{\partial 𝒮}{\partial L}+\frac{2}{5}\frac{\partial (𝒞/\pi )}{\partial L}\mathrm{\Delta }F_{\mathrm{top}}+\delta F,$$
(6)
where $`\sigma _𝒱=\epsilon _Fk_F^2/16\pi `$ is the surface energy of a noninteracting electron gas at fixed $`𝒱`$ and $`\mathrm{\Delta }F_{\mathrm{top}}=4\epsilon _F/9\lambda _F`$. The reduction of the surface energy by a factor of 5 has been discussed by Lang . The second term on the right-hand-side of Eq. (6), termed the “topological force” by Höppler and Zwerger since it depends only on the topology of the cross-section in the adiabatic limit, is reduced by a factor of 2.5. Importantly, since the constraint $`\overline{N}_{}=\text{const.}`$ differs from the constraint $`𝒱=\text{const.}`$ used in previous work only by terms of order $`(k_FD^{})^{-1}`$, the mesoscopic fluctuations $`\delta F`$ and $`\delta N_{}`$ are quite insensitive to the choice of constraint.
The fluctuating part of the DOS $`\delta g`$ may be evaluated in the semiclassical (stationary-phase) approximation as a sum over the periodic classical orbits of the system . For closed systems, the sum over periodic orbits is generically not convergent, and a broadening of the energy structure in $`\delta g(E)`$ must be introduced by hand . However, we shall see that for an open system, such as a nanocontact, the periodic orbit sum converges; the finite dwell-time of a particle in an open system introduces a natural energy broadening.
Let us first consider the case of a 2D nanocontact. For a finite radius of curvature $`R`$, there is only one unstable periodic classical orbit (plus harmonics), which moves up and down at the narrowest point of the neck. One obtains
$$\delta g_{\mathrm{sc}}^{2\mathrm{D}}(E)=\frac{2mD^{}}{\pi \hbar ^2k_E}\underset{n=1}{\overset{\mathrm{\infty }}{\sum }}\frac{\mathrm{cos}(2nk_ED^{})}{\mathrm{sinh}(n\chi )},$$
(7)
where the Lyapunov exponent $`\chi `$ of the primitive periodic orbit satisfies $`\mathrm{exp}(\chi )=1+D^{}/R+\sqrt{(1+D^{}/R)^2-1}`$. Eq. (7) diverges when $`\chi \rightarrow 0`$, i.e., when $`R\rightarrow \mathrm{\infty }`$. In that limit, the nanocontact acquires translational symmetry along the $`z`$ axis, so that a generalization of the Gutzwiller formula obtained by Creagh and Littlejohn must be used, which gives a finite result. In this limit, the motion is classically integrable. One can treat small deviations from translational symmetry via perturbation theory in $`1/R`$. The resulting asymptotic behavior for large $`R`$ may be combined with the result \[Eq. (7)\] valid for small $`R`$ to construct the following interpolation formula, valid for arbitrary $`R`$:
$$\delta g_{\mathrm{int}}^{2\mathrm{D}}(E)=\frac{\sqrt{8}mD^{}}{\pi \hbar ^2k_E}\underset{n=1}{\overset{\mathrm{\infty }}{\sum }}\frac{𝒞(2nk_ED^{}-\frac{\pi }{4},\sqrt{\frac{nk_EL^2}{\pi R}})}{\mathrm{sinh}(n\chi )},$$
(8)
where $`𝒞(x,y)\equiv \mathrm{cos}(x)\mathrm{C}(y)-\mathrm{sin}(x)\mathrm{S}(y)`$, with C and S Fresnel integrals. In Eq. (8), the specific shape of the nanocontact was taken to be $`D(z)=D^{}+z^2/R`$. For a discussion of related interpolation formulae, see Ref. . Classically, only the case $`R=\mathrm{\infty }`$ is integrable. But semiclassically, there is a smooth crossover between the strongly chaotic ($`R\rightarrow 0`$) and the nearly integrable ($`R\rightarrow \mathrm{\infty }`$) regimes. The scaling parameter describing this crossover is
$$\alpha =L/\sqrt{\lambda _FR}.$$
(9)
We refer to $`\alpha `$ as the quantum chaos parameter, since the quantum fluctuations of the system correspond to those of a chaotic system when $`\alpha \gg 1`$ and correspond to those of a quasi-integrable system when $`\alpha \ll 1`$.
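Equations (7) and (8) are inexpensive to evaluate. A pure-Python sketch ($`\hbar =m=1`$; the Fresnel integrals are computed by brute-force quadrature rather than a library routine, and all function names are mine):

```python
import math

def lyapunov(Dstar, R):
    # exp(chi) = 1 + D*/R + sqrt((1 + D*/R)^2 - 1)
    x = 1.0 + Dstar / R
    return math.log(x + math.sqrt(x * x - 1.0))

def delta_g_2d(kE, Dstar, R, nmax=100):
    """Periodic-orbit sum of Eq. (7) with hbar = m = 1; the sinh(n chi)
    denominator makes the sum converge geometrically."""
    chi = lyapunov(Dstar, R)
    pref = 2.0 * Dstar / (math.pi * kE)
    return pref * sum(math.cos(2 * n * kE * Dstar) / math.sinh(n * chi)
                      for n in range(1, nmax + 1))

def fresnel_CS(y, pts_per_unit=2000):
    """Fresnel integrals C(y), S(y) by the trapezoidal rule."""
    n = max(2000, int(pts_per_unit * y))
    h = y / n
    C = 0.5 * (1.0 + math.cos(math.pi * y * y / 2.0))
    S = 0.5 * math.sin(math.pi * y * y / 2.0)
    for i in range(1, n):
        t = i * h
        C += math.cos(math.pi * t * t / 2.0)
        S += math.sin(math.pi * t * t / 2.0)
    return C * h, S * h

def script_C(x, y):
    # the combination entering Eq. (8): cos(x) C(y) - sin(x) S(y)
    C, S = fresnel_CS(y)
    return math.cos(x) * C - math.sin(x) * S

# convergence of Eq. (7): the tail beyond nmax is exponentially small
g20 = delta_g_2d(kE=5.0, Dstar=1.0, R=2.0, nmax=20)
g100 = delta_g_2d(kE=5.0, Dstar=1.0, R=2.0, nmax=100)

# y -> infinity: C(y), S(y) -> 1/2, so script_C -> (cos x - sin x)/2
limit = (math.cos(1.234) - math.sin(1.234)) / 2.0
```

The first comparison illustrates the geometric convergence supplied by the $`\mathrm{sinh}(n\chi )`$ denominators; the second checks that for large second argument $`𝒞(x,y)\rightarrow (\mathrm{cos}x-\mathrm{sin}x)/2`$, recovering the cosine form of Eq. (7) up to the $`\pi /4`$ phase.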
Fig. 1 shows a comparison of the semiclassical result $`g_{\mathrm{sc}}=\overline{g}+\delta g_{\mathrm{int}}^{2D}`$ and a numerical calculation of $`g`$ using a recursive Green’s function technique . The agreement of the semiclassical result and the numerical calculation is quite good, even in the extreme quantum limit $`G\sim 2e^2/h`$. The small discrepancy is of the size expected due to diffractive corrections from the sharp corners present in the geometry studied numerically, where the nanocontact was connected to straight wires of width $`k_FD=52`$ for technical reasons.
The denominator $`\mathrm{sinh}n\chi `$ in Eqs. (7) and (8) describes the effects of tunneling. In the limit $`R\gg D^{}`$, the Lyapunov exponent $`\chi \simeq \sqrt{2D^{}/R}`$, and one recovers the WKB approximation of Ref. . In the opposite limit $`R\ll D^{}`$, $`\mathrm{sinh}\chi \simeq D^{}/R`$, so $`\delta g`$ is suppressed relative to the value expected in the WKB approximation (which neglects tunneling) by a factor of $`\sqrt{2R/D^{}}`$. In the adiabatic approximation, the energies of the transverse modes in the point contact are $`\epsilon _n(z)=(\hbar ^2/2m)(\pi n/D(z))^2=\epsilon _n(0)-m\omega _n^2z^2/2+\mathrm{\cdots }`$ and the probability that an electron of energy $`E`$ in mode $`n`$ will be transmitted through the point contact is $`T_n(E)\simeq \left(1+\mathrm{exp}\left\{-2\pi [E-\epsilon _n(0)]/\hbar \omega _n\right\}\right)^{-1}`$. The quality of the conductance quantization thus decreases exponentially with the parameter $`\hbar \omega _n/\mathrm{\Delta }\epsilon _n\simeq \pi ^{-1}\sqrt{2D^{}/R}`$, where $`\mathrm{\Delta }\epsilon _n=\epsilon _n-\epsilon _{n-1}`$, while the DOS fluctuations $`\delta g`$ are suppressed only in inverse proportion to this parameter. The fact that the suppression in each case depends only on the ratio $`D^{}/R`$ implies that the suppression of $`\delta g`$, like the degradation of conductance quantization, is a consequence of tunneling. Indeed, it is the rounding of the DOS due to tunneling that causes the sums over $`n`$ in Eqs. (7) and (8) to converge. That quantum tunneling through a point contact can be expressed purely in terms of the instability $`\chi `$ of the classical periodic orbits within the contact is remarkable.
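The transmission formula can be exercised in a toy Landauer calculation. A sketch ($`\hbar =m=1`$, conductance in units of $`2e^2/h`$; the saddle frequency $`m\omega _n^2=4\epsilon _n(0)/(D^{}R)`$ is an assumption obtained by expanding $`1/D(z)^2`$ for the shape $`D(z)=D^{}+z^2/R`$ quoted above):

```python
import math

def transmission(E, n, Dstar, R):
    """Adiabatic transmission through the saddle of mode n (hbar = m = 1):
    T_n(E) = 1/(1 + exp{-2 pi [E - eps_n(0)]/w_n}), with saddle frequency
    w_n = sqrt(4 eps_n(0)/(D* R)) from expanding eps_n(z) about z = 0."""
    eps_n0 = 0.5 * (math.pi * n / Dstar) ** 2
    w_n = math.sqrt(4.0 * eps_n0 / (Dstar * R))
    x = -2.0 * math.pi * (E - eps_n0) / w_n
    if x > 500.0:        # avoid overflow; T is then 0 to machine precision
        return 0.0
    return 1.0 / (1.0 + math.exp(x))

def conductance(E, Dstar, R, nmax=10):
    # Landauer conductance in units of 2e^2/h
    return sum(transmission(E, n, Dstar, R) for n in range(1, nmax + 1))
```

For $`R\gg D^{}`$ the conductance sits on flat integer plateaus between mode thresholds, while for $`R\ll D^{}`$ a mode well below threshold still transmits appreciably, illustrating the exponential degradation of quantization discussed above.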
Let us now consider the experimentally relevant case of an axially-symmetric 3D nanocontact. For finite $`R`$, all classical periodic orbits lie in the plane of the narrowest cross section of the contact; however there are now countably many distinct families of singly-degenerate periodic orbits , labeled by their winding number $`w`$ about the axis of symmetry $`z`$ and by the number of vertices $`v\ge 2w`$. The interpolation formula for $`\delta g`$, describing the crossover from the chaotic regime $`\alpha \gg 1`$ to the integrable regime $`\alpha \ll 1`$, is
$`\delta g_{\mathrm{int}}^{3\mathrm{D}}(E)`$ $`=`$ $`{\displaystyle \frac{m}{\hbar ^2\sqrt{\pi k_E}}}{\displaystyle \underset{w=1}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \underset{v=2w}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \frac{f_{vw}L_{vw}^{3/2}}{v^2\mathrm{sinh}(v\chi _{vw}/2)}}`$ (11)
$`\times 𝒞(k_EL_{vw}-\frac{3v\pi }{2},\alpha \sqrt{v\mathrm{sin}\varphi _{vw}k_E/k_F}),`$
where $`\varphi _{vw}=\pi w/v`$, $`f_{vw}=1+\theta (v-2w)`$, and
$`\mathrm{exp}(\chi _{vw})=1+{\displaystyle \frac{L_{vw}\mathrm{sin}\varphi _{vw}}{vR}}+\sqrt{\left(1+{\displaystyle \frac{L_{vw}\mathrm{sin}\varphi _{vw}}{vR}}\right)^2-1},`$
with $`L_{vw}=vD^{}\mathrm{sin}\varphi _{vw}`$ the length of a periodic orbit. We emphasize that the double sum over $`w`$ and $`v`$ in Eq. (11) converges due to the finite Lyapunov exponent $`\chi _{vw}`$. In Eq. (11), higher-order terms in the small parameter $`1/k_FD^{}`$ ($`<0.21`$ for contacts of nonzero conductance) have been omitted.
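The orbit geometry entering Eq. (11) is simple to tabulate: each family is a pair $`(v,w)`$ with length $`L_{vw}`$ and Lyapunov exponent $`\chi _{vw}`$ as given above. A sketch that sums the magnitudes of the amplitudes, confirming numerically that the $`\mathrm{sinh}`$ factor makes the double sum converge (function names are mine; $`D^{}=R=1`$ is an arbitrary test geometry):

```python
import math

def orbit_data(v, w, Dstar, R):
    """Length L_vw = v D* sin(pi w/v) and Lyapunov exponent chi_vw of the
    (v, w) family, from the expressions below Eq. (11)."""
    phi = math.pi * w / v
    L = v * Dstar * math.sin(phi)
    x = 1.0 + L * math.sin(phi) / (v * R)
    return L, math.log(x + math.sqrt(x * x - 1.0))

def amplitude_sum(Dstar, R, vmax):
    """Sum of f_vw L_vw^{3/2} / (v^2 sinh(v chi_vw/2)) over all families with
    v <= vmax: a positive majorant of the double sum in Eq. (11)."""
    total = 0.0
    for w in range(1, vmax // 2 + 1):
        for v in range(2 * w, vmax + 1):
            L, chi = orbit_data(v, w, Dstar, R)
            f = 1.0 if v == 2 * w else 2.0   # f_vw = 1 + theta(v - 2w)
            total += f * L ** 1.5 / (v ** 2 * math.sinh(0.5 * v * chi))
    return total

s40 = amplitude_sum(1.0, 1.0, 40)
s80 = amplitude_sum(1.0, 1.0, 80)
L21, chi21 = orbit_data(2, 1, 1.0, 1.0)   # diameter orbit: L = 2 D*
```

Doubling the cutoff changes the majorant by only a few percent, so the periodic-orbit sum is indeed convergent without any broadening put in by hand.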
The mesoscopic force and charge fluctuations are calculated by inserting Eq. (11) into Eqs. (1), (2) and (6). In order to demonstrate the universality of the force oscillations, it is necessary to make some physically reasonable assumptions regarding the scaling of the geometry when the nanowire is elongated. It is natural to assume that the deformation occurs predominantly in the narrowest section, where the wire is weakest. This assumption, combined with the constraint of incompressibility $`\overline{N}_{}=\text{const.}`$, implies $`D^2L\simeq \text{const.}`$ Furthermore, the radius of curvature $`R\simeq L^2/(D-D^{})`$, where $`D`$ is the diameter at $`\pm L/2`$, which implies $`\partial \mathrm{ln}R/\partial \mathrm{ln}L=2+(\partial \mathrm{ln}D^{}/\partial \mathrm{ln}L)/(D/D^{}-1)\simeq 2`$. Thus the quantum chaos parameter $`\alpha \simeq \mathrm{const}.`$ under elongation.
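This scaling argument can be checked numerically under the stated assumptions ($`D^2L=\text{const.}`$, the end diameter $`D`$ held fixed, and $`R`$ of order $`L^2/(D-D^{})`$; the numerical values below are invented for illustration):

```python
import math

lam_F = 1.0      # Fermi wavelength (sets the unit of length)
D_end = 5.0      # diameter at z = +/- L/2, held fixed (illustrative value)
c = 1.0          # D*^2 L = c: incompressibility of the neck

def alpha_of(L):
    """Quantum chaos parameter alpha = L/sqrt(lambda_F R) under elongation,
    with D* = sqrt(c/L) and R ~ L^2/(D - D*) (order-of-magnitude relation)."""
    Dstar = math.sqrt(c / L)
    R = L * L / (D_end - Dstar)
    return L / math.sqrt(lam_F * R)

a1 = alpha_of(4.0)
a2 = alpha_of(8.0)   # doubling the elongation changes alpha only slightly
```

Doubling the elongation changes $`\alpha `$ by only a few percent, consistent with $`\partial \mathrm{ln}\alpha /\partial \mathrm{ln}L\approx 0`$.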
Using these assumptions about the scaling of the geometry with elongation, the derivative with respect to $`L`$ in Eq. (6) can be evaluated; the general formula for $`\delta F`$ is rather lengthy, and will be presented elsewhere. Here we give only the limiting behavior of the leading-order semiclassical results:
$`\delta F`$ $`\stackrel{\alpha \gg 1}{\longrightarrow }`$ $`{\displaystyle \frac{\epsilon _F}{L}}{\displaystyle \underset{w=1}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \underset{v=2w}{\overset{\mathrm{\infty }}{\sum }}}\sqrt{{\displaystyle \frac{L_{vw}}{\lambda _F}}}{\displaystyle \frac{f_{vw}\mathrm{sin}(k_FL_{vw}-b_v)}{v^2\mathrm{sinh}(v\chi _{vw}/2)}},`$ (15)
$`\delta F`$ $`\stackrel{\alpha \ll 1}{\longrightarrow }`$ $`{\displaystyle \frac{2\epsilon _F}{\lambda _F}}{\displaystyle \underset{w=1}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \underset{v=2w}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \frac{f_{vw}}{v^2}}\mathrm{sin}(k_FL_{vw}-3v\pi /2),`$ (19)
where $`b_v=3v\pi /2-\pi /4`$. $`\delta F`$ is an oscillatory function of $`k_FD^{}`$; the conductance of the contact is also determined by $`k_FD^{}`$, indicating that the force oscillations are synchronized with the conductance steps, as shown in Ref. and observed experimentally .
The rms amplitude of the force oscillations may be readily calculated from Eqs. (15) and (19). We find that $`\text{rms}\delta F`$ is independent of $`D^{}`$, and, apart from small corrections due to tunneling when $`R\lesssim D^{}`$, depends only on the quantum chaos parameter $`\alpha `$:
$$\text{rms}\delta F=\{\begin{array}{cc}0.36208\alpha ^{-1}\frac{\epsilon _F}{\lambda _F},& \alpha \gg 1,\\ & \\ 0.58621\frac{\epsilon _F}{\lambda _F},& \alpha \ll 1.\end{array}$$
(20)
The result for $`\alpha \ll 1`$ agrees with the result for a straight wire ($`\alpha =0`$) derived previously by Höppler and Zwerger . Eq. (20) is also consistent with previous results based on the WKB approximation . For a realistic geometry of the nanowire , one expects both the radius of curvature and the elongation to be on the scale of $`\lambda _F`$, implying $`\alpha \sim 1`$. There is also experimental evidence of exceptional geometries with $`R\gg \lambda _F`$, implying $`\alpha \ll 1`$. Thus the mesoscopic oscillations of the cohesive force are expected to be universal $`\text{rms}\delta F\sim \epsilon _F/\lambda _F\sim 1\,\mathrm{nN}`$ in monovalent metals, in agreement with all available experimental data .
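The $`\alpha \ll 1`$ coefficient in Eq. (20) can be recovered from Eq. (19) by treating the oscillatory factors of distinct $`(v,w)`$ families as incoherent, each contributing $`1/2`$ on average (an assumption: cross terms between different orbit lengths are taken to average to zero over $`k_FD^{}`$). A quick check in units of $`\epsilon _F/\lambda _F`$:

```python
import math

def rms_coefficient(vmax=2000):
    """rms of delta F from Eq. (19) (alpha << 1), in units of eps_F/lambda_F,
    treating the sin(k_F L_vw - 3 v pi/2) factors of distinct (v, w) families
    as incoherent (each sin^2 contributes 1/2; cross terms assumed to vanish
    on average):  rms = 2 * sqrt( (1/2) * sum_w sum_{v>=2w} f_vw^2 / v^4 )."""
    s = 0.0
    for w in range(1, vmax // 2 + 1):
        for v in range(2 * w, vmax + 1):
            f = 1.0 if v == 2 * w else 2.0   # f_vw = 1 + theta(v - 2w)
            s += f * f / float(v) ** 4
    return 2.0 * math.sqrt(0.5 * s)

coeff = rms_coefficient()
```

The truncated double sum gives $`\approx 0.586`$, in agreement with the quoted coefficient 0.58621.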
In nanowires lacking axial symmetry, e.g., with an aspect ratio $`a\gg 1`$, one can show that $`\text{rms}\delta F\sim \sqrt{a}\,\epsilon _F/\lambda _F`$. However, such shapes are energetically highly unfavorable due to the increased surface energy. Eq. (20) is therefore expected to describe all spontaneously occurring nanocontacts.
Eq. (11) and the assumption $`D^2L=\text{const.}`$ imply that the force and charge oscillations are proportional to each other in 3D nanocontacts: $`\delta F=\epsilon _F\delta N_{}/L+𝒪(1/k_FD^{})`$. In an interacting system, the charge oscillations are screened , and the Hartree correction to the grand canonical potential is bounded by $`\mathrm{\Delta }\mathrm{\Omega }<\delta N_{}^2/2g(\epsilon _F)`$. Evaluating the elementary sums over periodic orbits, we find that the average interaction correction $`\mathrm{\Delta }\mathrm{\Omega }`$ is small compared to the mesoscopic oscillations of $`\mathrm{\Omega }`$:
$$\frac{\mathrm{\Delta }\mathrm{\Omega }}{\text{rms}\delta \mathrm{\Omega }}<\frac{1.36791}{k_FD^{}},$$
(21)
where $`k_FD^{}>4.81`$ for a contact with nonzero conductance. This result justifies the use of the independent-electron approximation .
In conclusion, we have shown that trace formulae à la Gutzwiller converge and give quantitatively accurate results for the equilibrium quantum fluctuations in point contacts and nanowires. Using this approach, we have shown that the cohesive force of a metallic nanocontact, modeled as a hard-wall constriction in an electron gas, exhibits universal mesoscopic oscillations whose size $`\text{rms}\delta F\sim \epsilon _F/\lambda _F`$ is independent of the conductance and shape of the contact, and depends only on a dimensionless parameter $`\alpha `$ characterizing the degree of quantum chaos. Our prediction of universality is consistent with all experiments performed to date .
We wish to thank R. Blümel, J. Morris, R. Prange, and H. Primack for useful discussions. This work was supported in part by the National Science Foundation under Grant No. PHY94–07194. F. K. acknowledges support from grant SFB 276 of the Deutsche Forschungsgemeinschaft. J. B. acknowledges support from Swiss National Foundation PNR 36 “Nanosciences” grant # 4036-044033.
# Distinguishing Between CDM and MOND: Predictions for the Microwave Background
## 1 Introduction
Central to cosmology is the resolution of the mass discrepancy problem. In the current standard picture, the discrepancy between observed luminous mass and inferred dynamical mass in extragalactic systems is attributed to the presence of nonbaryonic cold dark matter (CDM). However, the predictions of CDM (e.g., Navarro, Frenk, & White 1997) fail the precision tests afforded by the rotation curves of low surface brightness galaxies (McGaugh & de Blok 1998a; Moore et al. 1999). In contrast, the modified Newtonian dynamics (MOND) introduced by Milgrom (1983) as an alternative to dark matter accurately predicted the behavior of these systems well in advance of the observations (McGaugh & de Blok 1998b; de Blok & McGaugh 1998; see also Begeman, Broeils, & Sanders 1991; Sanders 1996; Sanders & Verheijen 1998).
In conventional cosmology, CDM is required for two fundamental reasons. One is that the dynamically inferred mass density of the universe greatly exceeds that appropriate for baryons as determined from primordial nucleosynthesis ($`\mathrm{\Omega }_m\gg \mathrm{\Omega }_b`$; e.g., Copi, Schramm, & Turner 1995). The other is that gravitational formation of large scale structure proceeds slowly (as $`t^{2/3}`$) in an expanding universe. It is only possible to reach the rich amount of structure observed at $`z=0`$ from the smooth ($`\mathrm{\Delta }T/T\sim 10^{-5}`$) microwave background at $`z\approx 1400`$ if there is a nonbaryonic component whose density fluctuations can grow unimpeded by radiation pressure.
It does appear possible to explain these points with MOND. MOND is an alteration of the force law at very small acceleration scales, $`a<a_0=1.2\times 10^{-8}\mathrm{cm}\,\mathrm{s}^{-2}`$. The low acceleration scale applies in most disk galaxies and the universe as a whole (note that the modification is not on some length scale; the predictions of MOND therefore do not vary by many orders of magnitude from the scales of galaxies to that of the entire universe). In the context of MOND, conventional measures of the dynamical mass density of the universe are overestimated by a factor which depends on the typical acceleration. Accounting for this leads to a very low density universe with $`\mathrm{\Omega }_m\approx \mathrm{\Omega }_b`$ (McGaugh & de Blok 1998b; Sanders 1998).
Perhaps the simplest possible MOND universe one can consider is one in which $`a_0`$ remains constant in time (Felten 1984; Sanders 1998). In this case, the universe does not enter the MOND regime of very low acceleration until $`1+z\approx 2.33(\mathrm{\Omega }_mh/0.02)^{-1/3}`$, or $`z\approx 1.6`$. Everything is normal at higher redshift, so conventional results like primordial nucleosynthesis and recombination are retained. However, small regions can enter the MOND regime at early times as the phase transition begins (Sanders 1998). Once radiation releases its hold on the baryons ($`z\approx 200`$ in a low density universe), these regions will behave as if they possess a large quantity of dark matter. Consequently, structure forms very rapidly. Indeed, early structure formation is another promising way to distinguish between CDM and MOND (Sanders 1998). For the present purpose, it suffices to realize that if the MOND force law is operative, structure forms much more rapidly than the Newtonian $`t^{2/3}`$. It is not necessary to have CDM for a rich amount of large scale structure to grow from an initially smooth cosmic microwave background.
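For the nominal numbers used below ($`\mathrm{\Omega }_m=\mathrm{\Omega }_b\approx 0.02`$, $`h=0.75`$), the transition scaling, read as $`1+z\approx 2.33(\mathrm{\Omega }_mh/0.02)^{-1/3}`$ so that lower density gives an earlier (higher-redshift) transition, reproduces the quoted $`z\approx 1.6`$ directly:

```python
def z_mond(Omega_m, h):
    """Redshift below which a constant-a0 universe is in the MOND regime,
    reading the quoted scaling as 1 + z = 2.33 (Omega_m h / 0.02)^(-1/3)."""
    return 2.33 * (Omega_m * h / 0.02) ** (-1.0 / 3.0) - 1.0

z = z_mond(0.02, 0.75)   # Omega_m = Omega_b ~ 0.02, h = 0.75, the values used below
```

With these inputs $`z\approx 1.56`$, and a lower-density universe transitions at higher redshift, as expected.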
In this paper I begin to explore how anisotropies in the microwave background might help to distinguish between CDM and MOND. As a starting point, for MOND I make two basic assumptions: that $`a_0`$ is constant, and the background metric is flat in the usual Robertson-Walker sense. This is not the only possibility for a MOND universe (Milgrom 1989). The acceleration constant may vary with time, and the nature of the background geometry is unclear. Still, this seems like the most obvious point of departure. In essence, I am examining the microwave background anisotropy properties of a conventional cosmology in which all the matter is baryonic in the amount specified by primordial nucleosynthesis, but the amplitude of the anisotropies is not constrained to be large by the slow growth of structure. Failure of the assumptions should result in a more pronounced effect on the microwave background than what I discuss below, making it easier to distinguish between CDM and MOND.
## 2 The Baryon Fraction Test
The most obvious difference between CDM and MOND in the context of the microwave background anisotropies is the baryon fraction. CDM is thought to outweigh ordinary baryonic matter by a factor of $`\sim 10`$ (Evrard, Metzler, & Navarro 1996). The precise value depends on the Hubble constant and on the type of system examined (McGaugh & de Blok 1998a). If instead MOND is the cause of the observed mass discrepancies, there is no CDM. The difference between a baryon fraction $`f_b\approx 0.1`$ and unity should leave a distinctive imprint on the microwave background.
The main impact of varying the baryon fraction is on the relative amplitude of the peaks in the angular power spectrum of the microwave background (as expanded in spherical harmonics). In general, increasing $`f_b`$ increases the baryon drag, which enhances the amplitude of compressional (odd numbered) peaks while suppressing rarefaction (even numbered) peaks (Hu, Sugiyama, & Silk 1997). The precise shape of the power spectrum is thus very sensitive to $`f_b`$.
In order to investigate this aspect of the problem, I have used CMBFAST (Seljak & Zaldarriaga 1996) to compute the expected microwave background power spectrum in several representative cases. These have reasonable baryon fractions for each case and baryon-to-photon ratios consistent with primordial nucleosynthesis. Several specific cases are illustrated in Figure 1. These have $`\mathrm{\Omega }_b=0.01`$, 0.02, and 0.03 with $`\mathrm{\Omega }_{\mathrm{CDM}}=0.2`$ or 0 (so $`f_b=0.05`$, 0.1, 0.15 or 1). Other model parameters are held fixed ($`h=0.75`$, $`T_{CMB}=2.726`$ K, $`Y_p=0.24`$, $`N_\nu `$ = 3), and adiabatic initial conditions are assumed. As a check, models with $`\mathrm{\Omega }_{\mathrm{CDM}}=0.3`$ and 0.4 were also run with the same baryon fractions and $`H_0`$ scaled to maintain the same baryon-to-photon ratio. As expected, these resulted in power spectra which are indistinguishable in shape.
I am interested in the shape of the power spectrum, not the absolute positions of the peaks. The latter depends mostly on the scale and geometry of the universe. For purposes of computation, I assume the universe is flat, with $`\mathrm{\Omega }_\mathrm{\Lambda }=1-\mathrm{\Omega }_{\mathrm{CDM}}-\mathrm{\Omega }_b`$. This results in a CDM universe close to the current “concordant” model (e.g., Ostriker & Steinhardt 1995). In the case of MOND, the resulting model is very close to the de Sitter case. This is a plausible case for a MOND universe (indeed, the relation of inertial mass to a finite vacuum energy density has been suggested as a possible physical basis for MOND: Milgrom 1999), but is by no means the only possibility. A model with no cosmological constant and $`\mathrm{\Omega }_m=\mathrm{\Omega }_b\approx 0.02`$ is plausible, but would be very open if the geometry were Robertson-Walker. The position of the first peak in the power spectrum moves to smaller angular scales in open universes because of the dependence of the angular diameter distance on $`\mathrm{\Omega }_m`$. For such low $`\mathrm{\Omega }_m`$ with $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, the position of the first peak occurs at $`\ell _1>1000`$. This is inconsistent with recent observations which constrain $`\ell _1`$ to be near 200 (Miller et al. 1999). However, the geometry in MOND might not be Robertson-Walker, so the position of the first peak is not uniquely specified. It is important to realize that while the position of the first peak provides an empirical constraint on the geometry traversed by the microwave background photons, in the context of MOND this does not necessarily translate into a measure of $`\mathrm{\Omega }_m`$.
The test is therefore not in the absolute positions of the peaks, but in the shape of the spectrum. As the baryon fraction becomes very high ($`f_b\rightarrow 1`$), the even numbered peaks are suppressed to the point of disappearing. (Since it is possible that neutrinos have significant mass, I also consider a model with $`\mathrm{\Omega }_{\mathrm{CDM}}=0`$ and $`\mathrm{\Omega }_\nu =\mathrm{\Omega }_b=0.02`$. This sharpens the peaks noticeably, but is otherwise similar to the pure baryon models.) One is left with a spectrum that looks rather like a stretched version of the standard CDM case.
The difference between the CDM and MOND cases is obvious by inspection (Figure 1). However, from an observer’s perspective, it is not so easy to distinguish them. The second peak has disappeared in the MOND case, so what would have been the third peak we would now count as the second peak. The absolute positions of the peaks are not specified a priori by either theory. The absolute amplitude in the CDM case is constrained by the need to match large scale structure at $`z=0`$. The mechanics to do a similar exercise with MOND do not currently exist, so the absolute amplitude is also not specified a priori. We must therefore rely on the relative amplitudes and positions of the peaks to measure the difference. Since the third peak becomes the second peak in MOND, the observable difference is rather more difficult to perceive than one might have expected, at least for the assumptions made here.
The ratios of the positions and amplitudes of the peaks are given in Table 1. The peak position ratios depend on the sound horizon at recombination, which should not depend on MOND (for constant $`a_0`$) because this is well before the universe approaches the low acceleration regime. Other parameters do matter a bit, which can complicate matters.
One difference we could hope to distinguish is in the ratio of the positions of the first and second peaks. In the CDM models, $`\ell _2/\ell _1\approx 2.35`$, while in the case of MOND $`\ell _2/\ell _1\approx 2.66`$. This requires a positional accuracy determination of $`5\%`$ beyond $`\ell >500`$, no small feat.
If we can recognize that the second peak is actually missing, so that what we called the second peak in MOND actually corresponds to the third peak in CDM, then the distinction is greater: for CDM, $`\ell _3/\ell _1\approx 3.6`$, which should be compared to MOND’s 2.66. It is not clear how to do this observationally. Once the position of the first peak is tied down, the given ratio predicts the expected position of the second observable peak (under the assumptions made here). This is not very different in the two cases.
The ratios of the positions of the next observable peaks help not at all. For CDM, $`\ell _3/\ell _2=1.54`$. For MOND, $`\ell _3/\ell _2=1.57`$.
The ratio of the absolute amplitudes of the peaks can also distinguish the two cases, but requires comparable accuracy. In CDM, $`(C_{\ell ,1}/C_{\ell ,2})_{abs}\approx 1.7`$, while in MOND $`(C_{\ell ,1}/C_{\ell ,2})_{abs}\approx 2.4`$. This may appear to be a substantial difference, but recall that what is measured is the temperature anisotropy. Since $`\mathrm{\Delta }T\propto \sqrt{C_{\ell }}`$, one requires $`7\%`$ accuracy to distinguish the two cases at the $`2\sigma `$ level. The amplitude ratios of the second and third peaks have a bit more power to distinguish between CDM and MOND, but are more difficult to measure. The precise value of this ratio is very sensitive to $`f_b`$ in the CDM case. In CDM, $`(C_{\ell ,2}/C_{\ell ,3})_{abs}<1.6`$ for $`f_b>0.05`$, while in MOND $`(C_{\ell ,2}/C_{\ell ,3})_{abs}\approx 1.9`$.
Using the absolute amplitude of the peak heights does not utilize all the information available. In the purely baryonic MOND cases, there is a longer drop from the first peak to the first trough, and a shorter rise to the second peak than in the CDM cases. Therefore, measuring the peak heights relative to the bottom of the intervening trough may be a better approach. To do this, we define $`(C_{\ell ,n}/C_{\ell ,n+1})_{rel}=(C_{\ell ,n}-C_{\ell ,min})/(C_{\ell ,n+1}-C_{\ell ,min})`$ to be the ratio of the amplitudes at maxima $`n`$ and $`n+1`$ less the amplitude of the intervening minimum. This does indeed appear more promising. The purely baryonic MOND cases all have $`(C_{\ell ,1}/C_{\ell ,2})_{rel}>5`$, while the CDM cases have $`(C_{\ell ,1}/C_{\ell ,2})_{rel}<4`$ (Table 1). This is a nice test, for in most cases this ratio falls well on one side or the other (for $`\mathrm{\Omega }_b=0.02`$, $`(C_{\ell ,1}/C_{\ell ,2})_{rel}^{MOND}/(C_{\ell ,1}/C_{\ell ,2})_{rel}^{CDM}=2`$).
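Diagnostics of this kind are easy to automate for any tabulated spectrum. A sketch on a synthetic two-peak spectrum (the Gaussian peaks, their positions, and their amplitudes are invented for illustration and are not CMBFAST output):

```python
import math

def extrema(C):
    """Indices of interior local maxima and minima of a sampled spectrum."""
    maxima, minima = [], []
    for i in range(1, len(C) - 1):
        if C[i] > C[i - 1] and C[i] >= C[i + 1]:
            maxima.append(i)
        elif C[i] < C[i - 1] and C[i] <= C[i + 1]:
            minima.append(i)
    return maxima, minima

def peak_diagnostics(ells, C):
    """l2/l1, (C1/C2)_abs, and the trough-referenced (C1/C2)_rel."""
    maxima, minima = extrema(C)
    i1, i2 = maxima[0], maxima[1]
    imin = next(i for i in minima if i1 < i < i2)   # first intervening trough
    return (ells[i2] / ells[i1],
            C[i1] / C[i2],
            (C[i1] - C[imin]) / (C[i2] - C[imin]))

# synthetic two-peak spectrum: invented Gaussian peaks at l = 220 and 585,
# chosen so that l2/l1 = 2.66, the MOND-like position ratio quoted above
ells = list(range(2, 1001))
C = [math.exp(-((l - 220) / 80.0) ** 2)
     + 0.42 * math.exp(-((l - 585) / 80.0) ** 2) for l in ells]

r_pos, r_abs, r_rel = peak_diagnostics(ells, C)
```

The trough-referenced ratio always exceeds the absolute one whenever the trough amplitude is nonzero, which is part of why it is the more discriminating statistic.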
By inspection of Figure 1, one might also think that the width of the first peak could be a discriminant, as measured at the amplitude of the first minimum. This is a bit more sensitive to how other parameters shift or stretch the power spectrum. It is also very sensitive to the neutrino mass. Baryonic models with zero neutrino mass have perceptibly broader peaks than the equivalent CDM model, but zero CDM models with finite neutrino mass have peaks which are similar in width to those in the CDM models.
## 3 Assumptions and Caveats
I have made some predictions for the microwave background temperature anisotropies which should, with sufficiently accurate measurements, distinguish between CDM and MOND dominated universes. The predictions are based on some simple assumptions, most notably that the MOND acceleration constant $`a_0`$ does not vary substantially with time, and the geometry of the universe is flat in the Robertson-Walker sense. Neither of these need hold in MOND, but plausibly may (Sanders 1998), making this the obvious point of departure for this discussion. I have endeavored to make the most conservative assumptions in the sense that failures of these assumptions should lead to microwave background anisotropies more deviant from the standard CDM case, and hence more readily perceptible, than the cases I have discussed.
It should be noted that the signature of a purely baryonic universe is not necessarily reflected in the usual way in the power spectrum of large scale structure at $`z=0`$ \[$`P(k)`$ instead of $`C_{\ell }`$\]. The calculation for the microwave background power spectrum can be made under the assumption that MOND is not yet important at the epoch of recombination. It certainly is relevant by $`z=0`$. The scale which is nonlinear now is much larger in MOND than in CDM. The rapid nonlinear growth of structure seems likely to wash out the bumps and wiggles that would otherwise be imprinted on and preserved in the power spectrum of large scale structure in the standard framework. So while one expects a definite signature of baryon domination in the microwave background, one does not necessarily expect this to be reflected in $`P(k)`$.
A conventional effect which may be different in the CDM and MOND cases is reionization. I have assumed that the background radiation encounters effectively zero optical depth along the way to us. However, the optical depth can be nonzero if the universe is reionized early enough, thus perturbing the signal in the microwave background (cf. Peebles & Juszkiewicz 1998). Structure forms faster in MOND than in CDM, so this is a greater concern. However, the degree to which it happens depends on the details of how stars and other potential ionizing sources actually form, which is not understood in either case. The main effect of a significant optical depth is to wash out the anisotropy signal. This should not much perturb the observational signatures I have discussed, which focus on the detailed structure of the peaks relative to one another. In purely baryonic models, it is conceivable that the amplitude of the second peak will be amplified by this process, which formally would invalidate the test based on the ratio of the peak-to-trough amplitudes. However, such a microwave background power spectrum would be clearly distinct from the standard CDM case.
The integrated Sachs-Wolfe effect is another matter which may be affected by the rapid growth of structure in MOND. How much depends on the unknown details of the timing. Matter domination does not occur in MOND until $`z\sim 200`$ because of the low mass density of a baryon-only universe. Growing potentials vary rapidly, but there is not a tremendous amount of time between then and $`z\sim 10`$ when $`L^{\ast }`$ galaxy mass objects have collapsed (Sanders 1998). So it is not obvious how strong this effect will be, though it can potentially have a significant impact.
It seems unlikely that there are any effects which will cause CDM and MOND universes to be indistinguishable once sufficiently accurate observations of the microwave background are obtained. For the simple assumptions I have made, the distinction is surprisingly subtle, but certainly present. Any breakdown of these assumptions should lead to a greater distinction between the two. However, it remains a substantial challenge to understand some of the basic effects which can impact the microwave background in the context of MOND.
## 4 Conclusions
Modern cosmological models require copious amounts of nonbaryonic cold dark matter for well established reasons. Yet the existence of CDM has yet to be confirmed. The alternative to dark matter postulated by Milgrom (1983), MOND, has long had considerable success in describing the rotation curves of spiral galaxies (Begeman et al. 1991; Sanders 1996; Sanders & Verheijen 1998), a fact which has no explanation in the standard framework. Moreover, MOND successfully predicted, a priori, the behavior of low surface brightness galaxies (McGaugh & de Blok 1998b; de Blok & McGaugh 1998), a test which CDM models fail (McGaugh & de Blok 1998a; Moore et al. 1999). Yet MOND has no clear cosmology.
In this paper, I have attempted to make some predictions for the temperature anisotropies in the microwave background which might potentially discriminate between CDM and MOND dominated cosmologies. In this context, the essential difference between the two is the baryon fraction ($`f_b\sim 0.1`$ for CDM and $`f_b=1`$ for MOND). I have used this fact to examine the differences expected for microwave background observations in as conservative and model independent a way as possible.
Upcoming experiments to measure the anisotropies of the microwave background to high precision should be able to distinguish between CDM and MOND. For the simple assumptions investigated here, the observational signatures are surprisingly subtle, requiring high accuracy (i.e., peak position or amplitude to $`\sim 5\%`$ at $`\ell >500`$). Perhaps the most promising test is the ratio of peak-to-trough amplitudes of the first two peaks, with $`(C_{\ell ,1}/C_{\ell ,2})_{rel}<4`$ in plausible CDM models and $`(C_{\ell ,1}/C_{\ell ,2})_{rel}>5`$ in MOND.
These predictions are offered in the hope of clearly distinguishing between CDM and MOND in the near future.
no-problem/9907/solv-int9907013.html
# Lattice geometry of the Hirota equation
## 1 Introduction
The integrable discrete geometry deals with lattice submanifolds described by integrable equations. Among the integrable discrete (difference) equations an important role is played by the Hirota equation, which is the discrete analog of the two-dimensional Toda system. Both the Toda system and the Hirota equation are important in the theory of integrable equations and in their applications. It turns out that the two-dimensional Toda system was studied in classical differential geometry and describes the so-called Laplace sequences of two-dimensional conjugate nets.
The lattice geometric interpretation of the discrete Toda system was found in and is based on the observation that the discrete analog of the conjugate net on a surface, which is given by the two-dimensional lattice made of planar quadrilaterals , allows for definition of the corresponding Laplace sequence of discrete conjugate nets. Since then, the multidimensional quadrilateral lattice (multidimensional lattice made of planar quadrilaterals – the integrable discrete analog of a multidimensional conjugate net) became one of the central notions of the integrable discrete geometry. In particular, many classical results of the theory of conjugate nets and of their reductions have been generalized recently to the discrete level .
The goal of the present article is to reinterpret and expand results obtained in using new notions provided by the general theory of quadrilateral lattices. Different formulations of the Hirota equations are reviewed and, for each of them, the geometric interpretation of the corresponding functions is given.
In Sections 2 and 3 we reformulate, using more convenient notation, results found in . In Section 4 we present the discrete analog of the standard version of the Toda system as an equation governing Laplace transformations of the rotation coefficients. Then in Section 5, based on the geometric interpretation of the $`\tau `$–function of the quadrilateral lattice given in , we show that the $`\tau `$–functions of the Laplace sequence of quadrilateral lattices solve the original Hirota form of the discrete Toda system; this last result fills in the missing point of the paper .
## 2 Laplace transformations of quadrilateral lattices
###### Definition 2.1.
A two-dimensional quadrilateral lattice is a mapping of the two-dimensional integer lattice into the $`M`$-dimensional projective space such that the elementary quadrilaterals of the lattice are planar:
$$x:\mathbb{Z}^2\to \mathbb{P}^M,\qquad T_1T_2x\in \langle x,T_1x,T_2x\rangle .$$
In the above definition $`T_i`$, $`i=1,2`$, denotes the shift operator along the $`i`$-th direction of the lattice.
In the non-homogenous coordinates of the projective space a two-dimensional quadrilateral lattice is represented by the mapping $`𝒙:\mathbb{Z}^2\to \mathbb{R}^M`$ satisfying the Laplace equation
$$\mathrm{\Delta }_1\mathrm{\Delta }_2𝒙=(T_1A_{12})\mathrm{\Delta }_1𝒙+(T_2A_{21})\mathrm{\Delta }_2𝒙,$$
(2.1)
which is equivalent to the planarity condition; here $`\mathrm{\Delta }_i=T_i-1`$, $`i=1,2`$, is the partial difference operator, and the functions $`A_{12},A_{21}:\mathbb{Z}^2\to \mathbb{R}`$ define the position of the point $`T_1T_2𝒙`$ with respect to the points $`𝒙`$, $`T_1𝒙`$ and $`T_2𝒙`$.
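As a quick numerical illustration (not from the paper; the lattice points and coefficient values below are arbitrary), one propagation step of the Laplace equation can be coded directly, and planarity of the resulting quadrilateral checked with a determinant:

```python
# Sketch (not from the paper): one propagation step of the Laplace
# equation and a numerical planarity check in R^3.  The lattice points
# and the values used for T1A12 and T2A21 are arbitrary.

def laplace_step(x, t1x, t2x, t1a12, t2a21):
    """T1T2x from Delta_1 Delta_2 x = (T1 A12) Delta_1 x + (T2 A21) Delta_2 x."""
    d1 = [t1x[k] - x[k] for k in range(3)]   # Delta_1 x
    d2 = [t2x[k] - x[k] for k in range(3)]   # Delta_2 x
    return [x[k] + (1.0 + t1a12) * d1[k] + (1.0 + t2a21) * d2[k]
            for k in range(3)]

def det3(u, v, w):
    """3x3 determinant; it vanishes iff u, v, w are coplanar."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

x, t1x, t2x = [0.0, 0.0, 0.0], [1.0, 0.2, -0.3], [0.1, 1.0, 0.5]
t12x = laplace_step(x, t1x, t2x, t1a12=0.7, t2a21=-0.4)
d1 = [t1x[k] - x[k] for k in range(3)]
d2 = [t2x[k] - x[k] for k in range(3)]
d12 = [t12x[k] - x[k] for k in range(3)]
coplanarity = det3(d1, d2, d12)   # zero: the quadrilateral is planar
```

The vanishing determinant reflects the fact that $`T_1T_2𝒙-𝒙`$ is, by construction, a linear combination of $`\mathrm{\Delta }_1𝒙`$ and $`\mathrm{\Delta }_2𝒙`$.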
Planarity of elementary quadrilaterals implies that the ”tangent” lines containing opposite sides of an elementary quadrilateral intersect. The Laplace transformation $`\mathcal{L}_{ij}`$, $`i\ne j`$, of the lattice $`x`$ is defined as the intersection of the line $`x,T_ix`$ with the line $`T_j^{-1}x,T_iT_j^{-1}x`$ (see Figure 1).
Using elementary calculations one can show the following results .
###### Proposition 2.1.
In the non-homogenous representation the Laplace transformation $`\mathcal{L}_{ij}(𝐱)`$ of the quadrilateral lattice $`𝐱`$ is given by
$$\mathcal{L}_{ij}(𝒙)=𝒙-\frac{1}{A_{ji}}\mathrm{\Delta }_i𝒙.$$
(2.2)
###### Proposition 2.2.
The Laplace transformations of quadrilateral lattices are again quadrilateral lattices, and the corresponding transformations of the coefficients of the Laplace equation (2.1) of the lattice $`\mathcal{L}_{ij}(𝐱)`$ read
$`\mathcal{L}_{ij}(A_{ij})`$ $`={\displaystyle \frac{A_{ji}}{T_jA_{ji}}}(T_iA_{ij}+1)-1,`$ (2.3)
$`\mathcal{L}_{ij}(A_{ji})`$ $`=T_j^{-1}\left({\displaystyle \frac{T_i\mathcal{L}_{ij}(A_{ij})}{\mathcal{L}_{ij}(A_{ij})}}\left(A_{ji}+1\right)\right)-1.`$ (2.4)
###### Proposition 2.3.
Under the assumption that the transformed lattices are non-degenerate, i.e. their quadrilaterals do not degenerate to segments or points, we have
$$\mathcal{L}_{ij}\mathcal{L}_{ji}=\mathcal{L}_{ji}\mathcal{L}_{ij}=\text{id.}$$
(2.5)
In this way, given a two-dimensional quadrilateral lattice $`𝒙`$, one can define a sequence of quadrilateral lattices
$$𝒙^{(l)}=\mathcal{L}_{12}^l(𝒙),\qquad l\in \mathbb{Z},\qquad \mathcal{L}_{12}^{-1}=\mathcal{L}_{21}.$$
In analogy to the Laplace sequence of conjugate nets, the above sequence can be called the Laplace sequence of quadrilateral lattices. Equations (2.3)-(2.4) can be then rewritten in the form
$`{\displaystyle \frac{\mathrm{\Delta }_2A_{21}^{(l)}}{A_{21}^{(l)}}}`$ $`={\displaystyle \frac{T_1A_{12}^{(l)}-A_{12}^{(l+1)}}{(T_1A_{12}^{(l)}+1)(A_{12}^{(l+1)}+1)}},`$ (2.6)
$`{\displaystyle \frac{\mathrm{\Delta }_1A_{12}^{(l)}}{A_{12}^{(l)}}}`$ $`={\displaystyle \frac{T_2A_{21}^{(l)}-A_{21}^{(l-1)}}{(T_2A_{21}^{(l)}+1)(A_{21}^{(l-1)}+1)}},`$ (2.7)
which is the discrete analog of the coupled Volterra system.
## 3 Projective invariants of the Laplace sequence
The planarity of elementary quadrilaterals of the quadrilateral lattice and the construction of the Laplace sequence are essentially of a projective nature. It would therefore be interesting to know the purely projective-geometric version of the equation describing the Laplace sequence of quadrilateral lattices.
The basic numeric invariant of projective transformations is the so called cross-ratio of four collinear points, which is given in the affine representation as
$$\text{cr}(a,b;c,d)=\left(\frac{c-a}{c-b}\right):\left(\frac{d-a}{d-b}\right);$$
notice the simple identity
$$\text{cr}(a,b;c,d)=\text{cr}(b,a;d,c).$$
(3.1)
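A small sketch (the sample points are arbitrary) of the affine cross-ratio and the symmetry property (3.1):

```python
# Arbitrary sample points illustrating the affine cross-ratio and the
# identity cr(a, b; c, d) = cr(b, a; d, c) of Eq. (3.1).

def cross_ratio(a, b, c, d):
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

lhs = cross_ratio(0.0, 1.0, 2.5, -0.7)   # cr(a, b; c, d)
rhs = cross_ratio(1.0, 0.0, -0.7, 2.5)   # cr(b, a; d, c)
```

Both expressions equal $`((c-a)(d-b))/((c-b)(d-a))`$, which makes the identity manifest.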
Define the function $`K_{ij}`$ as the cross-ratio of $`x`$, $`\mathcal{L}_{ij}(x)`$, $`T_ix`$ and $`T_j\mathcal{L}_{ij}(x)`$. Elementary calculations show that
$`T_i𝒙-\mathcal{L}_{ij}(𝒙)`$ $`={\displaystyle \frac{1+A_{ji}}{A_{ji}}}\mathrm{\Delta }_i𝒙,`$
$`T_j\mathcal{L}_{ij}(𝒙)-𝒙`$ $`=-{\displaystyle \frac{T_iA_{ij}+1}{T_jA_{ji}}}\mathrm{\Delta }_i𝒙,`$
$`T_j\mathcal{L}_{ij}(𝒙)-\mathcal{L}_{ij}(𝒙)`$ $`=\left({\displaystyle \frac{1}{A_{ji}}}-{\displaystyle \frac{1+T_iA_{ij}}{T_jA_{ji}}}\right)\mathrm{\Delta }_i𝒙,`$
and, therefore,
$$K_{ij}=\text{cr}(𝒙,\mathcal{L}_{ij}(𝒙);T_i𝒙,T_j\mathcal{L}_{ij}(𝒙))=\frac{A_{ji}(T_iA_{ij}+1)-T_jA_{ji}}{(1+T_iA_{ij})(1+A_{ji})}.$$
Equations (2.3) and (2.4) allow one to find the Laplace transforms of the projective invariants
$`\mathcal{L}_{ij}(K_{ij})`$ $`=T_j^{-1}\left({\displaystyle \frac{K_{ij}(T_iT_jK_{ij})}{(T_iK_{ij})(T_jK_{ij})}}{\displaystyle \frac{(T_iK_{ij}+1)(T_jK_{ij}+1)}{T_iK_{ji}+1}}-1\right),`$ (3.2)
$`\mathcal{L}_{ij}(K_{ji})`$ $`=K_{ij};`$ (3.3)
notice that equation (3.3) is a simple consequence of Proposition 2.3 and property (3.1) of the cross-ratio.
Equations (3.2) and (3.3) can be rewritten in terms of the function $`K=K_{12}`$ in the following form
$$T_2\left(\frac{K^{(l+1)}+1}{K^{(l)}+1}\right)T_1\left(\frac{K^{(l-1)}+1}{K^{(l)}+1}\right)=\frac{(T_1T_2K^{(l)})K^{(l)}}{(T_1K^{(l)})(T_2K^{(l)})},$$
(3.4)
known as the gauge invariant form of the Hirota equation.
## 4 Rotation coefficients of the quadrilateral lattice
As was shown in , it is convenient to reformulate the Laplace equation (2.1) as a first-order system. We introduce the suitably scaled tangent vectors $`𝑿_i`$, $`i=1,2`$,
$$\mathrm{\Delta }_i𝒙=(T_iH_i)𝑿_i,$$
(4.1)
in such a way that the $`j`$-th variation of $`𝑿_i`$ is proportional to $`𝑿_j`$ only (see Figure 2)
$$\mathrm{\Delta }_j𝑿_i=(T_jQ_{ij})𝑿_j,\qquad i\ne j;$$
(4.2)
the coefficients $`Q_{ij}`$ in equation (4.2) are called the rotation coefficients.
The scaling factors $`H_i`$ in equation (4.1), called the Lamé coefficients, satisfy the linear equations
$$\mathrm{\Delta }_iH_j=(T_iH_i)Q_{ij},\qquad i\ne j,$$
adjoint to (4.2); moreover
$$A_{ij}=\frac{\mathrm{\Delta }_jH_i}{H_i},\qquad i\ne j.$$
The Laplace transformation of the Lamé coefficients and the scaled tangent vectors was found in and is presented in the following
###### Proposition 4.1.
The Lamé coefficients of the transformed lattice read
$`\mathcal{L}_{ij}(H_i)`$ $`={\displaystyle \frac{H_j}{Q_{ij}}},`$
$`\mathcal{L}_{ij}(H_j)`$ $`=T_j^{-1}\left(Q_{ij}\mathrm{\Delta }_j\left({\displaystyle \frac{H_j}{Q_{ij}}}\right)\right),`$
the tangent vectors of the new lattice are given by
$`\mathcal{L}_{ij}(𝑿_i)`$ $`=\mathrm{\Delta }_i𝑿_i+{\displaystyle \frac{\mathrm{\Delta }_iQ_{ij}}{Q_{ij}}}𝑿_i,`$
$`\mathcal{L}_{ij}(𝑿_j)`$ $`={\displaystyle \frac{1}{Q_{ij}}}𝑿_i.`$
From the above formulas follow the transformation rules for the rotation coefficients.
###### Proposition 4.2.
The rotation coefficients transform according to
$`\mathcal{L}_{ij}(Q_{ij})`$ $`=T_j^{-1}\left(T_iQ_{ij}-{\displaystyle \frac{Q_{ij}T_iT_jQ_{ij}}{T_jQ_{ij}}}\left(1-(T_iQ_{ji})(T_jQ_{ij})\right)\right),`$ (4.3)
$`\mathcal{L}_{ij}(Q_{ji})`$ $`={\displaystyle \frac{1}{Q_{ij}}}.`$ (4.4)
Equations (4.3) and (4.4) can be rewritten in terms of the function $`Q=Q_{12}`$ as
$$\mathrm{\Delta }_2\frac{\mathrm{\Delta }_1Q^{(l)}}{Q^{(l)}}=T_1\left(\frac{T_2Q^{(l)}}{Q^{(l-1)}}\right)-\frac{T_2Q^{(l+1)}}{Q^{(l)}}.$$
(4.5)
## 5 Geometry of the $`\tau `$–function
To give the geometric meaning to the $`\tau `$-function let us introduce the vectors $`\stackrel{~}{𝑿}_i`$ pointing in the negative directions
$$\stackrel{~}{\mathrm{\Delta }}_i𝒙=(T_i^{-1}\stackrel{~}{H}_i)\stackrel{~}{𝑿}_i,\qquad \text{or}\qquad \mathrm{\Delta }_i𝒙=-\stackrel{~}{H}_i(T_i\stackrel{~}{𝑿}_i),$$
where $`\stackrel{~}{\mathrm{\Delta }}_i=T_i^{-1}-1`$ is the backward difference operator. The scaling factors $`\stackrel{~}{H}_i`$ (the backward Lamé coefficients) are chosen in such a way that the $`\stackrel{~}{\mathrm{\Delta }}_i`$ variation of $`\stackrel{~}{𝑿}_j`$ is proportional to $`\stackrel{~}{𝑿}_i`$ only (see Figure 3). We define the backward rotation coefficients $`\stackrel{~}{Q}_{ij}`$ as the corresponding proportionality factors
$$\stackrel{~}{\mathrm{\Delta }}_i\stackrel{~}{𝑿}_j=(T_i^{-1}\stackrel{~}{Q}_{ij})\stackrel{~}{𝑿}_i,\qquad \text{or}\qquad \mathrm{\Delta }_i\stackrel{~}{𝑿}_j=-(T_i\stackrel{~}{𝑿}_i)\stackrel{~}{Q}_{ij},\qquad i\ne j.$$
(5.1)
The backward Lamé coefficients $`\stackrel{~}{H}_i`$ satisfy then the following system of linear equations
$$\mathrm{\Delta }_j\stackrel{~}{H}_i=(T_j\stackrel{~}{Q}_{ij})\stackrel{~}{H}_j,\qquad i\ne j,$$
adjoint to system (5.1).
Since the forward and backward rotation coefficients $`Q_{ij}`$ and $`\stackrel{~}{Q}_{ij}`$ describe the same lattice $`𝒙`$, but from different points of view, one cannot expect them to be independent. Indeed, defining the functions $`\rho _i:\mathbb{Z}^2\to \mathbb{R}`$, $`i=1,2`$, as the proportionality factors between $`𝑿_i`$ and $`T_i\stackrel{~}{𝑿}_i`$ (both vectors are proportional to $`\mathrm{\Delta }_i𝒙`$):
$$𝑿_i=\rho _i(T_i\stackrel{~}{𝑿}_i),\qquad T_iH_i=-\frac{1}{\rho _i}\stackrel{~}{H}_i,\qquad i=1,2,$$
we have the following result
###### Proposition 5.1.
The forward and backward rotation coefficients of the lattice $`𝐱`$ are related through the following formulas
$$\rho _jT_j\stackrel{~}{Q}_{ij}=\rho _iT_iQ_{ji},$$
and the factors $`\rho _i`$ are first potentials satisfying equations
$$\frac{T_j\rho _i}{\rho _i}=1-(T_iQ_{ji})(T_jQ_{ij}),\qquad i\ne j.$$
(5.2)
The right hand side of equation (5.2) is symmetric with respect to the interchange of $`i`$ and $`j`$, which implies the existence of a potential $`\tau :\mathbb{Z}^2\to \mathbb{R}`$, such that
$$\rho _i=\frac{T_i\tau }{\tau };$$
therefore equation (5.2) defines the second potential $`\tau `$:
$$\frac{(T_iT_j\tau )\tau }{(T_i\tau )(T_j\tau )}=1-(T_iQ_{ji})(T_jQ_{ij}),\qquad i\ne j.$$
(5.3)
The potential $`\tau `$ connecting the forward and backward data is the $`\tau `$-function of the quadrilateral lattice .
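The existence of the potential $`\tau `$ rests on the left-hand side of (5.2) being symmetric in $`i`$ and $`j`$ once $`\rho _i=(T_i\tau )/\tau `$. A small sketch (with a random positive grid, our own illustrative choice) makes this compatibility explicit:

```python
# Sketch: for any (here random, positive) function tau on Z^2, the
# potentials rho_i = (T_i tau)/tau automatically satisfy
# T_2 rho_1 / rho_1 = T_1 rho_2 / rho_2 -- the compatibility condition
# that allows Eq. (5.2) to define a single potential tau.

import random

random.seed(0)
tau = {(n1, n2): random.uniform(0.5, 2.0)
       for n1 in range(4) for n2 in range(4)}

def rho(i, n1, n2):
    shifted = (n1 + 1, n2) if i == 1 else (n1, n2 + 1)
    return tau[shifted] / tau[(n1, n2)]

n1, n2 = 1, 1
lhs = rho(1, n1, n2 + 1) / rho(1, n1, n2)   # T_2 rho_1 / rho_1
rhs = rho(2, n1 + 1, n2) / rho(2, n1, n2)   # T_1 rho_2 / rho_2
```

Both ratios reduce to $`(T_1T_2\tau )\tau /((T_1\tau )(T_2\tau ))`$, which is manifestly symmetric in the two directions.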
Let us find the Laplace transformation of the $`\tau `$-function. Formulas (4.3) and (4.4) imply that
$$1-(T_i\mathcal{L}_{ij}(Q_{ji}))(T_j\mathcal{L}_{ij}(Q_{ij}))=\frac{Q_{ij}T_iT_jQ_{ij}}{T_jQ_{ij}T_iQ_{ij}}\frac{\tau T_iT_j\tau }{T_i\tau T_j\tau },$$
(5.4)
which, due to equation (5.3), allows for identification
$$\mathcal{L}_{ij}(\tau )=\tau Q_{ij}.$$
(5.5)
It should be mentioned here that the above formula was strongly suggested by the identification of the Schlesinger transformation of the theory of the multicomponent Kadomtsev–Petviashvili hierarchy with the Laplace transformation of conjugate nets .
###### Corollary 5.2.
The geometric meaning of $`\tau _{ij}`$ as the Laplace transformation $`\mathcal{L}_{ij}(\tau )`$ of the $`\tau `$-function applies for any dimension of the quadrilateral lattice.
Finally, equation (5.3) rewritten in terms of the $`\tau `$-function and its Laplace transformations takes the following form
$$\tau ^{(l)}T_1T_2\tau ^{(l)}=(T_1\tau ^{(l)})(T_2\tau ^{(l)})-(T_1\tau ^{(l-1)})(T_2\tau ^{(l+1)}),$$
(5.6)
which is the original Hirota’s bilinear form of the discrete Toda system.
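As a hedged numerical check of the bilinear form (with the relative minus sign as reconstructed here), one can verify a simple exponential $`\tau `$; the particular solution below is our own illustrative construction, not one appearing in the paper:

```python
# Illustrative explicit solution of the bilinear discrete Toda equation
# tau^{(l)} T1T2 tau^{(l)} = (T1 tau^{(l)})(T2 tau^{(l)})
#                            - (T1 tau^{(l-1)})(T2 tau^{(l+1)}).
# The exponential tau below is our own construction, not from the paper.

def tau(l, n1, n2):
    return 2.0 ** (-n1 * n2 + l * n1)

def residual(l, n1, n2):
    lhs = tau(l, n1, n2) * tau(l, n1 + 1, n2 + 1)
    rhs = (tau(l, n1 + 1, n2) * tau(l, n1, n2 + 1)
           - tau(l - 1, n1 + 1, n2) * tau(l + 1, n1, n2 + 1))
    return lhs - rhs

residuals = [residual(l, n1, n2)
             for l in range(-2, 3)
             for n1 in range(-2, 3)
             for n2 in range(-2, 3)]
worst = max(abs(r) for r in residuals)   # vanishes on the whole sample
```

The check works because for this $`\tau `$ both sides reduce to exact powers of two, so the residual vanishes identically on the sampled window.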
## 6 Conclusion
The geometric interpretation of the Hirota equation (the integrable discrete analog of the Toda system) is given by the Laplace sequence of quadrilateral lattices; therefore, various representations of the lattice give different versions of the equation. In this paper we presented four different forms of the Hirota equation: (i) the discrete coupled Volterra system (2.6)-(2.7) for the coefficients of the Laplace equations, (ii) the gauge-invariant form of the Hirota equation (3.4) for projective invariants of the Laplace sequence, (iii) the discrete Toda system (4.5) for the rotation coefficients, and (iv) the original form of the Hirota equation (5.6) for the $`\tau `$-function of the quadrilateral lattice.
## Acknowledgements
The author would like to thank the organizers of the SIDE III meeting for invitation and support.
no-problem/9907/hep-ph9907250.html
## 1 Motivation
The traditional choice of a Lorentz frame in which to perform jet finding on the $`e^\pm p`$ DIS final state is the Breit frame, since in such a frame $`k_\perp `$-type jet clustering algorithms preserve factorization, which is an important feature of QCD. Vectors, whether partons or calorimeter cells, used as the input to the jet algorithm have to be boosted from the laboratory frame to the Breit frame and then clustered into jets. However, boosting from the laboratory frame to the Breit frame introduces systematic errors that may affect the jet-finding results. In particular, when boosting calorimeter cells, a problem arises near the outer edges of the forward and rear sections of a cylindrical calorimeter system. This is the region where the cells are least projective radially, and the longitudinal variation in the energy deposit in these cells results in large differences in the polar angle between cells after boosting them to the Breit frame. Many methods have been tried to reduce this problem, but in the end none give a satisfactory result. (This may be one of the reasons why there are still very few published results with data analysis from HERA using the $`k_\perp `$ jet algorithm.)
Our goal is to demonstrate the existence of a new Lorentz frame, more suitable for DIS jet finding.
## 2 The Breit Frame
In order to define fully a jet clustering algorithm one needs to introduce an auxiliary vector $`\overline{p}`$ of the form
$$\overline{p}=xf(Q^2)𝒫+g(Q^2)q.$$
Here $`𝒫`$ and $`q`$ are the incoming proton and virtual photon four-momenta and $`f,g`$ are any function of $`Q^2`$. The simplest example of a suitable auxiliary vector is
$$\overline{p}=2x𝒫+q$$
with $`\overline{p}^2=Q^2`$ and $`\overline{p}\cdot q=0`$. The last equation can be used to specify a frame of reference in which the cluster resolution variables $`d_{ij}`$ are to be evaluated. For frames of reference where the virtual photon is purely space-like ($`q_0^{\prime }=0`$) there are two solutions of the equation $`\vec{\overline{p}}^{\prime }\cdot \vec{q}^{\prime }=0`$. The first solution ($`\vec{\overline{p}}^{\prime }=0`$) corresponds to the rest frame of $`\overline{p}`$, known as the Breit frame (BF) of reference.
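These two constraints can be checked numerically. The sketch below assumes HERA-like beam energies (820 GeV protons, 27.5 GeV leptons; not specified at this point in the paper) and massless kinematics:

```python
# Numerical sketch (HERA-like beam energies E = 820 GeV, eps = 27.5 GeV
# are assumptions for illustration): build the virtual-photon four-vector
# from (x, y) and check pbar = 2 x P + q satisfies pbar^2 = Q^2, pbar.q = 0.

import math

def mdot(a, b):
    """Minkowski product with metric (+,-,-,-)."""
    return a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3]

def dis_vectors(x, y, E=820.0, eps=27.5):
    """Massless proton along +z, lepton along -z; returns (P, q, Q2)."""
    P = (E, 0.0, 0.0, E)
    Q2 = 4.0 * E * eps * x * y
    q0 = y * eps - Q2 / (4.0 * eps)   # from k.q = -Q2/2 and P.q = y P.k
    qz = -y * eps - Q2 / (4.0 * eps)
    qT = math.sqrt(max(q0 ** 2 - qz ** 2 + Q2, 0.0))
    return P, (q0, qT, 0.0, qz), Q2

x = 0.01
P, q, Q2 = dis_vectors(x, y=0.4)
pbar = tuple(2.0 * x * Pk + qk for Pk, qk in zip(P, q))
```

By construction $`q^2=-Q^2`$, and the cross terms cancel so that $`\overline{p}^2=Q^2`$ and $`\overline{p}\cdot q=0`$ hold to rounding error.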
In terms of $`\overline{p}`$ the Lorentz parameters of the BF are as follows
$$\gamma =\overline{p}_o/\sqrt{\overline{p}^2},\qquad \vec{\eta }=\vec{\overline{p}}/\sqrt{\overline{p}^2}.$$
(1)
Fig. 1a shows the Lorentz factor $`\gamma `$ as a function of $`x`$ at different $`y`$. The arrow shows a unique point $`x=x_o=k/P`$, $`y=1`$ where the HERA laboratory and Breit frames coincide ($`\gamma =1`$) . Here $`k`$ and $`P`$ are the incoming lepton and proton momenta. A large variation of $`\gamma `$ with ($`x,Q^2`$) causes problems noted in Sec. 1 and discussed in .
Figure 1
## 3 The Photon Frame
The second solution of the equation
$$\overline{p}_o^{\prime }q_o^{\prime }-\vec{\overline{p}}^{\prime }\cdot \vec{q}^{\prime }=0$$
corresponds to $`\vec{\overline{p}}^{\prime }\ne 0`$ and $`\vec{\overline{p}}^{\prime }\perp \vec{q}^{\prime }`$ at $`q_o^{\prime }=0`$. In general form the Lorentz parameters of the new frame, called the Photon frame of reference, are
$$\gamma =\ell _o/\sqrt{\ell ^2},\qquad \vec{\eta }=\vec{\ell }/\sqrt{\ell ^2}$$
(2)
with
$$\ell =(\sqrt{q_o^2+Q^2},\;q_o\vec{q}/\sqrt{\vec{q}^{\,2}})$$
and $`\ell ^2=Q^2`$, $`\ell \cdot q=0`$.
Here we would like to enumerate some properties of the new frame. From (2) one sees that the laboratory and Photon frames are connected by a boost along the direction of the momentum transfer vector $`\vec{q}`$. Fig. 1b shows the Lorentz factor (2) as a function of $`x`$ at different $`y`$. At $`x>10^{-3}`$ $`\gamma _{Ph}`$ depends on the ($`x,Q^2`$) values in a very different way compared with $`\gamma _{Br}`$ in Fig. 1a. In the range $`10^{-2}<x<10^{-1}`$ the Photon frame (PF) is very close to the HERA frame even though $`Q^2`$ varies significantly. At $`x=x_o`$ (the arrow in Fig. 1b) the PF coincides with the HERA frame along a line in phase space, independent of the $`Q^2`$ and $`y`$ values, and the virtual photon is purely space-like in the laboratory frame of reference.
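These properties can be illustrated numerically with the Lorentz factors of Eqs. (1) and (2). The sketch below assumes HERA-like beam energies (an illustrative choice, not stated here in the text):

```python
# Sketch of the two Lorentz factors, Eqs. (1) and (2), for HERA-like
# beam energies (E = 820 GeV, eps = 27.5 GeV are assumptions).

import math

def lorentz_factors(x, y, E=820.0, eps=27.5):
    Q2 = 4.0 * E * eps * x * y
    q0 = y * eps - Q2 / (4.0 * eps)       # photon energy in the lab
    g_breit = (2.0 * x * E + q0) / math.sqrt(Q2)        # Eq. (1)
    g_photon = math.sqrt(q0 ** 2 + Q2) / math.sqrt(Q2)  # Eq. (2)
    return g_breit, g_photon

x0 = 27.5 / 820.0                 # x_o = k/P: both frames meet the lab frame
g_br0, g_ph0 = lorentz_factors(x0, 1.0)       # both gammas equal 1 here
g_br, g_ph = lorentz_factors(0.05, 0.5)       # moderate x: PF near the lab
```

At $`x=x_o`$, $`y=1`$ both factors are unity, while at moderate $`x`$ the Photon-frame boost stays close to unity even when the Breit-frame boost does not.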
Deep inelastic lepton-nucleon scattering in the PF is described in the parton model (zeroth order QCD) by the space diagram in Fig. 2a. An auxiliary angle between the scattered lepton and quark is denoted as $`\alpha `$. Angles $`\delta ,\theta ,\xi `$ and $`\alpha `$ relate to $`q_o`$, $`Q^2`$ and the incoming lepton and proton energies, $`ϵ`$, $`E`$, as follows
Figure 2
$$cos\delta =\frac{\sqrt{Q^2+q_o^2}}{2ϵ-q_o},\qquad cos\xi =\frac{\sqrt{Q^2+q_o^2}}{2xE+q_o},$$
$`(3)`$
$$cos\theta =1-2cos^2\xi ,\qquad cos\alpha =1-\frac{2}{y}cos\delta \,cos\xi $$
$`(4)`$
with
$$q_0=\frac{(k-xP)Q^2}{2x(kE+ϵP)}\simeq (k-xP)y.$$
Due to the relations (4) between $`\delta ,\theta ,\xi `$ and $`\alpha `$ there are only two independent angles. Fig. 2b shows the variation of these angles with $`x`$ at $`y=0.5`$. At $`x<10^{-3}`$ the BF and the PF are very close to each other ($`\theta \approx \pi `$, $`\xi \approx 0`$). Direct comparison of (1) and (2) also confirms this conclusion, since at small $`x`$ one has $`\gamma _{Ph}\approx \gamma _{Br}\approx q_o/Q`$ and $`\vec{\eta }_{Ph}\approx \vec{\eta }_{Br}\approx \vec{q}/Q`$. In the parton model the line $`x=x_o`$ has a special significance, since the incoming and outgoing $`e^\pm `$ and parton all have the same energy and back-scatter off each other ($`\alpha =\pi `$).
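A short sketch of Eqs. (3)-(4), with the minus signs as reconstructed here, confirms the back-scattering configuration at the special point (HERA-like beam energies are assumed; $`k=ϵ`$ and $`P=E`$ in the massless limit):

```python
# Sketch of Eqs. (3)-(4), with the reconstructed minus signs, checking
# the special point x = x_o, y = 1 where alpha = pi (back-scattering).
# HERA-like beam energies are assumptions; k = eps, P = E (massless).

import math

def parton_model_angles(x, y, E=820.0, eps=27.5):
    Q2 = 4.0 * E * eps * x * y
    q0 = (eps - x * E) * y                # massless limit of the q_0 formula
    cos_delta = math.sqrt(Q2 + q0 ** 2) / (2.0 * eps - q0)
    cos_xi = math.sqrt(Q2 + q0 ** 2) / (2.0 * x * E + q0)
    cos_theta = 1.0 - 2.0 * cos_xi ** 2
    cos_alpha = 1.0 - (2.0 / y) * cos_delta * cos_xi
    return cos_delta, cos_xi, cos_theta, cos_alpha

cd, cx, ct, ca = parton_model_angles(27.5 / 820.0, 1.0)   # x = x_o, y = 1
```

At this point $`q_0=0`$, so $`\delta =\xi =0`$ while $`\theta =\alpha =\pi `$, in line with the back-scattering picture above.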
We point out that jet finding in the PF preserves factorization. Careful analysis of the examples given in shows that, in terms of the vector $`\overline{p}^{\prime }=2x𝒫^{\prime }+q^{\prime }`$ in the PF, the $`k_\perp `$-type resolution variable $`d_{ij}`$ has the same form as Eqs. (25)-(26) of Ref. . Compared with the PF, the current hemisphere of the BF is, due to the static geometry, dominated by the fragments of the struck quark. This makes comparisons of multiplicities in $`e^+e^{-}`$ and the current region of $`e^\pm p`$ easier in the BF. On the other hand, to perform DIS jet finding in $`e^\pm p`$ it is preferable to use the PF, because it reduces the above-mentioned problems.
## 4 Conclusions
A new Lorentz frame, called the Photon frame, with a purely space-like virtual photon has been found. Many features of the Photon frame make it attractive for jet finding. In the kinematical region of interest for DIS jet studies the boosts are small, substantially reducing the systematic errors.
Acknowledgments. I would like to thank the DESY Directorate and the organisers of this meeting for financial support, and to acknowledge friendly assistance and discussions with E. De Wolf, J. Hartmann, R. Klanner and G. Wolf. I am grateful to P. Bussey for reading the manuscript and for his comments. This work was supported in part under DFG grant #436 RUS 113/248/1.